Thank you so much, everyone. Is my audio on, can everyone hear my voice? Perfect. So I will be talking about the external Rook Ceph cluster. Some people might not be aware of what Rook is, what Ceph is, and what topic I'm talking about, so I'll give an intro first and then we'll deep dive into it. I am Parth Arora, I work for IBM Storage as a software developer, I work closely with the Rook operator, and I'm one of the lead approvers of that repo.

Anybody might be building applications on Kubernetes that store data, but as everyone knows, Kubernetes by design doesn't say much about data. Still, whatever application we run, there is storage behind it and we need to store data somewhere. Today, if someone wants to store the data coming from a Kubernetes application, we mostly just talk to a cloud provider, and we don't really know what they are doing at the back end. So why not bring the storage into your own data center? Traditionally there are also limitations with cloud providers, like the number of nodes and how you can scale across different AZs. So why not have a native solution with your own Kubernetes cluster, and manage that storage the same way your Kubernetes applications are managed natively?

Here comes the most trusted platform, Ceph, which can make that storage available to you in a Kubernetes-native way. But first, why Ceph? It has been a trusted platform for enterprises for the past ten years. It provides all the storage types at the same time, block, file, and object, you name it, and it's open source. It has so many features: it's resilient, it's configurable, it's disaster proven, it's consistent, and it keeps your data safe.

But Ceph was designed even before Kubernetes was introduced. To bring that storage into the Kubernetes world, Rook was designed; that's how it was born, to bring Ceph to Kubernetes. It's the management layer: it helps in installing Ceph and managing its state. It's not just some scripts that install it; it actually manages the state. It's a Kubernetes operator, so it works the way Kubernetes reconciles desired state against actual state. And how do we define what we need, the desired state? That's the CRDs. We can give any definition there and Rook will configure it: I need monitoring, I need block storage, these kinds of things we can express through CRDs.

This is how the Rook Ceph architecture looks: Rook is the management layer which installs Ceph, the Ceph CSI driver helps in mounting the storage into your application pods the native Kubernetes way, and Ceph is the data layer. Rook works in two modes. The first is what I talked about, bringing the storage into the same cluster your applications run in; that is converged mode, and it's recommended if you just have a single cluster. The other is external cluster mode, which is what I'll be talking about. So first of all, what is an external cluster?
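To give an idea of what such a CRD looks like, here is a minimal sketch of a converged-mode CephCluster resource, roughly following Rook's published examples. The image tag, namespace, and field values are illustrative and will vary with your Rook and Ceph versions:

```yaml
# Minimal sketch of a converged-mode CephCluster CR (illustrative values).
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph             # namespace the Rook operator watches
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18   # Ceph container image; pick your version
  dataDirHostPath: /var/lib/rook   # where Rook keeps configuration on each node
  mon:
    count: 3                       # three monitors for quorum
  dashboard:
    enabled: true                  # "I need monitoring" expressed as a field
  storage:
    useAllNodes: true              # consume devices on all nodes
    useAllDevices: true
```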
An external cluster means you already have a Ceph cluster installed, with the Ceph daemons running: the mons, OSDs, RGW, MDS, these are the Ceph terminologies for the daemons that store your data. It's already there, and now comes the part where I'm in the Kubernetes world and I need to connect this Ceph cluster to Kubernetes. How will you do that? There is a separate Kubernetes cluster running, in which there is the Rook operator and the Ceph CSI driver, and we need to provide the details of the external Ceph cluster to Kubernetes. How that magic happens is the part I'll be getting to.

First of all, why would you need an external cluster? As I said, you can run the cluster natively alongside your applications, so why external? You might be a big enterprise with different Kubernetes clusters for different domains, say an admin cluster and a finance department cluster, and you need isolation between them, they shouldn't talk to each other, while you want the same underlying storage. You can use an external cluster that way: a standalone external Ceph cluster connected to different Kubernetes clusters, each isolated in what data it stores and able to access only its own data; internally the credentials are mapped per cluster to keep your data safe. That's one use case. Another is when you are already using Ceph and you want to come into the Kubernetes world. And a third is when you need complete isolation of the data layer, when someone wants to keep the storage totally separate. The external cluster provides all three types of storage, block, file, and object, with no restriction there.

So that's why you'd use an external Ceph cluster. Now comes the part of how we install it and how it works internally. As I showed in the diagram, we have to grab the information from the standalone cluster and give it to the Kubernetes cluster, and this is how it's done. There is the provider cluster, the standalone one where the daemons are running. The cluster admin scrapes the data from it and, once the data is scraped, provides it to the consumer cluster, the Kubernetes cluster that is running. From that data it creates some secrets and config maps with the details we collected first, like the monitor IP addresses and the other details we need. Once that's done, the Rook operator runs in external mode this time, creates the Rook resources and the storage class, and then we can create PVCs and mount those PVCs into your application pods.

That's the workflow. And how is the scraping done? There is a Python script; we give it some CLI flags, and these are user defined, things like the RBD data pool name or the RGW endpoint. If we give the specific name of the pool, the Python script runs some Ceph commands and fetches the data.
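As a rough illustration, an invocation of that export script looks something like the following. The script and flag names follow Rook's external-cluster documentation (deploy/examples/create-external-cluster-resources.py), but check them against the Rook version you are running; the pool and namespace names here are just examples:

```bash
# Run on a host that can reach the external (provider) Ceph cluster.
# Flag names follow the Rook external-cluster docs; verify them for your Rook version.
#   --rbd-data-pool-name : an existing RBD pool (name here is an example)
#   --namespace          : namespace the consumer cluster will import into
#   --format bash        : print the results as `export VAR=...` lines
python3 create-external-cluster-resources.py \
  --rbd-data-pool-name replicapool \
  --namespace rook-ceph-external \
  --format bash
```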
It fetches things like the monitor endpoints and other details and hands them over for the Kubernetes cluster. Once we have that exported data, we run the import script to import it into the Kubernetes cluster, and then deploy the manifests, the CRDs, the definitions of how we want to configure our external cluster. I will show this in the demo. Once that's done, we verify the connection: with the cluster running, we can just get the CephCluster resource and see that it's in the connected state with health OK. Then we can go and create the storage class and pool and create PVCs, depending on what kind of storage you need, CephFS, RBD, or RGW.

And now comes the time for the demo. Everything breaks live, so I have recorded the demo. Is it visible? This is the standalone Ceph cluster, where we can see the health is OK, and there are some pools already there, RBD pools. These are some native Ceph commands we can run, and here we list the CephFS file system and its pools. Now there is the Python script; I run it with some CLI flags, and you can see I get the exported data. So this was the external cluster; I export this data and then take it to the second cluster, the Kubernetes cluster. I copy it over, and in this terminal minikube is running; I have sourced the exported values, and now I run the import script, which uses those exported values and creates the config maps and secrets for Kubernetes.

After that I install the manifests, the CRDs and definitions. I'm using the example folder of Rook, which already has some configurations defined, so I'm just using those for now. There is crds.yaml; there is common.yaml, which has the RBAC, the specific permissions I need to give; operator.yaml, the Rook operator file; cluster-external.yaml, the external CephCluster resource that I will create on the Kubernetes side in Rook; and there is also common-external.yaml, which creates a separate namespace if we want to keep the Ceph cluster in a different namespace. In this demo I'm using a separate namespace for the Ceph cluster and another namespace in which the Rook operator resides.

Now I have created all the manifests and I'm waiting for the CephCluster to come up. The CephCluster is created; this is its internal YAML if you want to see it, and when I describe it you can see it is ready. The main thing is to wait for the Rook operator to get started. These are all the pods that will start: the Rook operator and the Ceph CSI plugin daemons that are used to mount the data, and that's all on this side. The other daemons, the mons and OSDs where our data resides, are already running in the external Ceph cluster. So the connection is good, as I showed in the previous slides, and the storage classes have also been created; we can create others if we need them.
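Condensed into commands, the setup sequence from the recorded demo looks roughly like this. The file and script names come from Rook's deploy/examples directory and the namespace is the one used in the demo, so treat this as a sketch and check it against your Rook version:

```bash
# On the consumer (Kubernetes) cluster, with the exported variables sourced:
. import-external-cluster.sh                # creates the secrets and config maps

# Deploy the Rook CRDs, RBAC, and operator, then the external cluster definitions
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f common-external.yaml -f cluster-external.yaml

# Verify the connection: the CephCluster should report Connected / HEALTH_OK
kubectl -n rook-ceph-external get cephcluster
```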
Now in this part of the demo I'll show how we can create an RBD PVC, store data in it, and see how Ceph replication keeps that data safe through a disaster.

Once we have the storage classes, ceph-rbd and rook-ceph-block, which both belong to the block pool, I create this MySQL example, which creates a MySQL pod, a PVC, and a service for it. Making use of that Ceph RBD storage class, I create the PVC and mount it into the MySQL pod, and there is a service if you want to reach MySQL from outside minikube. The PVC is created and bound, the pod, wordpress-mysql, is still being created, and the service is created too. The NodePort service is only needed if you want to reach the application from outside, since I'm on minikube; the main thing here is the data, so let's focus on that.

Once the pod is up and running, I go inside it with kubectl exec and change to the mount path I specified when creating the pod, the path where the PVC is mounted and where the data is written. There I create a file, demo.txt, saying "I will be persisted because of Ceph replication". Now the file exists inside the mount path, and I go and delete the pod, as if there were a disaster and this pod went away. Because it is a deployment, Kubernetes recreates it, possibly on another node; you can see the pod was recreated nine seconds ago. I go inside it again, to the exact same path, /var/lib/mysql, where the file should still exist, and if I cat it, demo.txt is still there. Even though the pod was gone, the file still exists. And that's all with the demo.
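For reference, the PVC side of that demo looks roughly like the sketch below. The claim name and size are illustrative (the demo uses Rook's wordpress/mysql example manifests), and the storage class name should match whatever your external cluster setup created:

```yaml
# Sketch of a block PVC like the one mounted into the MySQL pod (illustrative names).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: rook-ceph-block   # or ceph-rbd, depending on your setup
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

And the persistence check, in rough command form (pod names are whatever the deployment created):

```bash
kubectl exec -it <mysql-pod> -- sh -c 'echo "I will be persisted because of Ceph replication" > /var/lib/mysql/demo.txt'
kubectl delete pod <mysql-pod>                       # the deployment recreates the pod
kubectl exec -it <new-mysql-pod> -- cat /var/lib/mysql/demo.txt
```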
Quickly back to the slides. These are some new features we have added: RADOS namespaces, IPv6 support, and RGW multisite. We are also going to add some more: replica-1 support, topology awareness, and improvements to the documentation. These are some community links, and this is my LinkedIn if someone wants to connect. Thanks, and any questions?

Question from the audience: what credentials do you need to extract all that information from the Ceph cluster? Do you need to be an admin user, or can you also be a regular user? If you don't want to expose admin credentials, can you do the export as a user with just enough privileges to gather that information and bring it into your own cluster?

So the question is what permissions are needed when we export the data. The client.admin user is what actually reads the data and gives it to the Kubernetes cluster, but we keep the permissions minimal. We could hand over the admin keyring, but we only give the minimum permissions needed to read and write, because we don't allow Rook to create anything on this Ceph cluster; we just allow it to read the data and consume it. That is the minimum set of permissions Ceph CSI requires, and that's all we expose.

Anyone else? Okay, I guess that's it. Thanks, thank you all.
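To make that answer concrete: the export script creates a set of restricted Ceph users on the provider cluster rather than handing the admin keyring to Kubernetes. The user names below follow the Rook external-cluster documentation, but the exact set and their capabilities depend on the Rook version and the flags you pass, so inspect your own cluster:

```bash
# On the provider Ceph cluster: look at the restricted users the export script
# created and their capabilities (names are the documented defaults, not guaranteed).
ceph auth get client.healthchecker
ceph auth get client.csi-rbd-provisioner
ceph auth get client.csi-rbd-node
```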