I'm sorry, wasn't it... I thought it was about an operator? Okay, so welcome. This is Edith and she will give an introduction to Kubernetes operators.

Okay, is this working? Can you hear me? Yeah? Okay, that's nice. Good morning everyone. I'm really happy to be here again. Last year I was here shaking with nerves because it was my first talk in English. I've improved since then, so it's going to be better, I hope. Thank you so much for this opportunity, and thank you all for making this happen.

In this talk I will talk about Kubernetes operators. Have you heard about Kubernetes operators before? Okay. We will do a little introduction to containers, then Kubernetes, then we will see how we deploy an application in Kubernetes, and then we will do an intro to Kubernetes operators.

To start, this is me. You can call me Joseri. I am a Technology Evangelist at Percona. I also got the UK Global Talent Visa, and for that reason I moved to the UK last year. I am a Cloud Native Computing Foundation Ambassador, organizing events in my city, Lima, in Peru. I'm a Docker Captain, also organizing meetups about Docker. And I'm an open source contributor: I contributed to Apache Airflow in the past, and now I'm translating the documentation on the Kubernetes website from English to Spanish, to make it more accessible for people who speak Spanish. So if you want to talk about any of this, you can find me there or in building K. I'm happy to share this with you.

Okay, I already mentioned the agenda. Today we will talk about Kubernetes operators, but first we will start with containers, because before talking about Kubernetes or Kubernetes operators, the fundamental piece is containers. We all know that a container is a process running on top of our operating system: we need a container runtime sitting on top of the host operating system, and on top of that our application runs as an isolated process, together with all the libraries and dependencies the application needs. So you already know what a container is, right? Yeah, and there is even a container here in the room, so that's good.

But with containers, if you are using them, you may have run into some challenges. For example, orchestration: you are not running two or three containers anymore, you are running thousands if you have a big application, and you need to orchestrate them. That is a challenge with containers alone. Then we also have to secure container images and manage their vulnerabilities, and we have to handle network security, access, and authentication, not for one container but for thousands of them. With many containers running, we also need to see what is happening in real time: which container is failing, which one we need to restart. We have all these metrics and all this data, and we need tools that help us visualize errors, diagnose problems, and improve performance. There is also a problem with scalability: with containers alone we don't have a tool that helps us scale quickly, from one container to ten or to thousands, depending on the demand on our application. And finally, managing data storage in containers makes things a little more complex.
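As a minimal sketch of that isolated-process idea (the image and commands here are only illustrative, they are not from the talk), running a throwaway container with Docker looks roughly like this:

    # start an interactive, disposable container from a small public image
    docker run --rm -it alpine:3.19 sh
    # inside the container, the process tree is isolated from the host
    ps aux
    # the image also carries the libraries and dependencies the process needs
    exit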
So what are the advantages of Kubernetes? With Kubernetes, we automate all the processes we just talked about and we reduce manual intervention, and we don't do this for just one container, but for all of them.

When we look at Kubernetes out in the world, we see the web page and there are a lot of concepts to learn, a lot of terminology; it feels like we never finish learning Kubernetes. But for this presentation we are going to focus on three main components: pods, deployments, and services. The pod is the basic unit of Kubernetes: inside a pod we have one or more containers, and the containers in a pod share network and storage. We also have deployments to deploy our application: that is where we set the desired state and the number of replicas our application is going to have. And we have services to give access to the pods and make them reachable from other pods.

For this example, we are going to see visually how we deploy an application in a Kubernetes cluster, using this example voting app. We have a voting app, a web application on the right where you can just vote between cats and dogs, and on the other side another web application that shows the result of that voting. So we have two web applications here, but behind them there are many more things to run in the cluster, not just two applications. If we containerize this application, we don't just have the front end, the part visible to the user, which is the voting app and the result app. Behind them we also have Redis, for example, to keep the data we submit in the web application in temporary, in-memory storage. We have a back-end worker that is written in .NET (this is just an example), and we have the database, Postgres, which also runs in a container. And the result app, the application written in Node, gets the data from the database and exposes the result of the vote.

Here, if we want to run this in a Kubernetes cluster, we need to identify the connection ports. For example, if you want to access the voting application, where we vote between cats and dogs, we need to identify how we are going to reach it, so a port, say port 80, should be open. It could be another port, this is just an example. The same for the result app: we also open a port so the user can access that application and see the results. But inside the stack we also have to identify which application talks to which. In the case of Redis, for example, the voting application is going to save its data in Redis, and the back-end application is going to access Redis to get the data and process it, and after that write it to the database. So the database has to have a port open, and the same with Redis: it has to open a port to make it reachable from the other applications.

Now we talk about services. After you identify the port where each application is going to listen, you have to define services. A good question to ask before doing this is: which applications are going to access me? For Redis, for example: am I going to be used by another application? Yes, so I need to create a service there. The same with the Postgres database: I need to create a service. And the same with the two applications at the top: we need to create services for them so they can be reached by the user.
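As a rough sketch of that "who needs to reach me" step (the names, labels, and image here are illustrative, not the talk's actual manifests), the Redis piece could be described as a Pod running the Redis container plus a Service that makes it reachable from the vote and back-end applications inside the cluster:

    apiVersion: v1
    kind: Pod
    metadata:
      name: redis
      labels:
        app: redis              # the label the Service selects on
    spec:
      containers:
        - name: redis
          image: redis:7-alpine
          ports:
            - containerPort: 6379   # the port Redis listens on
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: redis
    spec:
      selector:
        app: redis              # route traffic to pods with this label
      ports:
        - port: 6379
          targetPort: 6379

With a Service like this, the vote and back-end containers can reach Redis simply as redis on port 6379 from inside the cluster.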
Now we talk about deployments. After knowing which ports we are going to open and what our containers and images are, we need to deploy our application, and we decide how many replicas of it we want. In this case, I have the voting app and the result app each deployed with the replicas set to three, as you can see.

On this side we have the control plane, which is the brain of the cluster. We can access the control plane through a user interface and also with kubectl, typing commands. In the control plane we have the API server, which is the front end of Kubernetes: it receives requests, processes them, and saves the resulting state into the database of Kubernetes, which is etcd. We also have the scheduler: the scheduler assigns pods to the cluster nodes, trying to find which node is right for the pod I have. We have the controller manager, which runs several controllers for the objects we have in Kubernetes to monitor the health of the cluster, and it keeps reporting on that health. And we have etcd, which is the database of Kubernetes. On the other side, on the worker nodes, we have three components: the kubelet, the container runtime, and kube-proxy. The kubelet takes the instructions and starts, stops, and monitors the containers. The container runtime, which could be Docker or another runtime, is what actually creates and runs the containers. And kube-proxy facilitates the network communication between the pods we have on the worker nodes.

Okay, let's go to Kubernetes operators now. We saw the YAML where we define the replicas, the thing we can scale. Running and scaling stateless applications is easy: we can just write a command like kubectl scale, set the deployment to four replicas, and scale it up. It's easy to scale these kinds of applications because Kubernetes was made for stateless applications, so it's easy to do.
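To make that stateless case concrete, a deployment like the one just described might look roughly like this (the names and image are placeholders, not the talk's actual files):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: vote
    spec:
      replicas: 3                  # desired state: three identical pods
      selector:
        matchLabels:
          app: vote
      template:
        metadata:
          labels:
            app: vote
        spec:
          containers:
            - name: vote
              image: example/vote:latest   # placeholder image
              ports:
                - containerPort: 80

Scaling it up is then a single command, because the pods are interchangeable:

    kubectl scale deployment vote --replicas=4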
But how about applications that store data? Do you think those are as easy to scale? In a way, yes: we also saw that we have Postgres in a container, in a pod, and we can define a YAML file and scale it too. That part is easy as well. So where is the problem when we want to run stateful applications in Kubernetes? The real problem is running a database over time. We can deploy a database in Kubernetes, we can get it running; I did it, many of you did it. But what happens over the long run, over the whole life cycle of a database? This is where the real problem comes up, because we have to handle backups, upgrades, recovery, replication, and many other things that belong to the database itself. Kubernetes was originally built for stateless applications, but when we talk about databases, or any application that needs to keep the user's data, we need something more. When we talk about running a database over a long period of time, we are talking about day two of the application life cycle: day zero and day one are planning, development, and installation; day two is operations. This is where Kubernetes operators come in: they extend Kubernetes with custom resource definitions and custom controllers that know how to operate a specific application.

This is an example of a custom resource definition, where we define a new kind of object in Kubernetes. We saw the default objects Kubernetes has: deployments, pods, services. But if we want to integrate a new one of our own, for example one called CronTab, we have to define it in a custom resource definition, and we also have to define the behavior of this new kind: what is it going to do, what is its main purpose? So we define it in this custom resource definition; it is called custom because I am customizing this file to add a new kind and integrate it into the Kubernetes cluster. And then there is the custom resource with its behavior. For example, this is a very simple one where I define the kind, and the purpose of this new object is to use a given image and do something every five minutes. This is the new type of object I want in my Kubernetes cluster. But you can have bigger things, right? Make a deployment, make a backup, set up replication. So it can be huge, not as simple as this. And after you apply this new custom resource definition, you are able to run kubectl get crontab, like for any normal object we have in Kubernetes, and you will see the results. When this is done, you are able to use this new type across your whole Kubernetes cluster.

Okay, we talked about the custom resource definition; now we will talk about the custom controller. The controllers that Kubernetes has by default are simple as a concept: a controller tries to find the difference between the desired state and the current state of our cluster, and tries to reconcile that difference. The custom controller that we create for a Kubernetes operator is quite similar.

Summarizing, how does it work in Kubernetes without operators? We have a user who writes a YAML for an object we already know, say a deployment. After applying it, the deployment creates the pods we asked for. In this whole process we have the control loop, which keeps looking for the difference between my cluster, the current state, and the desired state, and tries to match them: okay, you are telling me you want three pods, but I only have two pods, so it takes action to fix this automatically.

And how is it with operators? To run Kubernetes operators, we need to install two things in our cluster: the Operator Lifecycle Manager, which handles and watches the whole life cycle of our Kubernetes operators, which is very important, and the custom resource definition we created at the beginning as a template, together with the controller that does that matching between the desired state and the current state, but for our custom resource. In this case, the user writes a custom resource with a new type. It is not a type we already know, it is a type we created: MyApp, for example, is a type we created with a custom resource definition, and we apply it to the cluster. Inside the cluster we still have the normal control loop that Kubernetes has, and the operator's controller keeps watching, looking for differences, and acts to reconcile them in my cluster.
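As a concrete version of that CronTab example (adapted from the kind of CRD shown in the Kubernetes documentation; the group, fields, and image are illustrative), the custom resource definition and a custom resource of the new kind could look roughly like this:

    # the CRD: teaches the API server about a new kind called CronTab
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: crontabs.stable.example.com
    spec:
      group: stable.example.com
      scope: Namespaced
      names:
        plural: crontabs
        singular: crontab
        kind: CronTab
        shortNames:
          - ct
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              properties:
                spec:
                  type: object
                  properties:
                    cronSpec:
                      type: string
                    image:
                      type: string
    ---
    # a custom resource of the new kind: run the given image every five minutes
    apiVersion: stable.example.com/v1
    kind: CronTab
    metadata:
      name: my-crontab
    spec:
      cronSpec: "*/5 * * * *"
      image: my-cron-image       # placeholder image name

Once the CRD is applied, kubectl get crontab works just like it does for the built-in objects.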
And if you are wondering how to create a Kubernetes operator yourself, you can do it. There is a very nice guide, the Operator Framework, which gives you an SDK to start creating Kubernetes operators and all the steps to make it possible. And if you are wondering where you can find existing Kubernetes operators, if this is your first time hearing about this, there is also OperatorHub, similar to Docker Hub where we have all the images; on OperatorHub you can find all the operators you want. You can use these operators to add things that your application in a Kubernetes cluster maybe is not doing right now. Maybe something is missing, maybe there is something you are still doing manually in your Kubernetes cluster. If you are doing something manually, it can probably be automated, and you can find tools here to automate it and make your cluster and the applications running in Kubernetes even more efficient.

And like any application, operators also have a maturity model, the capability levels. Kubernetes operators have five maturity levels. Level one covers just the installation, and level two covers upgrades. Level three covers the complete life cycle of the application our operator manages. Level four goes deeper, with insights into the workload, and the last level is full auto pilot, where the operator does almost everything by itself. So the applications we run with Kubernetes operators move up through these capability levels in this order. There are many Kubernetes operators that are already at level four, and some maybe arriving at level five. If you go and look, for example, at the Kubernetes operator for MySQL based on Percona XtraDB Cluster, you will see on all of these operators the capability level they have, which tells you they have a certain maturity built in and whether you can rely on them.

Okay. If you are wondering how you can work with Kubernetes operators at Percona: we have Kubernetes operators that are completely open source, the Percona Kubernetes operators for databases, for MySQL, MongoDB, and PostgreSQL. Feel free to use them and also to collaborate, because they are open source. And if you feel this is very complex to use, if you don't want to handle a lot of YAMLs, scripts, and many other things, we also have Percona Everest, which you can use with a graphical interface, and with it you can start creating your database clusters in Kubernetes. And if you have questions, you can reach us in our community forum.
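For anyone who wants to try the Operator SDK mentioned earlier, the Go-based quickstart looks roughly like this; the domain, repository, and kind below are placeholders, and the exact commands and flags may differ between SDK versions, so check the Operator Framework documentation:

    # scaffold a new operator project (placeholder domain and repo)
    operator-sdk init --domain example.com --repo github.com/example/memcached-operator
    # add a new API: a custom resource definition plus a controller skeleton
    operator-sdk create api --group cache --version v1alpha1 --kind Memcached --resource --controller
    # after filling in the reconcile logic, build and deploy the operator
    make docker-build docker-push IMG=example.com/memcached-operator:v0.0.1
    make deploy IMG=example.com/memcached-operator:v0.0.1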
Also, we are having a raffle in building K, maybe later today or tomorrow; the raffle is going to be tomorrow at 2 pm. We are raffling this Lego set, so feel free to go and scan the code. Good luck! So that's all, thank you so much.

Good presentation. Could you hear me? No, the mic is not working, I think. Sorry, who asked the question? Yes, sorry. For the slides I use Canva, C-A-N-V-A. Yeah, I'll try that. And for that arrow, I use Photoshop to make it move, and then I import it into Canva. Oh, it's complicated without the mic. Thank you. Thanks.