Okay, I think we can start already. Hi everybody, I'm Diego Barreiro. I'm one of the open source contributors to the MIT App Inventor project, and today I'm going to be talking about App Inventor and how we can introduce artificial intelligence to kids using this platform.

But before getting into it, I would like to introduce myself a little bit more. I first started coding with App Inventor when I was 14 years old. It was 2013 at that time, and I basically wanted to build an app. I didn't know anything about coding, and a high school teacher showed me this amazing platform. I spent the next couple of years building apps with App Inventor, eventually switched to Java coding, and later on I was able to contribute to the project, as I'm doing right now.

So what is MIT App Inventor? MIT App Inventor is an online platform that enables anybody to build any kind of application without having to learn a programming language like Java or Kotlin, which are nowadays the most popular ones. This is the interface: it has the mock phone in the center, and on the left side we have the components, the elements that make up the app, like buttons, labels, text boxes and text areas, any kind of interaction that the user will have with the app. Then we can customize the properties, like colors, text fonts and text sizes, on this panel, so we can make the app look the way we want for the final user.

And how does the logic work? Well, most of you may know about Scratch. App Inventor works somewhat like Scratch, using a block language. Let's say that we want to play a sound when the app opens: we use a block that says "when Screen1 has opened", and inside it we play a specific sound (there is a small textual sketch of this idea just after this introduction). That's how we can build any kind of application.

MIT App Inventor also allows existing Android developers to introduce new components using extensions. Today we will be using one of those extensions, developed by a research project at MIT, which makes it possible to classify images into different groups using artificial intelligence.

To give some numbers about App Inventor: it started in 2008 as a Google project, and a few years later it was transferred to MIT. It has gathered over 18 million users since it was transferred to MIT, with nearly 85 million apps developed, and on a monthly basis we get roughly one million users. In terms of open source contributions, we have seen 164 different contributors to the project on GitHub.

So today I'm not going to be giving the classic talk.
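To make that block example concrete in text form, here is a rough Python sketch of the same event-driven idea. The Screen and Player classes, the names Screen1 and Player1, and the callback wiring are purely my own illustration; App Inventor itself uses visual blocks, not code like this.

    # Illustrative sketch of "when Screen1 opens, play a sound".
    # These classes and names are assumptions, not a real App Inventor API.
    class Player:
        def __init__(self, source):
            self.source = source
        def start(self):
            print(f"playing {self.source}")

    class Screen:
        def __init__(self):
            self.on_initialize = None       # event slot, like the "when Screen1 opens" block
        def open(self):
            if self.on_initialize:
                self.on_initialize()        # fire the event handler

    Screen1 = Screen()
    Player1 = Player("hello.mp3")
    Screen1.on_initialize = Player1.start   # "when Screen1 opens, play the sound"
    Screen1.open()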
Instead, I'm going to be showing a tutorial, and people in the audience and at home can follow it by visiting the following link and building the app along with me. What we will be doing today is building the Peekaboo app. Peekaboo is a game that is usually played with babies: when the baby sees you, the baby laughs, and when you hide yourself, it cries.

To show the final result, let me just switch to my phone; I'm going to be mirroring this phone. I open the final app and start the camera. Here I am: I'm looking at the baby, and the baby is happy. If I hide myself, it just cries. So let's get into it.

So how can we use MIT App Inventor? The standard instance of App Inventor is hosted at ai2.appinventor.mit.edu, but that requires an account. So MIT has created a specific instance, code.appinventor.mit.edu, that allows you to create projects without any existing account. We will be using an anonymous account, so it can be cleaned up after we finish. We will receive a code like the following one (it is blurred here, but you will be able to see yours), and on the previous screen, if you want to recover the project later on, you can just paste the code that you previously got right here. Once that is done, you will be able to access an anonymous account, as you can see over here, and you can just start creating the app.

So let's get into it. We can visit fosdem23.appinventor.mit.edu and click on this link. This link points to the code.appinventor.mit.edu instance and loads a template project for the Peekaboo app. I'm going to click on it, then click on "continue without an account", and just wait a few seconds for the project to download from the repository. And here it is; that was faster than last night. I can see the code here, so I can just copy it to access this instance later on. As this is a tutorial, we can see on the left side a description of what we are trying to build, with a detailed step-by-step guide. And this is the project itself: I can see the happy baby, and the sad baby is hidden here.

Okay, so let me just continue the presentation. The next step is training the classifier. I'm not going to get too deep into machine learning and how it works; I'm just going to give a very high-level overview. We will be using an image classification system that consists of creating two groups of images: one group with my face looking at the baby, and another group with me hiding from the baby, so we can show the sad face. So how does this work?
We can visit this website, classifier.appinventor.mit.edu, to train this model. Just as a side note, this instance only needs internet to load once; the training of the model all happens in your browser, so no servers are involved and no images are transferred outside your desktop. This website is also open source, so you can just check it; there is a link on the FOSDEM23 website.

So we visit the website and start by creating the image groups. In this case we will create one group of images for "me", when I'm looking at the baby, and one for "not me", when I'm hiding from the baby. If, for example, we are in a biology class and we want to classify trees, we would create one group of images for each kind of tree, so we can recognize them later on. Then we turn on the camera and take photos, choosing which group of images we are saving them into: if I'm looking at the camera, the photo goes into the "me" group, and if I'm hiding, it goes into the "not me" group. Once that is done, when we have a reasonable amount of images for each group, which should be around 5 or 10 images per group, we can train the model. Again, this training happens on your computer, so no images are transferred outside of it. We can then test the model with new images, to make sure that we have trained it properly and it can identify us. And once that is done, we can export the model and load it into App Inventor.

So I'm going to do that very quickly. I go to fosdem23.appinventor.mit.edu and open the classifier instance. It's going to ask for permission to use the webcam; there it is, I just accepted it before. The first step is creating the labels: first the "me" label, enter, and next the "not me" label. And well, the light is quite hard. I think it was... Let me take a few more images of me looking in different directions, so I can train the model better. One more, one more. Seven images should be fine for this demo, so it doesn't take too much time. And now for the "not me" group, I'm going to take a photo of me not being there, basically. I'm going to use my right hand to hide myself, so I can put it in front of my eyes, turn it upside down, hold it diagonally, take more images like that, and use the other hand as well. That should be enough for now.

Once that is done, we can just hit the train button to train the model. It will be built on your local machine without sending anything anywhere. Okay, this was faster last night. It seems like the time that we saved loading the project, we lost it here. So now it's training. Yeah, it's my laptop. This is a React app that has been open sourced, and the only internet required is to get the initial webpage.
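While the model trains, here is a minimal sketch, in Python, of what this kind of two-group training looks like conceptually: collect labeled images, keep some aside for testing, fit a model, and read back per-class confidences. The folder names and the choice of a simple scikit-learn classifier are my own assumptions for illustration; the real classifier.appinventor.mit.edu site does all of this directly in the browser.

    # Conceptual sketch only: two labeled image groups -> train -> test.
    # Folder names "me"/"not_me" and the classifier choice are illustrative assumptions.
    import glob
    import numpy as np
    from PIL import Image
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def load_group(folder, label):
        data, labels = [], []
        for path in glob.glob(f"{folder}/*.jpg"):
            img = Image.open(path).convert("L").resize((64, 64))  # small grayscale images
            data.append(np.asarray(img).flatten() / 255.0)
            labels.append(label)
        return data, labels

    X_me, y_me = load_group("me", "me")
    X_not, y_not = load_group("not_me", "not me")
    X, y = np.array(X_me + X_not), np.array(y_me + y_not)

    # Keep a few images aside to test the model, like the "test" step on the website.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("confidence per class:",
          dict(zip(model.classes_, model.predict_proba(X_test[:1])[0])))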
Later on, we can just disconnect and it will work perfectly; it's just offline training. If you really want to run it fully offline, you can launch the React app locally and it will work.

So now the model is built, and to test it, I'm going to look at the camera as I did before. I capture the image and I can see that there is a 99.42% confidence that I'm looking at the camera. If I take a photo of myself hiding, there is a 99.33% confidence that I'm not looking at the baby. Once we have validated the model properly, we can export it for the app, and we get this model.mdl file for App Inventor.

So let's go back to the presentation. Once we have the model, it's time to code the app using blocks. I'm not going to go through the slides anymore; for people at home, if you have internet problems or the streaming is down, feel free to follow the slides, which are a step-by-step guide. But here I'm going to be showing the tutorial live.

So let's go back to the project that we just loaded. Right here we can see a quick description of the project and what we are going to do, and a setup section on how to connect the phone app to this MIT App Inventor instance; I'm going to show that at the end of the presentation. And we have here the Peekaboo example. This is the final result, shown by one of the MIT curriculum developers who made the original tutorial. It says that we will be using the Personal Image Classifier extension, which was developed by that MIT research project, and that the first step is training the model.

Then we have to upload the model. To upload the model, we go to this section over here, in the Media panel, and select the just-downloaded model file. It should be here, and now it's uploaded; it's in the assets of the app. Then we can change the model property of the personal image classifier component to the newly uploaded model. And with that, we have loaded the model properly.

To give an overview of how the app is going to look: there is a status label that will tell the user when the app is ready to work. Right now it says loading, because that's the initial state; it will later go to the ready state, and then the app will start identifying faces. We have these two bars that will show the percentage of confidence that we are looking at the baby or not. This is the live image from the camera that I showed before. And these are the interaction buttons: one to start the classification, and one to toggle between the front and the back camera. And we have here the happy baby, in this case.

So, after uploading the trained model, this is the sequence of events that I was just talking about. First we start the app.
The app will show "Ready" as soon as the classifier is ready to start working. The user then presses the start button, and the Personal Image Classifier extension will keep classifying the live video stream from the camera continuously. Once it has a result, if there is a higher confidence for the "me" label, we show the smiley face; otherwise the baby will just start crying.

In this template there is already a set of blocks available to speed up the process. We can go here and see the "when personal image classifier error" block. That means that if for any reason the personal image classifier reports an error, maybe because something is missing on the phone or whatever, we set the status label text to the actual error returned by the image classifier. Once the image classifier is ready, we enable the start button as well as the toggle camera button, we set the status label text to "Ready" so the user knows they can start using the app, and we set the text boxes of each classification group to the previously defined labels, "me" and "not me" in this case. If the user presses the toggle camera button, we switch between the front and the back camera every time they press it, so we can use the front selfie camera or the normal back camera. And when the user presses the start button: if the personal image classifier is already classifying an image, we just stop it and show the start button with the "Start" text again; otherwise, we have to start the classification, so we invoke the start continuous classification method and change the button text to "Stop", because we are changing the button's behavior. That's the quick overview of the code that is already available in the app.

So how does the image classification work in MIT App Inventor? Well, we have this big block that runs when the personal image classifier has succeeded with a classification, and in it we receive this result variable. The result variable is a dictionary. To give a little high-level overview of what a dictionary is: it's a key-value list of elements. So if we have two different groups, "me" and "not me", we receive "me" equals a specific value and "not me" equals a specific value. If we have three groups, we have one, two and three, each equal to a specific value. This is a little example of what it looks like: one key equals this value, another key equals this value, and so on. For the image classifier, specifically, we will have something like this: "me" with this value and "not me" with this other value.
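As a rough textual picture of that result dictionary and of the event that delivers it, here is an illustrative Python snippet; the function name and the printed layout are my own assumptions, not the actual blocks or the extension's API.

    # Illustrative only: what the classification result conceptually looks like.
    # The real app receives this dictionary inside the classifier's "got classification" block.
    result = {"me": 0.93, "not me": 0.07}   # confidence per label, between 0 and 1

    def on_got_classification(result):
        # a handler that simply reports each label with its confidence
        for label, confidence in result.items():
            print(f"{label}: {confidence:.2f}")

    on_got_classification(result)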
So how can we retrieve a value? In the dictionaries block area, we have "get value for key", and we will be doing something similar to this. We have the original dictionary here, built in this area with "make a dictionary" and the keys "me" and "not me", and we get the value of the group that we want to use right now. In this case it's the "me" example; if we want the "not me" one, we just have to change this label to "not me". And if we are using the wrong model, because the groups are not the same, we just return a zero, because we cannot classify that group.

So let's get into it. By default, the tutorial already provides these variable blocks, like the "me confidence" level, and we have to complete them using this block. To do so, we take the "get value for key in dictionary" block and join it to the "me confidence" block. Then we take an empty text block from the text blocks, attach it right here, and type "me", so we get the "me" group into the "me confidence" variable. The dictionary is the result, which we can just attach here, and if the key is not found, we just return a zero value. For the "not me" confidence, it's basically the same, so we can copy-paste the blocks, attach them to the "not me confidence" area, and just prepend a "not" in front of the "me". Now we have defined the "me confidence" variable, which can be accessed like that and holds the confidence that we are looking at the baby, and the "not me confidence" is the opposite: how confident we are that we are not looking at the baby. That's the next step done, defining these variables.

Now we just have to recap what we have to do in the app. In the app, we first have to update these labels here with the percentage, and we have to update these color bars with the correct confidence levels. We can do that by going to these components, these two horizontal arrangements, where we have percentage one, bar graph one, percentage two and bar graph two. For percentage one, we can update the text to the percentage that we are showing. One second. There it is. The value that we get from the dictionary goes from zero to one, but we want to show a percentage, which goes from zero to 100. So we take this "me confidence" value and multiply it by 100 to get the zero-to-100 range; we just join it right here with a math number block and multiply by 100. But we are still missing the percentage sign. To get it, we can use the text blocks with the join block, so we can join two texts together and append a percentage symbol like this.
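In Python terms, those dictionary and text blocks roughly do the following. This is only an illustration of the logic, not App Inventor code; the variable names mirror the blocks in the tutorial.

    # Illustrative Python equivalent of the "get value for key" and "join" blocks.
    result = {"me": 0.9942, "not me": 0.0058}

    me_confidence = result.get("me", 0)          # "get value for key 'me', or 0 if not found"
    not_me_confidence = result.get("not me", 0)

    # 0..1 value -> "0..100%" text: multiply by 100, then join with the "%" symbol
    percentage1_text = str(round(me_confidence * 100, 2)) + "%"       # e.g. "99.42%"
    percentage2_text = str(round(not_me_confidence * 100, 2)) + "%"
    print(percentage1_text, percentage2_text)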
And this is for the percentage labels. For the bar graphs, we will be playing with the width of the actual bar graph. To do so, we have the width percent block, which can modify the width according to a percentage. We have already defined the percentage right here, so we can just copy-paste these blocks and attach them to the width percent. That is for the "me" group. For the "not me" group, we can copy-paste the percentage one blocks, change them to percentage two, and change the "me confidence" value to the "not me confidence" value. And for the bar graph, it's going to be bar graph two right here, and "me confidence" again changes to "not me confidence". With that, we already have the whole sequence of events for the label updates. We can go to the next step and confirm that we have defined it correctly, which shows the same result.

The next step is the fancy image change: if we think we are looking at the baby, we show the happy face, otherwise we show the crying face. We will be using if-then logic. So we go to the control blocks, take the if-then-else block and append it here. What we will do is compare the "me confidence" value to the "not me confidence" value, to know if we are looking at the baby or not. We go to the math blocks, pick this comparator block, attach it to the if statement, and change the comparison to greater or equal, because we don't really worry about the exactly-equal case here. We take the "me confidence" variable again and compare it here against the "not me confidence" value right here.

Then we update the background of the app, which is available on Screen1. We take the background color block, attach it here, and the tutorial already provides the example colors, so I'm just going to drag them right below to have them more easily accessible, and I can just join one here. For the baby images, we have two images available here, happy baby and sad baby. If we think we are looking at the baby, we show the happy baby and hide the sad baby face. We go to the logic blocks, take the "true" block so we can set visible to true for the happy baby, and we set visible to false for the sad baby, like that, and we just join it. That was the case of "me confidence" being higher than "not me confidence". For the opposite case, when we are not looking at the baby, we change the background color to this pink color, we hide the happy baby face like that, and we show the sad baby face, just like this. And now the app is finished.
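To summarize the whole classification handler in one place, here is a rough Python sketch of the logic those blocks implement. The ui dictionary, the color names and the key names are my own illustrative assumptions, not App Inventor's actual components or API.

    # Rough sketch of the logic behind the classification blocks (illustrative only).
    ui = {"percentage1": "", "bar1_width": 0,
          "percentage2": "", "bar2_width": 0,
          "background": "happy-color",        # placeholder for the tutorial's example color
          "happy_visible": True, "sad_visible": False}

    def got_classification(result):
        me = result.get("me", 0)
        not_me = result.get("not me", 0)

        # update the percentage labels and bar widths (0..1 -> 0..100)
        ui["percentage1"] = f"{me * 100:.2f}%"
        ui["bar1_width"] = me * 100
        ui["percentage2"] = f"{not_me * 100:.2f}%"
        ui["bar2_width"] = not_me * 100

        # happy baby if "me" wins, crying baby otherwise
        if me >= not_me:
            ui["background"] = "happy-color"   # assumed stand-in for the happy background
            ui["happy_visible"], ui["sad_visible"] = True, False
        else:
            ui["background"] = "pink"          # the pink background from the tutorial
            ui["happy_visible"], ui["sad_visible"] = False, True

    got_classification({"me": 0.2, "not me": 0.8})
    print(ui)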
So here we can just check the final code, which is exactly the same as we have right here. There are other possibilities, like implementing a classifier using a different person. But to show how this works, we can use the MIT AI2 Companion app that is available on the Play Store. Let me just show my phone again. Here it is. You can go to the Play Store, search for MIT App Inventor, and you have the Companion right here. You can open it; you can just ignore this warning, it works without Wi-Fi, so continue without Wi-Fi. Over here you can connect the AI Companion, and now I can scan the QR code like this. I'm sorry, it just disappeared. It takes a few seconds to connect; let's see if it's faster than last night. Now it's loading the extension, the personal image classifier extension, into my phone. This works over Wi-Fi or the mobile network; the phone doesn't have to be connected by cable, it's only plugged in because I'm mirroring the screen through that cable. And I see here the layout of the app. I can see that it shows ready, so I can toggle the camera to the front one, look at the camera, and press start. There is a higher confidence that I'm looking at the baby; if I put a hand in front, it's just crying. And yeah, that's it.

Later, if you want to build any other apps, you can export them to APK files so you can install them on your phone, or to Android App Bundles if you want to distribute them through the Play Store. But yeah, this is just a very high-level introduction to artificial intelligence in App Inventor. You can build any kind of classifier, for example to classify trees or flowers, or even to classify people; for example, in a faculty, if you want to build an app that recognizes people in your class as a game, you can just use App Inventor and build any kind of app. Thank you so much, and I hope that it was useful for everybody. Thank you.

Any questions?

Do you mentor a Technovation team?

No, sorry, I'm a software engineer. I just contribute to App Inventor. I started by building apps and then I transitioned to open source. I participated in Google Summer of Code; this option to export Android App Bundles was actually my project in 2020 for Google Summer of Code. But yeah, I'm more on the technical side than actually teaching kids.

Any other questions?

What's your experience with the relation between the number of pictures you have to submit to your classifier and the accuracy?

That's a very good question. For the livestream, he was asking what our experience is with the number of images used to train the classifier. So I haven't really tested it thoroughly right now.
But we have seen that if we go higher than 10 images for each class, for each group of images, we get really good results. In this case, because I was training very fast and using just a small number of images, you can see that the confidence levels were a little bit like 80/20. But if we provide more than 10 images for each class, we should be able to get over 90 or 95% confidence for each group. I'm not sure if there are any questions from the chat; let me just check.

What does it capture? Is that just your face, or what exactly was learned?

It depends on what you are training, because in this case we are just providing very specific gestures. It's trained on my face, or really any face looking at the camera, versus a hand in front of the face. By default, the model that is available here was trained by Selim, the example guy at the beginning of the tutorial. I just tested it last night and it worked with me, because it recognizes the gestures, not the faces. If instead we trained it to recognize people, we would all be looking at the camera in the same way, so it would go for specific facial, how do you say, facial features. In this case it will work. You can just try if you want. We can try; yeah, it should work. It's going to be a little bit tough, but Mark, can you just try it, for example? Toggle the camera. Yeah, and press start. It's a happy face, and if you put a hand in front, it's a sad face. It's recognizing the gestures.

So can you also train it to recognize specific people?

Yeah, it can be trained for that, but in this case, because the biggest difference between the groups was the hand, the model is just looking for the hand. If you don't show the hand, it will look at faces. It can fit, because you can just use this website and build any kind of model. The only restriction is that it has to be a .mdl file, but it can basically classify anything. No problem.

Any other questions? Well, I think we can leave it here. Thank you so much.