Mx-yolov3 + MaixPy + K210: Local Model Training and Object Detection

I first came across the K210 because of a competition that required object detection and garbage sorting. Before that I had been using various YOLO versions for detection, deployed to a Raspberry Pi and a Jetson Nano. By chance I found the K210. I owe thanks to one of my seniors, who got me started on the K210 and moved me from blindly typing commands over to MaixPy, learning the K210 with Python. I wrote this blog to record my study and my life, and to thank the senior who helped me.


Preface

With the continuous development of artificial intelligence, machine learning is becoming more and more important, and many people have started learning it. The K210 is a good tool for this. It is an MCU launched last year by Canaan, a company formerly known for mining chips. Its distinguishing feature is that the chip architecture includes a self-developed neural-network hardware accelerator, the KPU, which can run convolutional neural networks at high performance. Do not assume an MCU must be weaker than a high-end SoC: at least for AI computing, the K210's compute power is quite respectable. According to Canaan's official figures, the K210's KPU delivers 0.8 TOPS. For comparison, the NVIDIA Jetson Nano, whose GPU has 128 CUDA cores, manages 0.47 TFLOPS, while the latest Raspberry Pi 4 is below 0.1 TFLOPS. So the K210 is a rare, capable helper for getting into deep learning. Now let's move on to the main text.

I. Required environment

1. Hardware environment

A K210 development board

Any of the boards in the series above will do. I have used virtually all of the Sipeed K210 development boards, and they are much the same.

2. Software environment

For software, I recommend Mx-yolov3 and MaixPy.

Mx-yolov3
MaixPy: you can download it yourself from the official website

II. Mx-yolov3

Mx-yolov3 is a tool written by a community expert, and it needs a specific environment. Next I will take you from a first look at Mx-yolov3 all the way to using it fluently.

1. Software environment setup


1. The Python environment

After downloading the software, open the folder and you will see the Mx-yolov3 setup program; just run it. The required Python environment must be Python 3.7.4. If another version of Python is already installed on your computer, uninstall it first and then install 3.7.4, otherwise you will run into errors.

2. Dependency packages

Once the Python 3.7.4 environment is in place, install the Python dependency libraries and the pre-trained weights by simply clicking Install. If the Python environment is fine, installing the dependencies should also go without problems; if an error is reported here, check whether something is wrong with your environment.

3. GPU training setup

The third step is optional. I can say up front that skipping it will have no effect on the training and testing that follow. CUDA and cuDNN are installed so that training can use the GPU; if they are not installed, the computer will train on the CPU.

The installation steps are simple, and there are tutorials in the folder, so I will not repeat them here. (I do suggest installing the GPU stack: why make the CPU do the GPU's job, when GPU training is so much faster?)

4. Summary

1. Install Python 3.7.4 first; Python 3.8 is not supported (many dependency libraries do not support Python 3.8). It must be installed in the default path, otherwise patching the Keras network will fail.

2. Install the dependency libraries, copy the weight files, and patch the Keras network. If you get an "HTTP" error while installing the dependencies, switch network environments and retry.

3. To train on the GPU, install CUDA 10.0 and cuDNN 7.6.4; once they are installed, the GPU is enabled for training automatically. A web tutorial for the installation is in the folder.

2. Getting started

Once the environment is configured, you can open the Mx-yolov3 software.

1. Model requirements

When training locally with Mx-yolov3, be sure to convert your images to 224×224 first, otherwise recognition will be inaccurate. (A reminder here: do not take the photos with an iPhone; be sure to use an Android phone. I stepped into that pit myself!)
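The resizing step above is easy to batch-script. Here is a minimal sketch using Pillow; the folder names `raw` and `resized` are placeholders for your own dataset paths.

```python
# Batch-resize every image in src_dir to the 224x224 input size that
# Mx-yolov3 expects, writing the results to dst_dir.
import os
from PIL import Image

def resize_dataset(src_dir, dst_dir, size=(224, 224)):
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        if not name.lower().endswith((".jpg", ".jpeg", ".png")):
            continue  # skip annotation files and anything else
        img = Image.open(os.path.join(src_dir, name)).convert("RGB")
        img = img.resize(size)  # plain stretch; crop first if aspect ratio matters
        img.save(os.path.join(dst_dir, name))
```

Resize before annotating, so the box coordinates in the exported annotation files match the 224×224 images.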

After the images are converted, they can be labelled with the VoTT annotation tool. How to use the tool, along with a Baidu Netdisk download link, is covered in another blog post of mine.

Use VoTT to export the annotations in Pascal VOC format, which is also covered in that post.
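For reference, each VOC file VoTT exports is a small per-image XML. The sketch below uses a made-up annotation for the mask demo (the file name and box values are illustrative only) and pulls the class and box out with the standard library, which is handy for sanity-checking an export.

```python
# Parse a Pascal VOC annotation and return (class, xmin, ymin, xmax, ymax)
# tuples. VOC_SAMPLE mimics a VoTT export; the values are illustrative.
import xml.etree.ElementTree as ET

VOC_SAMPLE = """<annotation>
  <filename>mask_001.jpg</filename>
  <size><width>224</width><height>224</height><depth>3</depth></size>
  <object>
    <name>Masks</name>
    <bndbox><xmin>40</xmin><ymin>30</ymin><xmax>180</xmax><ymax>200</ymax></bndbox>
  </object>
</annotation>"""

def parse_voc(xml_text):
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        boxes.append((obj.findtext("name"),
                      int(bb.findtext("xmin")), int(bb.findtext("ymin")),
                      int(bb.findtext("xmax")), int(bb.findtext("ymax"))))
    return boxes
```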

The software used above is in the Image_tool folder.

2. Start training

Select the path of the images to train on and the path of the label files. Here I will train and demonstrate with the usual mask-detection example. (Open the Mx-yolov3 software; after a brief load you will see its main interface, where you select the training-image path and the training-label path. Both folders are under the "datasets/yolo/masks" path in the home directory: the img folder contains all the training images, while the xml folder contains all the annotation files.)

First, select the path for the model you are going to train, as shown in the figure above.

Next, set the class names.

Then you need to compute the anchors for your training set. The method is simple: just click Calculate.
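The Calculate button clusters the labelled box sizes to pick representative anchors. A stdlib-only sketch of the idea follows; note that the real tool most likely clusters with an IoU-based distance, while this toy version uses Euclidean distance on (width, height) pairs just to stay short.

```python
# Toy k-means over (width, height) pairs to derive YOLOv2-style anchors.
# Real anchor tools typically use an IoU-based distance; plain Euclidean
# distance on the box dimensions keeps this sketch compact.

def kmeans_anchors(boxes, k=5, iters=50):
    # Deterministic init: spread initial centers over the boxes sorted by area.
    srt = sorted(boxes, key=lambda b: b[0] * b[1])
    centers = [srt[i * (len(srt) - 1) // max(k - 1, 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in boxes:
            j = min(range(k),
                    key=lambda c: (w - centers[c][0]) ** 2 + (h - centers[c][1]) ** 2)
            clusters[j].append((w, h))
        for c, members in enumerate(clusters):
            if members:  # keep the old center if a cluster goes empty
                centers[c] = (sum(m[0] for m in members) / len(members),
                              sum(m[1] for m in members) / len(members))
    return sorted(centers)
```

With k=5 this yields five width/height pairs, i.e. the ten anchor numbers that the run scripts pass to kpu.init_yolo2.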

Next you will see the picture below.

Then click Start Training and training begins; barring surprises, you will see something like the screen below.

Once the iterations finish, the information bar and the terminal output will both indicate that training has ended.


3. Model testing

After training, you will find the two freshly trained model files, yolov2.h5 and yolov2.tflite, in the Model Files folder under the home directory.

Next, click the "Test Model" button and select the yolov2.h5 model file; after a few minutes, Mx-yolov3 will display the test results.


4. Model conversion

Click the model-conversion button to open the nncase 0.1 GUI. Select the yolov2.tflite model file you just trained, a save path, and a folder of quantization images (4-5 of the training images), then click Start Conversion. After a short wait you will see the information below.
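The GUI is a front end over nncase's ncc command-line converter. From memory of the nncase 0.1 tooling, an equivalent invocation looks roughly like the line below; the flag names and order are an assumption, so check `ncc --help` for your version before using it.

```shell
# Convert the trained TFLite model into a K210 kmodel; quant_images/
# holds the 4-5 training images used for post-training quantization.
ncc -i tflite -o k210model --dataset ./quant_images yolov2.tflite yolov2.kmodel
```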


5. Deploying to the K210

First you need to flash the matching firmware. Click the toolset and you will see the interface below.

Click the flashing-tool icon, and you will see the following interface.

Connect your development board, select the firmware in the folder for flashing, and use 0x300000 when flashing.
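Under the hood, what gets flashed is a kfpkg: a zip holding the firmware, the kmodel, and a flash-list.json manifest that pins each blob to a flash address (the model at 0x300000). The sketch below packs one with the standard library; the manifest schema is reproduced from memory of MaixPy's packaging tools, so treat the field names as assumptions and verify against your MaixPy release.

```python
# Pack a firmware binary and a kmodel into a kfpkg-style zip whose
# flash-list.json places the model at 0x300000.
import json
import zipfile

def make_kfpkg(out_path, firmware_bytes, kmodel_bytes, model_addr=0x300000):
    manifest = {
        "version": "0.1.0",
        "files": [
            {"address": 0x00000, "bin": "maixpy.bin", "sha256Prefix": True},
            {"address": model_addr, "bin": "yolov2.kmodel", "sha256Prefix": False},
        ],
    }
    with zipfile.ZipFile(out_path, "w") as z:
        z.writestr("flash-list.json", json.dumps(manifest, indent=2))
        z.writestr("maixpy.bin", firmware_bytes)
        z.writestr("yolov2.kmodel", kmodel_bytes)
```

The resulting file can then be flashed in one shot with kflash.py or the GUI flasher.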

After flashing, the converted yolov2.kmodel file will have been written to your K210 as well. Deployment is now complete, and all that is left is to write a Python script to run it.

III. Running the script

1. Software

MaixPy; when using MaixPy, be sure to select the matching development board.


2. The code

The script code is attached here:

import sensor
import image
import lcd
import KPU as kpu

lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((224, 224))    # match the 224x224 training input
sensor.set_hmirror(0)
sensor.run(1)
task = kpu.load(0x300000)  # kmodel packed with the MaixPy firmware via kfpkg and flashed at 0x300000
anchor = (0.9, 1.08, 1.65, 2.03, 2.49, 3.22, 3.28, 4.29, 4.37, 5.5)  # computed by k-means clustering
a = kpu.init_yolo2(task, 0.6, 0.3, 5, anchor)
classes = ["Masks", "Un_Masks"]  # label names must be in the same order as during training
while(True):
    img = sensor.snapshot()
    code = None  # initialize so the check below is safe if run_yolo2 raises
    try:
        code = kpu.run_yolo2(task, img)
    except:
        print("TypeError")
    if code:
        for i in code:
            a = img.draw_rectangle(i.rect(), (255, 255, 255), 1, 0)
        a = lcd.display(img)
        for i in code:
            lcd.draw_string(i.x()+40, i.y()-30, classes[i.classid()], lcd.RED, lcd.WHITE)
            lcd.draw_string(i.x()+40, i.y()-10, str(round((i.value()*100), 2))+"%", lcd.RED, lcd.WHITE)
    else:
        a = lcd.display(img)
a = kpu.deinit(task)

Just connect and run it.

IV. Running offline

1. Steps

Copy the converted yolov2.kmodel model file, the anchors.txt file from the model folder, the labels.txt file from the program folder, and the boot.py detection script to the SD card.
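For reference, boot.py expects anchors.txt to be a single line of comma-separated floats (the ten anchor values) and labels.txt a comma-separated list of class names in training order. The parsing it does boils down to the two helpers below, shown with the mask demo's values.

```python
# The file parsing that boot.py performs on the two text files on the SD card.

def parse_anchors(text):
    # e.g. "0.9,1.08,1.65,..." -> (0.9, 1.08, 1.65, ...)
    return tuple(float(v) for v in text.split(","))

def parse_labels(text):
    # e.g. "Masks,Un_Masks" -> ["Masks", "Un_Masks"]
    return text.split(",")
```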

2. The boot.py code

import sensor
import image
import lcd
import KPU as kpu

lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((224, 224))    # match the 224x224 training input
sensor.set_hmirror(0)
sensor.run(1)
task = kpu.load("/sd/yolov2.kmodel")   # load the model from the SD card
# anchors.txt holds one line of comma-separated floats
f = open("anchors.txt", "r")
anchor_txt = f.read()
L = []
for i in anchor_txt.split(","):
    L.append(float(i))
anchor = tuple(L)
f.close()
a = kpu.init_yolo2(task, 0.6, 0.3, 5, anchor)
# labels.txt holds the class names, comma-separated, in training order
f = open("labels.txt", "r")
labels_txt = f.read()
labels = labels_txt.split(",")
f.close()
while(True):
    img = sensor.snapshot()
    code = kpu.run_yolo2(task, img)
    if code:
        for i in code:
            a = img.draw_rectangle(i.rect(), (0, 255, 0), 2)
        a = lcd.display(img)
        for i in code:
            lcd.draw_string(i.x()+45, i.y()-5, labels[i.classid()]+" "+'%.2f' % i.value(), lcd.WHITE, lcd.GREEN)
    else:
        a = lcd.display(img)
a = kpu.deinit(task)

Then power it on and it runs.

With that, training and detection with a K210-trained model can proceed smoothly. Thank you for reading all the way to the end.

Summary

This is a blog post I wrote after learning the K210. I wrote it to record my own study and life, and to help more people get to know the K210 faster; later I will write a post on some concrete uses of the K210. That is the end of this post. Once again, my thanks to the senior who learned alongside me; his blog is at https://blog.csdn.net/hyayq8124

Original: https://blog.csdn.net/qq_51963216/article/details/121044449
Author: 我与nano
Title: Mx-yolov3+Maixpy+ K210进行本地模型训练和目标检测
