# Mx-yolov3 + MaixPy + K210: Local Model Training and Object Detection

I first came into contact with the K210 because of a competition that required object detection and garbage sorting. Before that, I had been using various YOLO versions for detection and deploying them to a Raspberry Pi and a Jetson Nano. By chance, I discovered the K210. I would like to thank a senior classmate of mine who got me started with the K210 and moved me from the bare command line to MaixPy, learning the K210 with Python. I wrote this blog to record my studies and my life, and to thank that senior who helped me.

# Preface

With the continuous development of artificial intelligence, machine learning is becoming more and more important, and many people have turned to studying it. The K210 is a good tool for this. It is an MCU launched last year by Canaan, a company formerly known for mining chips. Its distinguishing feature is that the chip includes a self-developed neural network hardware accelerator, the KPU, which can run convolutional neural networks at high performance. Don't assume an MCU must lag behind a high-end SoC: at least for AI computation, the K210's compute is quite considerable. According to Canaan's official description, the K210's KPU delivers 0.8 TOPS. For comparison, the Nvidia Jetson Nano, with a 128-CUDA-core GPU, delivers 0.47 TFLOPS, while the latest Raspberry Pi 4 manages less than 0.1 TFLOPS. So the K210 is a rare, capable helper for deep learning. With that, let's move on to the main text.

# 1. Required Environment

## 1. Hardware Environment

A K210 development board

Any board in the series is fine. I have used basically all of the Sipeed K210 development boards, and they are much the same.

## 2. Software Environment

For software, I recommend Mx-yolov3 and MaixPy here.

Mx-yolov3
MaixPy: you can download both from their official sites yourself.

# 2. Mx-yolov3

Mx-yolov3 is a tool written by a community expert. It requires a specific environment, and in what follows I will take you from a first look at Mx-yolov3 to using it fluently.

## 1. Software Environment Setup

### 1. Python Environment

After downloading the software, open the folder and you will see the Mx-yolov3 setup file; open it directly. The required Python version must be exactly 3.7.4. If another version of Python is already installed on your computer, uninstall it first and then install 3.7.4, otherwise you will run into errors.

### 2. Dependencies

After the Python 3.7.4 environment is installed, install the Python dependency libraries and the pretrained weights by simply clicking Install. If your Python environment is correct, installing the dependencies will also go smoothly; if you get an error at this point, check whether something is wrong with your environment.

### 3. GPU Training Setup

The third step is optional. I can say up front that skipping it will not affect the training and testing that follow: CUDA and cuDNN are installed so that training can run on the GPU; if they are not installed, the computer falls back to training on the CPU.

The installation steps are simple and there are tutorials in the folder, so I will not repeat them here. (I do suggest installing them: why make the CPU do work the GPU can handle? Training will be much faster.)

### 4. Summary

1. Install Python 3.7.4 first; Python 3.8 is not supported (many dependency libraries do not support it). It must be installed to the default path, otherwise patching the Keras network will fail.

2. Install the dependency libraries, copy the weight files, and patch the Keras network. If you get an "HTTP" error while installing the dependencies, switch to a different network and retry.

3. To train on the GPU, you need CUDA 10.0 and cuDNN 7.6.4; once they are installed, the GPU is used for training automatically. The installation tutorial is in the folder.

## 2. Getting Started

Once the environment is configured, you can open the Mx-yolov3 software.

### 1. Model Requirements

When training locally with Mx-yolov3, be sure to convert your images to 224 × 224 pixels, otherwise recognition will be inaccurate. (A reminder here: don't take the photos with an iPhone; use an Android phone. I stepped in this hole myself!)
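The batch resize can be scripted. A minimal sketch using Pillow (assumed installed via `pip install Pillow`; the folder names and function name are placeholders of my own, not part of Mx-yolov3):

```python
import os

from PIL import Image  # Pillow, assumed installed


def resize_dataset(src_dir, dst_dir, size=(224, 224)):
    """Resize every image in src_dir to 224x224 and save it into dst_dir."""
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        if not name.lower().endswith((".jpg", ".jpeg", ".png")):
            continue  # skip non-image files
        img = Image.open(os.path.join(src_dir, name)).convert("RGB")
        img.resize(size, Image.BILINEAR).save(os.path.join(dst_dir, name))
```

Resize the images before annotating them, so the bounding boxes are drawn at the size the network will actually see.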

After the images are converted, annotate them with VoTT. How to use the annotation tool is covered in another blog post of mine, which also gives a Baidu Netdisk download link for it.

Export the VoTT annotations in Pascal VOC format; that is also covered in that post.
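For reference, the Pascal VOC export stores one small XML file per image, holding the image size and one `object` entry per labelled box. A minimal hand-written example (the file name, class name, and coordinates below are made up for illustration):

```xml
<annotation>
    <filename>mask_001.jpg</filename>
    <size>
        <width>224</width>
        <height>224</height>
        <depth>3</depth>
    </size>
    <object>
        <name>mask</name>
        <bndbox>
            <xmin>48</xmin>
            <ymin>36</ymin>
            <xmax>176</xmax>
            <ymax>188</ymax>
        </bndbox>
    </object>
</annotation>
```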

### 2. Start Training

Select the folder of images you want to train on and the folder of annotation labels. Here I will demonstrate with the bundled mask example. (Open the Mx-yolov3 software; after a short load you will see the main interface. Select the training image path and the training label path: both folders are under the "datasets/yolo/masks" path in the home directory. The img folder contains all the training images, while the xml folder contains all the annotation files.)

First, select the save path for the model you are training, as shown in the figure above.

Next, enter the class names.

Then you need to compute the anchors for your training set. The method is simple: just click Calculate.
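The Calculate button does this for you. For the curious: YOLOv2-style anchors are conventionally found by running k-means on the labelled box sizes, using 1 − IoU as the distance, with the resulting widths and heights expressed in grid-cell units (the input size divided by 32). A rough pure-Python sketch, with function names of my own invention:

```python
import random


def iou(box, cluster):
    # Overlap of two (w, h) boxes assumed to share the same centre
    w = min(box[0], cluster[0])
    h = min(box[1], cluster[1])
    inter = w * h
    return inter / (box[0] * box[1] + cluster[0] * cluster[1] - inter)


def kmeans_anchors(boxes, k=5, iters=100, seed=0):
    """k-means over (w, h) box sizes using 1 - IoU as the distance."""
    random.seed(seed)
    clusters = random.sample(boxes, k)
    for _ in range(iters):
        # assign each box to the cluster it overlaps most
        groups = [[] for _ in range(k)]
        for b in boxes:
            best = max(range(k), key=lambda j: iou(b, clusters[j]))
            groups[best].append(b)
        # move each cluster to the mean of its group
        new = []
        for j in range(k):
            g = groups[j] or [clusters[j]]  # keep an empty cluster in place
            new.append((sum(b[0] for b in g) / len(g),
                        sum(b[1] for b in g) / len(g)))
        if new == clusters:
            break  # converged
        clusters = new
    return sorted(clusters)
```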

Next you will see the following screen.

Then click Start Training; if nothing goes wrong, you will see output like the following.

Once the iterations finish, the information bar and the terminal will both indicate that training has ended.

### 3. Model Testing

After training, you will find the two freshly trained model files, yolov2.h5 and yolov2.tflite, in the model-files folder in the home directory.

Next, click the "Test Model" button, select the yolov2.h5 model file, and wait a few minutes; Mx-yolov3 will display the test results.

### 4. Deploying to the K210

First you need to flash the matching firmware. Click the toolbox and you will see the interface below.

Click the icon and you will see the following interface.

Connect your development board, select the firmware in the folder, and flash it, using 0x300000 as the address when burning.

After flashing, the converted yolov2.kmodel model file is also burned onto your K210. Deployment is now complete; next you need to write a Python script to run it.
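As background for the 0x300000 address: a .kfpkg package, as produced by Sipeed's flashing tools, is simply a zip archive holding the firmware, the kmodel, and a flash-list.json manifest that tells the flasher which address each binary goes to. A sketch of such a manifest (file names assumed to match this tutorial):

```json
{
    "version": "0.1.0",
    "files": [
        { "address": 0, "bin": "maixpy.bin", "sha256Prefix": true },
        { "address": 3145728, "bin": "yolov2.kmodel", "sha256Prefix": false }
    ]
}
```

Note that 3145728 is 0x300000 in decimal, since JSON has no hexadecimal literals.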

# 3. Running the Script

## 1. Software

MaixPy; when using MaixPy, select the matching development board.

## 2. The Code

The script code is attached here:

```python
import sensor
import image
import lcd
import KPU as kpu

classes = ["mask"]  # class names used during training; change to your own labels

lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((224, 224))
sensor.set_hmirror(0)
sensor.run(1)
task = kpu.load(0x300000)  # use kfpkg to pack the kmodel with the MaixPy firmware and flash it
anchor = (0.9, 1.08, 1.65, 2.03, 2.49, 3.22, 3.28, 4.29, 4.37, 5.5)  # computed with the k-means clustering algorithm
a = kpu.init_yolo2(task, 0.6, 0.3, 5, anchor)
while True:
    img = sensor.snapshot()
    try:
        code = kpu.run_yolo2(task, img)
    except:
        print("TypeError")
        continue
    if code:
        for i in code:
            a = img.draw_rectangle(i.rect(), (255, 255, 255), 1, 0)
        a = lcd.display(img)
        for i in code:
            lcd.draw_string(i.x() + 40, i.y() - 30, classes[i.classid()], lcd.RED, lcd.WHITE)
            lcd.draw_string(i.x() + 40, i.y() - 10, str(round(i.value() * 100, 2)) + "%", lcd.RED, lcd.WHITE)
    else:
        a = lcd.display(img)
```


Just connect the board and run it.

# 4. Running Offline

## 1. Steps

Copy the converted yolov2.kmodel model file, the anchors.txt file from the model folder, the labels.txt file, and the object-detection script boot.py from the program folder to the SD card.

## 2. The boot.py Code

```python
import sensor
import image
import lcd
import KPU as kpu

lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((224, 224))
sensor.set_hmirror(0)
sensor.run(1)
# read the anchors computed during training from the SD card
f = open("anchors.txt", "r")
anchor_txt = f.read()
f.close()
L = []
for i in anchor_txt.split(","):
    L.append(float(i))
anchor = tuple(L)
task = kpu.load("/sd/yolov2.kmodel")  # load the model copied to the SD card
a = kpu.init_yolo2(task, 0.6, 0.3, 5, anchor)
# read the class labels
f = open("labels.txt", "r")
labels_txt = f.read()
f.close()
labels = labels_txt.split(",")
while True:
    img = sensor.snapshot()
    code = kpu.run_yolo2(task, img)
    if code:
        for i in code:
            a = img.draw_rectangle(i.rect(), (0, 255, 0), 2)
        a = lcd.display(img)
        for i in code:
            lcd.draw_string(i.x() + 45, i.y() - 5, labels[i.classid()] + " " + '%.2f' % i.value(), lcd.WHITE, lcd.GREEN)
    else:
        a = lcd.display(img)
```



Then power it on and it runs.

# Summary

Original: https://blog.csdn.net/qq_51963216/article/details/121044449
Author: 我与nano
Title: Mx-yolov3+Maixpy+ K210进行本地模型训练和目标检测
