24. Object Detection and Recognition with the PYNQ-Z2 Board under Windows 11

Basic idea: use the PYNQ-Z2 board with a USB camera to perform object detection and recognition.

1. First, go to the official site (PYNQ – Python productivity for Zynq – Board) and download the PYNQ-Z2 image file. You can download it with Motrix or with the axel tool:

https://dpoauwgwqsy2x.cloudfront.net/Download/pynq_z2_v2.5.0.zip

After downloading the image file, also use Motrix to download the Win32 Disk Imager tool; just paste in the address:

https://jaist.dl.sourceforge.net/project/win32diskimager/Archive/win32diskimager-1.0.0-install.exe

Then use this tool to write the image to the SD card.


2. (1) The board's switch and jumper settings are shown in the figure below.

[Figure: PYNQ-Z2 switch and jumper settings]

(2) Power the board over USB, plug in the network cable, and plug in the USB camera. Two things to note here: use a driver-free USB camera, and be aware that since the board itself runs on USB power, it may not supply enough current to drive the camera. In that case use a USB hub: plug one hub port into a USB power source, plug the USB camera into another, and then connect a further hub port to the board's USB HOST port.


(3) Test the board's demos. In a browser, go to http://pynq:9090/ (password: xilinx).

Try the notebook http://pynq:9090/notebooks/base/video/opencv_face_detect_webcam.ipynb

Tested and working.

3. Write my own code for the USB camera input and image output, and look into hardware acceleration. I'll write it when I have time; the key point is that it requires development with the Vivado tools, and the disk on my new Windows 11 laptop is too small for that huge install (about 64 GB). I'll continue once it's installed.

(1) Install the official Xilinx demo

root@pynq:/home/xilinx# sudo pip3 install git+https://github.com/Xilinx/QNN-MO-PYNQ.git
Collecting git+https://github.com/Xilinx/QNN-MO-PYNQ.git
  Cloning https://github.com/Xilinx/QNN-MO-PYNQ.git to /tmp/pip-iixz8qk_-build
Installing collected packages: qnn-loopback
  Running setup.py install for qnn-loopback ... done
Successfully installed qnn-loopback-0.1
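
After the install finishes, a quick sanity check (my addition; it simply exercises the imports that the code further below relies on) confirms the package is importable on the board:

# Run in Python 3 on the board to verify the QNN package installed correctly.
from qnn import TinierYolo, utils

print(TinierYolo)  # prints the class if the install succeeded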

Test this example: http://pynq:9090/notebooks/qnn/tiny-yolo-image.ipynb . It works with no major problems, though car recognition has some minor issues.


Next, integrate the code: change it to read frames from the camera, run YOLO detection, and then send the frames to HDMI for display.

For file transfer, you can use WinSCP with the board's IP address (username: xilinx, password: xilinx) to access the file system.


You can also press Win+R and enter \\pynq\xilinx to mount the board's Samba share in File Explorer (from a Linux file manager the equivalent address is smb://pynq/xilinx). Note that both the username and the password are xilinx.

4. Test code

(1) Testing real-time frame transfer from the camera to HDMI

from pynq.overlays.base import BaseOverlay
from pynq.lib.video import *

print("base.bit")
base = BaseOverlay("base.bit")

# monitor configuration: 640*480 @ 60Hz
Mode = VideoMode(640, 480, 24)
hdmi_out = base.video.hdmi_out
hdmi_out.configure(Mode, PIXEL_BGR)
hdmi_out.start()

# monitor (output) frame buffer size
frame_out_w = 1920
frame_out_h = 1080
# camera (input) configuration
frame_in_w = 640
frame_in_h = 480

# initialize camera from OpenCV
import cv2

videoIn = cv2.VideoCapture(0)
videoIn.set(cv2.CAP_PROP_FRAME_WIDTH, frame_in_w)
videoIn.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_in_h)

print("Capture device is open: " + str(videoIn.isOpened()))

# capture webcam images and display them via HDMI out
import numpy as np

while True:
    ret, frame_vga = videoIn.read()
    # Display webcam image via HDMI Out
    if ret:
        outframe = hdmi_out.newframe()
        outframe[0:480, 0:640, :] = frame_vga[0:480, 0:640, :]
        hdmi_out.writeframe(outframe)
    else:
        raise RuntimeError("Failed to read from camera.")

Test result: the video is very smooth.
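
To put a number on "smooth", here is a short sketch (my addition, not in the original) that measures the end-to-end frame rate over a fixed number of frames:

import time

# Measure the average camera-to-HDMI throughput over 300 frames.
frames = 0
start = time.time()
while frames < 300:
    ret, frame_vga = videoIn.read()
    if not ret:
        break
    outframe = hdmi_out.newframe()
    outframe[0:480, 0:640, :] = frame_vga[0:480, 0:640, :]
    hdmi_out.writeframe(outframe)
    frames += 1

print("average FPS: {:.1f}".format(frames / (time.time() - start)))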


(2) Real-time object detection code. The official Tiny-YOLO example implements its network layers in the PL through a bitstream generated with Vivado, and the HDMI output is likewise a PL circuit generated from a bitstream, mapping RGB images to the HDMI interface. A single PYNQ board cannot load two bitstreams at the same time, so combining them would require building a custom YOLO+HDMI design yourself. For now, real-time YOLO detection with HDMI output cannot be tested together; the options are standalone YOLO detection displayed in Jupyter, or pure-PS (ARM-only) OpenCV detection with HDMI output. Below is an attempt at combined code: the video stream plays normally, but no detection information appears in the output.

import sys
import os, platform
import json
import numpy as np
import cv2
import ctypes
from matplotlib import pyplot as plt
from PIL import Image
from datetime import datetime
from qnn import TinierYolo
from qnn import utils
sys.path.append("/opt/darknet/python/")
from darknet import *
#%matplotlib inline
import IPython
from pynq.overlays.base import BaseOverlay
from pynq.lib.video import *

classifier = TinierYolo()
classifier.init_accelerator()
net = classifier.load_network(json_layer="/usr/local/lib/python3.6/dist-packages/qnn/params/tinier-yolo-layers.json")

conv0_weights = np.load('/usr/local/lib/python3.6/dist-packages/qnn/params/tinier-yolo-conv0-W.npy', encoding="latin1", allow_pickle=True)
conv0_weights_correct = np.transpose(conv0_weights, axes=(3, 2, 1, 0))
conv8_weights = np.load('/usr/local/lib/python3.6/dist-packages/qnn/params/tinier-yolo-conv8-W.npy', encoding="latin1", allow_pickle=True)
conv8_weights_correct = np.transpose(conv8_weights, axes=(3, 2, 1, 0))
conv0_bias = np.load('/usr/local/lib/python3.6/dist-packages/qnn/params/tinier-yolo-conv0-bias.npy', encoding="latin1", allow_pickle=True)
conv0_bias_broadcast = np.broadcast_to(conv0_bias[:,np.newaxis], (net['conv1']['input'][0],net['conv1']['input'][1]*net['conv1']['input'][1]))
conv8_bias = np.load('/usr/local/lib/python3.6/dist-packages/qnn/params/tinier-yolo-conv8-bias.npy', encoding="latin1", allow_pickle=True)
conv8_bias_broadcast = np.broadcast_to(conv8_bias[:,np.newaxis], (125,13*13))

file_name_cfg = c_char_p("/usr/local/lib/python3.6/dist-packages/qnn/params/tinier-yolo-bwn-3bit-relu-nomaxpool.cfg".encode())

net_darknet = lib.parse_network_cfg(file_name_cfg)

out_dim = net['conv7']['output'][1]
out_ch = net['conv7']['output'][0]
img_folder = '/home/xilinx/jupyter_notebooks/qnn/yoloimages'
file_name_out = c_char_p("/home/xilinx/jupyter_notebooks/qnn/detection".encode())
file_name_probs = c_char_p("/home/xilinx/jupyter_notebooks/qnn/probabilities.txt".encode())
file_names_voc = c_char_p("/opt/darknet/data/voc.names".encode())
tresh = c_float(0.3)
tresh_hier = c_float(0.5)
darknet_path = c_char_p("/opt/darknet/".encode())

conv_output = classifier.get_accel_buffer(out_ch, out_dim)

base = BaseOverlay("base.bit")
# monitor configuration: 640*480 @ 60Hz
Mode = VideoMode(640,480,24)
hdmi_out = base.video.hdmi_out
hdmi_out.configure(Mode,PIXEL_BGR)
hdmi_out.start()

while(1):
    for image_name in os.listdir(img_folder):
        img_file = os.path.join(img_folder, image_name)
        file_name = c_char_p(img_file.encode())

        img = load_image(file_name,0,0)
        img_letterbox = letterbox_image(img,416,416)
        img_copy = np.copy(np.ctypeslib.as_array(img_letterbox.data, (3,416,416)))
        img_copy = np.swapaxes(img_copy, 0,2)
        free_image(img)
        free_image(img_letterbox)

        #First convolution layer in sw
        if len(img_copy.shape) < 4:
            img_copy = img_copy[np.newaxis, :, :, :]
        conv0_ouput = utils.conv_layer(img_copy, conv0_weights_correct, b=conv0_bias_broadcast, stride=2, padding=1)
        conv0_output_quant = conv0_ouput.clip(0.0, 4.0)

        # Offload to hardware
        conv_input = classifier.prepare_buffer(conv0_output_quant*7)
        classifier.inference(conv_input, conv_output)
        conv7_out = classifier.postprocess_buffer(conv_output)

        # Last convolution layer in sw
        conv7_out = conv7_out.reshape(out_dim, out_dim, out_ch)
        conv7_out = np.swapaxes(conv7_out, 0, 1)  # exp
        if len(conv7_out.shape) < 4:
            conv7_out = conv7_out[np.newaxis, :, :, :]
        conv8_output = utils.conv_layer(conv7_out, conv8_weights_correct, b=conv8_bias_broadcast, stride=1)
        conv8_out = conv8_output.ctypes.data_as(ctypes.POINTER(ctypes.c_float))

        # Draw detection boxes
        lib.forward_region_layer_pointer_nolayer(net_darknet, conv8_out)
        lib.draw_detection_python(net_darknet, file_name, tresh, tresh_hier, file_names_voc, darknet_path, file_name_out, file_name_probs)

        # Display result
        IPython.display.clear_output(1)
        file_content = open(file_name_probs.value, "r").read().splitlines()
        detections = []
        for line in file_content[0:]:
            name, probability = line.split(": ")
            detections.append((probability, name))
        for det in sorted(detections, key=lambda tup: tup[0], reverse=True):
            print("class: {}\tprobability: {}".format(det[1], det[0]))
        print("-----")
        res = Image.open(file_name_out.value.decode() + ".png")
        frame_vga = cv2.cvtColor(np.asarray(res), cv2.COLOR_RGB2BGR)
        frame_vga = cv2.resize(frame_vga, (640, 480))
        outframe = hdmi_out.newframe()
        outframe[:] = frame_vga
        hdmi_out.writeframe(outframe)
        # res = Image.open(file_name_out.value.decode() + ".png")
        # display(res)

Test result: frames are processed in real time, but no detection results appear on the HDMI output. The problem remains that the HDMI PL design and the YOLO PL design exist as separate, mutually exclusive bitstreams.
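
The constraint can be illustrated with the PYNQ Overlay API: programming the PL with a bitstream overwrites whatever design was loaded before. The sketch below is illustrative only, and the YOLO bitstream file name is hypothetical (the qnn package actually loads its bitstream internally via init_accelerator()):

from pynq import Overlay

# Loading base.bit puts the HDMI pipeline into the PL.
base = Overlay("base.bit")
hdmi_out = base.video.hdmi_out  # usable now

# Programming the PL with a second bitstream (hypothetical file name)
# would replace base.bit, and the HDMI pipeline with it:
# yolo = Overlay("tinier-yolo.bit")  # after this, hdmi_out no longer works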

(3) Testing OpenCV for now; I'll dig in properly once I've downloaded Vivado again. I've been busy lately.


Test code; you can just use PyCharm Professional with a remote connection to the board for testing.

import cv2
from pynq.overlays.base import BaseOverlay
from pynq.lib.video import *
print("base.bit")
base = BaseOverlay("base.bit")
# monitor configuration: 640*480 @ 60Hz
Mode = VideoMode(640, 480, 24)
hdmi_out = base.video.hdmi_out
hdmi_out.configure(Mode, PIXEL_BGR)
hdmi_out.start()

# camera (input) configuration
frame_in_w = 640
frame_in_h = 480

videoIn = cv2.VideoCapture(0)
videoIn.set(cv2.CAP_PROP_FRAME_WIDTH, frame_in_w)
videoIn.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_in_h)
print("capture device is open: " + str(videoIn.isOpened()))
face_cascade = cv2.CascadeClassifier(
    '/home/xilinx/jupyter_notebooks/base/video/data/'
    'haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier(
    '/home/xilinx/jupyter_notebooks/base/video/data/'
    'haarcascade_eye.xml')

while True:
    ret, frame_vga = videoIn.read()
    if not ret:
        break
    np_frame = frame_vga
    gray = cv2.cvtColor(frame_vga, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)

    for (x,y,w,h) in faces:
        cv2.rectangle(np_frame,(x,y),(x+w,y+h),(255,0,0),2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = np_frame[y:y+h, x:x+w]

        eyes = eye_cascade.detectMultiScale(roi_gray)
        for (ex,ey,ew,eh) in eyes:
            cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)

    # Display the annotated frame via HDMI out (ret is always true here,
    # since the loop breaks above when the camera read fails).
    outframe = hdmi_out.newframe()
    outframe[:] = np_frame
    hdmi_out.writeframe(outframe)

[Figure: video stream test with face detection]

(4) An experiment with HDMI input and HDMI output on the PYNQ-Z2 board. I won't post the code; it is one of the bundled demos.

[Figure: original screen]

[Figure: screen after filter processing]

Original: https://blog.csdn.net/sxj731533730/article/details/123770711
Author: 天昼AI实验室
Title: 24、window11下,使用PYNQ-Z2开发板进行目标检测和识别
