# Computer Vision Project: Automatic Bank Card Number Recognition

😊😊😊 Welcome to this blog 😊😊😊

🎉 About the author: ⭐️⭐️⭐️ I am currently a graduate student in computer science, focusing on artificial intelligence and swarm intelligence algorithms. I am familiar with Python web scraping, machine learning, computer vision (OpenCV), and swarm intelligence algorithms, and I am now studying deep learning. Later I may also get into network security, which is every computer student's dream, after all! Once the computer vision series is complete, I will post on deep learning topics.
📝 Current updates: 🌟🌟🌟 I have already published posts on web scraping, machine learning, and small algorithm case studies, and I am currently updating the computer vision (OpenCV) series.

Contents:

- 🐨 Foreword
- 🐨 Project Overview
- 🐨 Step-by-Step Outline
- 🐨 Full Walkthrough

## 🐨 Foreword

In the previous posts we covered all the basic image-processing operations in computer vision, so here we will consolidate that foundation with a few small projects. The bank card number recognition project on this blog shares its core ideas with license plate recognition, express waybill number recognition, and similar projects; once you master this one, you effectively have the blueprint for many others. Let's get into today's project.

## 🐨 Project Overview

First of all, we need a digit template image (different projects need different templates; license plate templates are another common kind). What is this template for? We want OpenCV to recognize the digits on a bank card, but how does the computer know what each digit looks like? It has to learn from somewhere: after "studying" the template, it knows which shape is a 1 and which is a 2.

We then match each digit extracted from the credit card against the learned template digits one by one, asking the computer to give each match a score. For example, suppose we feed in an image of an 8: matching it against the template 1 might score 32, against 2 might score 18, and against 8 might score 88. Since 8 scores highest, the computer concludes the digit is an 8, and we look up the corresponding index in the template. (Note that the font and appearance of the template digits should match the target as closely as possible.) Because extracted regions and template digits must be compared at the same size, we apply a resize operation to every extracted region.
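The scoring idea can be sketched without OpenCV at all. Below is a minimal NumPy illustration (the tiny 3×3 "digit templates" are made up purely for the example) of picking the digit whose template correlates best with the input patch:

```python
import numpy as np

def match_score(patch, template):
    """Correlation-coefficient score, the idea behind cv2.TM_CCOEFF_NORMED."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom else 0.0

# Toy 3x3 "digit templates" (hypothetical, just for illustration)
templates = {
    "1": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=float),
    "7": np.array([[1, 1, 1], [0, 0, 1], [0, 1, 0]], dtype=float),
}

patch = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=float)  # looks like a "1"
scores = {digit: match_score(patch, t) for digit, t in templates.items()}
best = max(scores, key=scores.get)
print(best)  # the template with the highest score wins
```

The real project uses `cv2.matchTemplate` for this scoring, but the "highest score wins" logic is exactly the same.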

## 🐨 Step-by-Step Outline

1. Run contour detection (external contours only) on both the template and the input image.
2. Compute the bounding rectangle of each contour.
3. Crop each bounding rectangle out of the template.
4. Use the aspect ratio (width/height) of the rectangles to pick out the digit groups on the credit card.
5. Split each group into individual digits and resize them to the same size as the template digits.
6. Match each digit in turn with a for loop.

## 🐨 Full Walkthrough

Let's go straight to the code. First, take a look at the image we want to process.

First we define a small display helper, so we don't have to repeat the same three lines every time we want to show an image:

```python
def cv_show(name, img):
    cv2.imshow(name, img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
```


Then we import the third-party libraries and set up the input arguments. The bank card types are also defined here, keyed by the first digit of the card number:

```python
# Import the toolkit
from imutils import contours
import numpy as np
import argparse
import cv2
import myutils

# Set up the command-line arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to input image")
ap.add_argument("-t", "--template", required=True,
    help="path to template OCR-A image")
args = vars(ap.parse_args())

# Map the first digit of the card number to the card type
FIRST_NUMBER = {
    "3": "American Express",
    "4": "Visa",
    "5": "MasterCard",
    "6": "Discover Card"
}
```


A note for anyone running this in PyCharm: be sure to pass the image paths as script parameters. `-i` specifies the path to the input image and must point at the image file itself (e.g. `123.jpg`), and `-t` specifies the path to the template image. If anything here is unclear, feel free to message me.

```
-i C:\Users\jzdx\Desktop\OpenCV\xinyongka\template-matching-ocr\images\credit_card_01.png
-t C:\Users\jzdx\Desktop\OpenCV\xinyongka\template-matching-ocr\images\ocr_a_reference.png
```


Next we read the template, convert it to grayscale with `cv2.cvtColor` (which supports a great many color-space conversions), and binarize it with an inverted threshold so the digits become white on black:

```python
# Read the template image
img = cv2.imread(args["template"])
cv_show('img', img)
# Convert to grayscale
ref = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv_show('ref', ref)
# Inverted binary threshold: digits white, background black
ref = cv2.threshold(ref, 10, 255, cv2.THRESH_BINARY_INV)[1]
cv_show('ref', ref)
```


With the template grayscaled and binarized, we extract its external contours and draw them on the original image:

```python
# External contours only; CHAIN_APPROX_SIMPLE compresses the contour points
refCnts, hierarchy = cv2.findContours(ref.copy(), cv2.RETR_EXTERNAL,
                                      cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(img, refCnts, -1, (0, 0, 255), 3)
cv_show('img', img)
```


These are the contours extracted from the template. We now sort them from left to right:

```python
refCnts = myutils.sort_contours(refCnts, method="left-to-right")[0]
```


How do we sort the contours? By the x-coordinate (abscissa) of each contour's bounding box. Let's jump into the myutils module:

```python
import cv2

def sort_contours(cnts, method="left-to-right"):
    reverse = False
    i = 0  # sort key index: 0 = x-coordinate, 1 = y-coordinate
    if method == "right-to-left" or method == "bottom-to-top":
        reverse = True
    if method == "top-to-bottom" or method == "bottom-to-top":
        i = 1
    # Bounding box of each contour: (x, y, w, h)
    boundingBoxes = [cv2.boundingRect(c) for c in cnts]
    # Sort contours and boxes together by the chosen coordinate
    (cnts, boundingBoxes) = zip(*sorted(zip(cnts, boundingBoxes),
                                        key=lambda b: b[1][i], reverse=reverse))
    return cnts, boundingBoxes
```


The point of this sorting step is to put each digit in its proper position; without it, the template contours come back in an arbitrary order (the 1 would not necessarily land at index 1). Sorting contours like this is a very common operation and worth mastering.
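The core trick is sorting zipped pairs by a bounding-box coordinate. Here is a minimal pure-Python sketch with made-up boxes standing in for contours:

```python
# Hypothetical (x, y, w, h) bounding boxes, deliberately out of order
boxes = [(120, 5, 20, 30), (10, 5, 20, 30), (65, 5, 20, 30)]
labels = ["c", "a", "b"]  # pretend contours, one per box

# Sort both lists together by the x-coordinate (b[1][0]), as sort_contours does
labels, boxes = zip(*sorted(zip(labels, boxes), key=lambda b: b[1][0]))
print(labels)  # now in left-to-right order
```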

```python
digits = {}

# Walk the sorted template contours; since the template shows 0-9 left to
# right, index i is also the digit's value
for (i, c) in enumerate(refCnts):
    # Bounding rectangle of the current digit
    (x, y, w, h) = cv2.boundingRect(c)
    roi = ref[y:y + h, x:x + w]
    # Resize every template digit to the same fixed size
    roi = cv2.resize(roi, (57, 88))
    digits[i] = roi

# Structuring elements for the morphological operations below
rectKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 3))
sqKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
```


```python
# Read the input image and resize it to a fixed width
image = cv2.imread(args["image"])
cv_show('image', image)
image = myutils.resize(image, width=300)
# Grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv_show('gray', gray)
# Top-hat operation highlights the brighter regions (the digits)
tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, rectKernel)
cv_show('tophat', tophat)
# Sobel gradient in the x direction
gradX = cv2.Sobel(tophat, ddepth=cv2.CV_32F, dx=1, dy=0, ksize=-1)
# Normalize the gradient to 0-255 and convert to uint8, since the Otsu
# threshold used later requires an 8-bit image
gradX = np.absolute(gradX)
(minVal, maxVal) = (np.min(gradX), np.max(gradX))
gradX = (255 * ((gradX - minVal) / (maxVal - minVal))).astype("uint8")
```


Here we preprocess the incoming image: conversion to grayscale, a top-hat operation to highlight the brighter regions, and then a Sobel operation. Sobel simply means convolving the image with a specific gradient kernel.
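Since Sobel is just a convolution, the idea can be shown in a few lines of NumPy (a toy 4×4 image with one vertical edge; no OpenCV needed):

```python
import numpy as np

# Toy image: dark left half, bright right half -> one vertical edge
img = np.array([[0, 0, 9, 9],
                [0, 0, 9, 9],
                [0, 0, 9, 9],
                [0, 0, 9, 9]], dtype=float)

# The 3x3 Sobel kernel for horizontal gradients (dx=1, dy=0)
kx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)

# Valid-mode sliding-window correlation, as cv2.filter2D computes it
h, w = img.shape
out = np.zeros((h - 2, w - 2))
for y in range(h - 2):
    for x in range(w - 2):
        out[y, x] = (img[y:y + 3, x:x + 3] * kx).sum()

print(out)  # large values mark the vertical edge
```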

Next comes a **closing operation**: dilate, then erode. What is the purpose? We want the digits within each of the four groups on the bank card to merge into a single connected blob, so that each group can be found and extracted as one region. That is exactly what we are after.
```python
# Closing operation (dilate then erode) to merge each group of digits
gradX = cv2.morphologyEx(gradX, cv2.MORPH_CLOSE, rectKernel)
cv_show('gradX', gradX)
```
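To see why closing merges nearby blobs, here is a tiny NumPy sketch of dilate-then-erode on a 1-D binary strip (a 3-wide flat kernel and simple zero padding are assumed, a simplification of what OpenCV does in 2-D):

```python
import numpy as np

def dilate(a, k=3):
    """Each position becomes the max over a k-wide window (binary dilation)."""
    pad = k // 2
    p = np.pad(a, pad)
    return np.array([p[i:i + k].max() for i in range(len(a))])

def erode(a, k=3):
    """Each position becomes the min over a k-wide window (binary erosion)."""
    pad = k // 2
    p = np.pad(a, pad)
    return np.array([p[i:i + k].min() for i in range(len(a))])

# Two blobs separated by a 1-pixel gap
strip = np.array([0, 1, 1, 0, 1, 1, 0])
closed = erode(dilate(strip))
print(closed)  # the gap between the blobs has been filled
```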


The binarization here lets OpenCV automatically find an appropriate threshold value rather than us hard-coding one, by passing 0 as the threshold together with the THRESH_OTSU flag:

```python
# Otsu's method picks the threshold automatically (hence threshold=0)
thresh = cv2.threshold(gradX, 0, 255,
                       cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
cv_show('thresh', thresh)
```


The key point of this code is the threshold operation: we need to turn the image into a binary image, but instead of choosing a fixed threshold ourselves, setting the threshold to 0 with the THRESH_OTSU flag lets the computer identify the optimal threshold automatically.
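Otsu's method chooses the threshold that maximizes the between-class variance of the two resulting pixel groups. Here is a minimal NumPy sketch of that search, run on a made-up bimodal pixel sample:

```python
import numpy as np

def otsu_threshold(pixels):
    """Return the threshold t maximizing between-class variance.

    Pixels <= t go to class 0, pixels > t to class 1.
    """
    pixels = np.asarray(pixels, dtype=float)
    best_t, best_var = 0, -1.0
    for t in range(0, 255):
        lo, hi = pixels[pixels <= t], pixels[pixels > t]
        if len(lo) == 0 or len(hi) == 0:
            continue
        w0, w1 = len(lo) / len(pixels), len(hi) / len(pixels)
        var = w0 * w1 * (lo.mean() - hi.mean()) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Two clear clusters: dark pixels around 20, bright pixels around 200
sample = [18, 20, 22, 25, 198, 200, 202, 205]
t = otsu_threshold(sample)
print(t)  # lands between the two clusters
```

OpenCV computes this from the image histogram rather than per-pixel, but the criterion being maximized is the same.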

The result is as follows. What are we doing here? The goal is to extract the regions we want. At this point the first and fourth digit groups have merged into connected blobs, but the second and third groups still contain black gaps, so we apply a closing operation once more to fill them all with white (255):

```python
# A second closing operation, with the square kernel, fills the
# remaining black gaps inside the digit groups
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, sqKernel)
cv_show('thresh', thresh)
```

```python
# Find external contours on the preprocessed binary image
threshCnts, hierarchy = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)

cnts = threshCnts
cur_img = image.copy()
# Draw the contours on a copy of the original (resized) image
cv2.drawContours(cur_img, cnts, -1, (0, 0, 255), 3)
cv_show('img', cur_img)
```


After preprocessing, we extract contours from the result: the contour information is computed on the binary image we produced, then drawn on the original image. But with this many contours, which ones do we want? We need a filtering step, and the exact criteria depend on your own project; expect to try a few values before they fit.

```python
locs = []

# Walk the contours and keep only the digit-group rectangles
for (i, c) in enumerate(cnts):
    (x, y, w, h) = cv2.boundingRect(c)
    ar = w / float(h)
    # The four-digit groups have a characteristic aspect ratio...
    if ar > 2.5 and ar < 4.0:
        # ...and a characteristic absolute size
        if (w > 40 and w < 55) and (h > 10 and h < 20):
            locs.append((x, y, w, h))

# Sort the surviving rectangles left to right
locs = sorted(locs, key=lambda x: x[0])
output = []
```


First we compute each bounding rectangle, then keep only the regions whose shape fits the task; on this card each group is a block of four digits. Filter, then sort the survivors from left to right.
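The filter boils down to two range checks per box. A pure-Python sketch with made-up rectangles (two plausible digit groups plus two distractors):

```python
# Hypothetical (x, y, w, h) candidates
cands = [(5, 3, 10, 10), (150, 100, 48, 15), (30, 100, 46, 14), (200, 10, 90, 12)]

locs = []
for (x, y, w, h) in cands:
    ar = w / float(h)  # aspect ratio
    # Same thresholds as the project code above
    if 2.5 < ar < 4.0 and 40 < w < 55 and 10 < h < 20:
        locs.append((x, y, w, h))

locs = sorted(locs, key=lambda b: b[0])  # left-to-right
print(locs)  # only the two digit-group-shaped boxes survive
```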

```python
# Walk the digit groups
for (i, (gX, gY, gW, gH)) in enumerate(locs):
    groupOutput = []
    # Extract the group from the grayscale image, with a small margin
    group = gray[gY - 5:gY + gH + 5, gX - 5:gX + gW + 5]
    cv_show('group', group)
    # Binarize the group with Otsu's method
    group = cv2.threshold(group, 0, 255,
                          cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
    cv_show('group', group)
    # Find each digit's contour inside the group and sort left to right
    digitCnts, hierarchy = cv2.findContours(group.copy(), cv2.RETR_EXTERNAL,
                                            cv2.CHAIN_APPROX_SIMPLE)
    digitCnts = contours.sort_contours(digitCnts,
                                       method="left-to-right")[0]
```


We traverse the four groups, preprocessing each one again: extraction from the grayscale image, binarization, contour detection, and sorting, so each of the digits can be isolated and displayed.

```python
    # Walk the digits within the current group
    for c in digitCnts:
        (x, y, w, h) = cv2.boundingRect(c)
        roi = group[y:y + h, x:x + w]
        # Resize to the same size as the template digits
        roi = cv2.resize(roi, (57, 88))
        cv_show('roi', roi)

        # Score the ROI against every template digit
        scores = []
        for (digit, digitROI) in digits.items():
            result = cv2.matchTemplate(roi, digitROI, cv2.TM_CCOEFF)
            (_, score, _, _) = cv2.minMaxLoc(result)
            scores.append(score)

        # The best-scoring template wins
        groupOutput.append(str(np.argmax(scores)))
```


After traversing the digits, each one has been compared against the template and assigned the best-scoring result.

Finally, we draw the recognized digits onto the original image and print the card type and number.
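The reporting step (the cv2.rectangle/cv2.putText drawing calls are omitted here) boils down to joining the recognized groups and looking up the card type by the first digit. A pure-Python sketch with a made-up recognition result:

```python
# Map the first digit of the card number to the card type (as in the script)
FIRST_NUMBER = {
    "3": "American Express",
    "4": "Visa",
    "5": "MasterCard",
    "6": "Discover Card",
}

# Hypothetical recognition result: four groups of four digits
output = ["4000", "1234", "5678", "9010"]

card_number = "".join(output)
card_type = FIRST_NUMBER.get(card_number[0], "Unknown")
print("Credit Card Type: {}".format(card_type))
print("Credit Card #: {}".format(card_number))
```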

Perfect! That is the whole of our bank card recognition project. To consolidate this material, we later did a license plate recognition project that follows roughly the same procedure; we will post it in a future update.

🔎 Support: 🎁🎁🎁 If you found this post useful, following the blog is free, and likes, bookmarks, and shares are even better. That is the greatest support you can give me!

Original: https://blog.csdn.net/m0_37623374/article/details/125452464
Author: 吃猫的鱼python
Title: 计算机视觉项目-银行卡卡号自动识别
