Freelance Project | Classifying the CK/jaffe/fer2013 Facial Expression Datasets with LBP/HOG/CNN


Feel free to download from CSDN, so I can earn a few points 😆
If the CSDN download doesn't work, you can use either of the two cloud-drive links below:

Quark drive link, extraction code: JYUN
Baidu drive link, extraction code: 0fnp

All three datasets have been preprocessed: images are resized to (48, 48) and split into training and test sets. Help yourself if you need them.


2. Code

2-1. LBP and HOG

a. Reading the dataset

import os

import cv2


def readData(dataName, label2id):
    """Read all grayscale training images of a dataset; folder names are the class labels."""
    X, Y = [], []
    path = f'./{dataName}/train'
    for label in os.listdir(path):  # one sub-folder per expression class
        for image in os.listdir(os.path.join(path, label)):
            # read as grayscale and normalize pixel values to [0, 1]
            img = cv2.imread(os.path.join(path, label, image), cv2.IMREAD_GRAYSCALE)
            img = img / 255.0
            X.append(img)
            Y.append(label2id[label])
    return X, Y
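
For reference, the label2id mapping can be built straight from the training folder names. A minimal sketch (the sorted-folder construction is my assumption; the original post doesn't show how the mapping was created):

import os

dataName = 'CK'
# assume one sub-folder per class under ./CK/train, e.g. angry/, happy/, ...
labels = sorted(os.listdir(f'./{dataName}/train'))
label2id = {label: i for i, label in enumerate(labels)}

X, Y = readData(dataName, label2id)
print(len(X), 'images,', len(set(Y)), 'classes')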

b. LBP

The LBP algorithm mainly follows a Zhihu write-up; I rewrote the code as follows:

import numpy as np


def lbpSingle(img):
    """Compute a basic 3x3 LBP map for one image and return it as a flat feature vector."""
    h, w = img.shape
    lbp = np.zeros(img.shape)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            # compare the 3x3 neighbourhood against the centre pixel
            bits = (img[i - 1:i + 2, j - 1:j + 2] > img[i][j]).astype(np.int8)
            bits = bits.reshape(9)
            # drop the centre pixel itself, leaving the 8 neighbours
            bits = np.delete(bits, 4)
            # reorder the neighbours into the ring order used for the binary code
            bits[3], bits[4] = bits[4], bits[3]
            bits[5], bits[7] = bits[7], bits[5]
            # interpret the 8 bits as a binary number -> LBP value of this pixel
            lbp[i][j] = int(''.join('%s' % b for b in bits), 2)
    return [lbp.flatten()]

def lbpBatch(imgs):
    # extract the LBP feature vector of every image in the batch
    return [lbpSingle(img)[0] for img in imgs]
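
As a side note, scikit-image ships an equivalent (and much faster) LBP implementation. A minimal sketch of how it could replace the hand-written loop, assuming P=8 neighbours at radius R=1 to match the 3x3 neighbourhood (note the neighbour ordering, and hence the exact codes, can differ from the version above):

from skimage.feature import local_binary_pattern

def lbpSingleSkimage(img):
    # 8 neighbours sampled at radius 1, classic (non-uniform) LBP codes
    lbp = local_binary_pattern(img, P=8, R=1, method='default')
    return [lbp.flatten()]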

c. HOG

For HOG we directly call scikit-image's hog function (note: hog lives in skimage.feature, not sklearn):

from skimage.feature import hog


def hogSingle(img):
    # on a 48x48 input: 8x8 cells of 6x6 px, 3x3 sliding block positions -> 2916 features
    feature, _ = hog(img, orientations=9, pixels_per_cell=(6, 6), cells_per_block=(6, 6),
                     block_norm='L2-Hys', visualize=True)
    return [feature]


def hogBatch(imgs):
    # reuse hogSingle instead of duplicating the hog() call
    return [hogSingle(img)[0] for img in imgs]
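
A quick sanity check of the feature length on a 48x48 input (a sketch; the 2916 figure follows from 3x3 block positions x 6x6 cells per block x 9 orientations):

import numpy as np

dummy = np.zeros((48, 48))
print(len(hogSingle(dummy)[0]))  # expected: 3 * 3 * 6 * 6 * 9 = 2916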

d. Classifier models

import sklearn.svm
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier


def prepareModel(name):
    """Return an untrained sklearn classifier selected by name."""
    if name == 'svm':
        m = sklearn.svm.SVC(C=2, kernel='rbf', gamma=10, decision_function_shape='ovr')
    elif name == 'knn':
        m = KNeighborsClassifier(n_neighbors=1)
    elif name == 'dt':
        m = DecisionTreeClassifier()
    elif name == 'nb':
        m = GaussianNB()
    elif name == 'lg':
        m = LogisticRegression()
    else:  # default: random forest
        m = RandomForestClassifier(n_estimators=180, random_state=0)
    return m
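
Putting the pieces together, a minimal end-to-end run might look like the sketch below (the train/test split and the accuracy report are my additions; the original post does not show this glue code):

from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, Y = readData('CK', label2id)  # label2id built as shown earlier
features = hogBatch(X)           # or lbpBatch(X)
xTrain, xTest, yTrain, yTest = train_test_split(features, Y, test_size=0.2, random_state=0)

model = prepareModel('svm')
model.fit(xTrain, yTrain)
print('accuracy:', accuracy_score(yTest, model.predict(xTest)))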

2-2. CNN

a. Preparing the dataset


from tensorflow.keras.preprocessing.image import ImageDataGenerator

whichDataSet = 'CK'
trainDir = f'./{whichDataSet}/train/'
# augment the training images and hold out 25% of them for validation
trainingDataGenerator = ImageDataGenerator(
    rescale=1. / 255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    validation_split=0.25,
    horizontal_flip=True,
    fill_mode='nearest'
)
trainGenerator = trainingDataGenerator.flow_from_directory(
    trainDir, subset='training', target_size=(48, 48), class_mode='categorical'
)
validGenerator = trainingDataGenerator.flow_from_directory(
    trainDir, subset='validation', target_size=(48, 48), class_mode='categorical'
)
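
The prediction step further below loads a class2id.pkl file that maps prediction ids back to class names. One way it could be produced from the generator (my assumption, since the post doesn't show this step):

import pickle

# class_indices maps class name -> index; invert it so an index maps back to its name
class2id = {v: k for k, v in trainGenerator.class_indices.items()}
with open(f'./{whichDataSet}/class2id.pkl', 'wb') as f:
    pickle.dump(class2id, f)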

b. Building the model

A three-layer CNN is used here:


import tensorflow as tf
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = tf.keras.models.Sequential([
    # block 1: 48x48x3 -> 24x24x16
    Conv2D(16, (5, 5), activation='relu', input_shape=(48, 48, 3), padding='same'),
    MaxPooling2D(2, 2),
    # block 2: 24x24x16 -> 12x12x32
    Conv2D(32, (5, 5), activation='relu', padding='same'),
    MaxPooling2D(2, 2),
    # block 3: 12x12x32 -> 6x6x32
    Conv2D(32, (5, 5), activation='relu', padding='same'),
    MaxPooling2D(2, 2),
    # classifier head: 7 expression classes
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),
    Dense(7, activation='softmax')
])
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])
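
The post doesn't show the training call itself; a minimal sketch that would produce the .h5 file loaded in the next step (the epoch count is my assumption):

history = model.fit(
    trainGenerator,
    validation_data=validGenerator,
    epochs=50  # assumed; tune per dataset
)
model.save(f'./{whichDataSet}/{whichDataSet}.h5')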

c. Running predictions on the test set

Read the unlabeled images under the test folder, feed them to the trained model in one batch, and write the predictions out to a CSV file:


import os
import pickle

import numpy as np
import pandas as pd
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image

model = load_model(f'./{whichDataSet}/{whichDataSet}.h5')
with open(f'./{whichDataSet}/class2id.pkl', 'rb') as f:
    class2id = pickle.load(f)

fileList = os.listdir(f'./{whichDataSet}/test/')
testSet = []
for file in fileList:
    img = image.load_img(f'./{whichDataSet}/test/{file}', target_size=(48, 48))
    testSet.append(image.img_to_array(img))
# match the training preprocessing (rescale=1./255)
testSet = np.array(testSet) / 255.0

modelOutput = model.predict(testSet, batch_size=10)

# pick the most likely class per image and map the id back to its label name;
# note: DataFrame.append returns a new frame (and is removed in pandas 2.x),
# so build the rows first and construct the frame in one go
getLabel = np.argmax(modelOutput, axis=1).flatten()
rows = [{'file': fileList[i], 'label': class2id[getLabel[i]]} for i in range(len(getLabel))]
df = pd.DataFrame(rows, columns=['file', 'label'])
df.to_csv(f'./{whichDataSet}/result.csv', index=False)

Full code

The full code is on Gitee (click to jump); help yourself, and a ⭐️star⭐️ is welcome!

Since this code was just a small graduation-project job done for someone else, I never dug deeply into the HOG and LBP algorithms themselves; this blog post is only a record, plus a way to share the datasets and code.
If you can't download the datasets, leave your email address in the comments below and I'll send them to you by email once I see it.


Original: https://blog.csdn.net/JackyAce6880/article/details/124451551
Author: Caffiny
Title: Freelance Project | Classifying the CK/jaffe/fer2013 Facial Expression Datasets with LBP/HOG/CNN

