# A Comparative Study of Deep-Learning Weather Recognition Algorithms (TensorFlow CNN Implementation) | Example 1 (source code and data included)

🥧 My environment

• Language: Python 3
• Deep-learning framework: TensorFlow 2

🥂 Related tutorials

Before studying this article, it is recommended that you read the following introductory articles so that you can follow this one more easily:

🍰 Key point: this article compares several algorithms side by side. Each algorithm has its own independent learning rate, which you can tune freely. The code also produces comparative analyses of Accuracy, Loss, Recall, Precision, and AUC; you only need to choose the models, metrics, and dataset you want to compare.

🍡 Things you can further explore with this code:

• the impact of the same learning rate on different models
• the performance of the same model at different learning rates
• the role of the Dropout layer (mitigating overfitting)
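On the last point: Dropout randomly zeroes a fraction of activations during training and rescales the survivors, which discourages units from co-adapting and so reduces overfitting. A minimal pure-Python sketch of inverted dropout (the variant Keras uses), independent of the models below:

```python
import random

def dropout(activations, p, training=True):
    """Inverted dropout: zero each activation with probability p during
    training and scale survivors by 1/(1-p), so inference needs no rescaling."""
    if not training or p == 0.0:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if random.random() < keep else 0.0 for a in activations]

random.seed(0)
out = dropout([1.0, 1.0, 1.0, 1.0], p=0.5)
# Every surviving unit is scaled to 2.0; dropped units are exactly 0.0
assert all(v in (0.0, 2.0) for v in out)
```

In the Keras models below this corresponds to inserting a `keras.layers.Dropout(p)` layer between the Dense layers.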

🍳 Results preview:

Our code flow chart is as follows:

## 🍔 Setup

import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
from tensorflow import keras

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    tf.config.experimental.set_memory_growth(gpus[0], True)
    tf.config.set_visible_devices([gpus[0]], "GPU")

# Render Chinese characters and minus signs correctly in matplotlib plots
plt.rcParams['font.sans-serif'] = ['SimHei']
plt.rcParams['axes.unicode_minus'] = False

print(gpus)

[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]


### 🥗 Loading the data

"""

"""
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
"./2-DataSet/",
validation_split=0.2,
subset="training",
label_mode = "categorical",
seed=12,
image_size=(img_height, img_width),
batch_size=batch_size)

Found 5531 files belonging to 9 classes.

Using 4425 files for training.


"""

"""
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
"./2-DataSet/",
validation_split=0.2,
subset="validation",
label_mode = "categorical",
seed=12,
image_size=(img_height, img_width),
batch_size=batch_size)

Found 5531 files belonging to 9 classes.

Using 1106 files for validation.


class_names = train_ds.class_names
print(class_names)

['dew', 'fogsmog', 'frost', 'hail', 'lightning', 'rain', 'rainbow', 'rime', 'snow']

AUTOTUNE = tf.data.AUTOTUNE

def train_preprocessing(image, label):
    # Scale pixel values to [0, 1]
    return (image / 255.0, label)

train_ds = (
    train_ds.cache()
    .map(train_preprocessing)
    .prefetch(buffer_size=AUTOTUNE)
)

val_ds = (
    val_ds.cache()
    .map(train_preprocessing)
    .prefetch(buffer_size=AUTOTUNE)
)

plt.figure(figsize=(14, 8))

for images, labels in train_ds.take(1):
    for i in range(28):
        plt.subplot(4, 7, i + 1)
        plt.xticks([])
        plt.yticks([])
        plt.grid(False)

        plt.imshow(images[i])
        plt.title(class_names[np.argmax(labels[i])])

plt.show()


### 🥙 Setting up the evaluation metrics

metrics = [
    tf.keras.metrics.CategoricalAccuracy(name='accuracy'),
    tf.keras.metrics.Precision(name='precision'),
    tf.keras.metrics.Recall(name='recall'),
    tf.keras.metrics.AUC(name='auc')
]


## 🍟 Defining the models

### 🥪 The VGG16 model

from tensorflow.keras import layers, models, Input
from tensorflow.keras.models import Model
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, Dense, Flatten,
                                     Dropout, BatchNormalization, Activation)

vgg16_base_model = tf.keras.applications.vgg16.VGG16(weights='imagenet',
                                                     include_top=False,
                                                     input_shape=(img_width, img_height, 3),
                                                     pooling='max')
for layer in vgg16_base_model.layers:
    layer.trainable = True

X = vgg16_base_model.output

output = Dense(len(class_names), activation='softmax')(X)
vgg16_model = Model(inputs=vgg16_base_model.input, outputs=output)

initial_learning_rate = 1e-4
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate,
    decay_steps=70,
    decay_rate=0.92,
    staircase=True)

# The compile call was truncated in the original post; an Adam optimizer
# driven by the schedule above is a reasonable reconstruction
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
vgg16_model.compile(optimizer=optimizer,
                    loss='categorical_crossentropy',
                    metrics=metrics)
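With `staircase=True`, the schedule above multiplies the learning rate by `decay_rate` once every `decay_steps` optimizer steps: lr(step) = initial_lr × decay_rate ** floor(step / decay_steps). A quick sanity check of the values it produces (pure Python, no TensorFlow needed):

```python
def staircase_lr(step, initial_lr=1e-4, decay_steps=70, decay_rate=0.92):
    # Equivalent to tf.keras.optimizers.schedules.ExponentialDecay
    # with staircase=True
    return initial_lr * decay_rate ** (step // decay_steps)

# The rate is constant within each 70-step window, then drops by 8%
assert staircase_lr(0) == 1e-4
assert staircase_lr(69) == 1e-4
assert abs(staircase_lr(70) - 1e-4 * 0.92) < 1e-15
assert abs(staircase_lr(140) - 1e-4 * 0.92 ** 2) < 1e-15
```

Since one epoch here is 139 batches, the learning rate decays roughly twice per epoch.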



### 🌮 The ResNet50 model


resnet50_base_model = tf.keras.applications.resnet50.ResNet50(weights='imagenet',
                                                              include_top=False,
                                                              input_shape=(img_width, img_height, 3),
                                                              pooling='max')
for layer in resnet50_base_model.layers:
    layer.trainable = True

X = resnet50_base_model.output

output = Dense(len(class_names), activation='softmax')(X)
resnet50_model = Model(inputs=resnet50_base_model.input, outputs=output)

# The compile call was truncated in the original post; reusing the shared
# `optimizer` is an assumption
resnet50_model.compile(optimizer=optimizer,
                       loss='categorical_crossentropy',
                       metrics=metrics)



### 🌯 The InceptionV3 model


InceptionV3_base_model = tf.keras.applications.inception_v3.InceptionV3(weights='imagenet',
                                                                        include_top=False,
                                                                        input_shape=(img_width, img_height, 3),
                                                                        pooling='max')
for layer in InceptionV3_base_model.layers:
    layer.trainable = True

X = InceptionV3_base_model.output

output = Dense(len(class_names), activation='softmax')(X)
InceptionV3_model = Model(inputs=InceptionV3_base_model.input, outputs=output)

# The compile call was truncated in the original post; reusing the shared
# `optimizer` is an assumption
InceptionV3_model.compile(optimizer=optimizer,
                          loss='categorical_crossentropy',
                          metrics=metrics)



### 🥫 The DenseNet121 model


DenseNet121_base_model = tf.keras.applications.densenet.DenseNet121(weights='imagenet',
                                                                    include_top=False,
                                                                    input_shape=(img_width, img_height, 3),
                                                                    pooling='max')
for layer in DenseNet121_base_model.layers:
    layer.trainable = True

X = DenseNet121_base_model.output

output = Dense(len(class_names), activation='softmax')(X)
DenseNet121_model = Model(inputs=DenseNet121_base_model.input, outputs=output)

# The compile call was truncated in the original post; reusing the shared
# `optimizer` is an assumption
DenseNet121_model.compile(optimizer=optimizer,
                          loss='categorical_crossentropy',
                          metrics=metrics)



### 🍛 The LeNet-5 model

LeNet5_model = keras.Sequential([
    keras.layers.Conv2D(6, 5),
    keras.layers.MaxPooling2D(pool_size=2, strides=2),
    keras.layers.ReLU(),

    keras.layers.Conv2D(16, 5),
    keras.layers.MaxPooling2D(pool_size=2, strides=2),
    keras.layers.ReLU(),

    keras.layers.Flatten(),

    keras.layers.Dense(500, activation='relu'),
    keras.layers.Dense(100, activation='relu'),
    keras.layers.Dense(len(class_names), activation='softmax')
])
LeNet5_model.build(input_shape=(batch_size, img_width, img_height, 3))

LeNet5_model.compile(optimizer=optimizer,
                     loss='categorical_crossentropy',
                     metrics=metrics)
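Because both Conv2D layers use 'valid' padding (the Keras default) with 5×5 kernels, the size of the flattened feature map depends directly on the input resolution. Assuming 224×224 inputs (the image size is not stated in the original post), the spatial shapes work out as follows:

```python
def lenet5_flatten_size(side):
    # Conv2D(6, 5), padding='valid': spatial side shrinks by 4
    side = side - 4
    # MaxPooling2D(pool_size=2, strides=2): floor-divide by 2
    side = side // 2
    # Conv2D(16, 5) then pool again
    side = (side - 4) // 2
    return 16 * side * side  # 16 channels after the second conv block

# With 224x224 inputs, Flatten feeds 16*53*53 = 44944 values into Dense(500)
assert lenet5_flatten_size(224) == 44944
```

That first Dense(500) layer alone then holds roughly 22 million weights, so most of this "small" network's capacity sits in its fully connected head.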


### 🍜 The MobileNetV2 model


MobileNetV2_base_model = tf.keras.applications.mobilenet_v2.MobileNetV2(weights='imagenet',
                                                                        include_top=False,
                                                                        input_shape=(img_width, img_height, 3),
                                                                        pooling='max')
for layer in MobileNetV2_base_model.layers:
    layer.trainable = True

X = MobileNetV2_base_model.output

output = Dense(len(class_names), activation='softmax')(X)
MobileNetV2_model = Model(inputs=MobileNetV2_base_model.input, outputs=output)

# The compile call was truncated in the original post; reusing the shared
# `optimizer` is an assumption
MobileNetV2_model.compile(optimizer=optimizer,
                          loss='categorical_crossentropy',
                          metrics=metrics)



### 🥡 The EfficientNetB0 model

EfficientNetB0_base_model = tf.keras.applications.efficientnet.EfficientNetB0(weights='imagenet',
                                                                              include_top=False,
                                                                              input_shape=(img_width, img_height, 3),
                                                                              pooling='max')
for layer in EfficientNetB0_base_model.layers:
    layer.trainable = True

X = EfficientNetB0_base_model.output

output = Dense(len(class_names), activation='softmax')(X)
EfficientNetB0_model = Model(inputs=EfficientNetB0_base_model.input, outputs=output)

# The compile call was truncated in the original post; reusing the shared
# `optimizer` is an assumption
EfficientNetB0_model.compile(optimizer=optimizer,
                             loss='categorical_crossentropy',
                             metrics=metrics)



## 🌭 Training the models

epochs = 30  # matches the training logs below

vgg16_history = vgg16_model.fit(train_ds, epochs=epochs, verbose=1, validation_data=val_ds)

Epoch 1/30
139/139 [==============================] - 35s 184ms/step - loss: 0.8009 - accuracy: 0.7313 - precision: 0.8380 - recall: 0.6466 - auc: 0.9579 - val_loss: 0.4339 - val_accuracy: 0.8472 - val_precision: 0.8938 - val_recall: 0.8065 - val_auc: 0.9867
Epoch 2/30
139/139 [==============================] - 18s 133ms/step - loss: 0.3605 - accuracy: 0.8744 - precision: 0.9090 - recall: 0.8447 - auc: 0.9904 - val_loss: 0.3253 - val_accuracy: 0.8933 - val_precision: 0.9251 - val_recall: 0.8599 - val_auc: 0.9924
......

Epoch 30/30
139/139 [==============================] - 19s 134ms/step - loss: 7.2972e-05 - accuracy: 1.0000 - precision: 1.0000 - recall: 1.0000 - auc: 1.0000 - val_loss: 0.3784 - val_accuracy: 0.9394 - val_precision: 0.9410 - val_recall: 0.9367 - val_auc: 0.9852

resnet50_history  = resnet50_model.fit(train_ds, epochs=epochs, verbose=1, validation_data=val_ds)

Epoch 1/30
139/139 [==============================] - 21s 124ms/step - loss: 1.5570 - accuracy: 0.8165 - precision: 0.8231 - recall: 0.8134 - auc: 0.9424 - val_loss: 8.3198 - val_accuracy: 0.0642 - val_precision: 0.0643 - val_recall: 0.0642 - val_auc: 0.4878
Epoch 2/30
139/139 [==============================] - 16s 115ms/step - loss: 0.0972 - accuracy: 0.9756 - precision: 0.9762 - recall: 0.9749 - auc: 0.9969 - val_loss: 12.0035 - val_accuracy: 0.1646 - val_precision: 0.1646 - val_recall: 0.1646 - val_auc: 0.5218
......
Epoch 30/30
139/139 [==============================] - 16s 115ms/step - loss: 3.2757e-06 - accuracy: 1.0000 - precision: 1.0000 - recall: 1.0000 - auc: 1.0000 - val_loss: 0.4380 - val_accuracy: 0.9358 - val_precision: 0.9365 - val_recall: 0.9340 - val_auc: 0.9823

InceptionV3_history  = InceptionV3_model.fit(train_ds, epochs=epochs, verbose=1, validation_data=val_ds)

Epoch 1/30
139/139 [==============================] - 21s 109ms/step - loss: 0.6351 - accuracy: 0.8291 - precision: 0.8708 - recall: 0.7968 - auc: 0.9752 - val_loss: 0.4674 - val_accuracy: 0.8553 - val_precision: 0.8660 - val_recall: 0.8418 - val_auc: 0.9839
Epoch 2/30
139/139 [==============================] - 13s 92ms/step - loss: 0.0451 - accuracy: 0.9876 - precision: 0.9907 - recall: 0.9835 - auc: 0.9999 - val_loss: 0.3189 - val_accuracy: 0.9105 - val_precision: 0.9201 - val_recall: 0.9060 - val_auc: 0.9904
......

Epoch 30/30
139/139 [==============================] - 13s 92ms/step - loss: 1.6310e-05 - accuracy: 1.0000 - precision: 1.0000 - recall: 1.0000 - auc: 1.0000 - val_loss: 0.3224 - val_accuracy: 0.9231 - val_precision: 0.9264 - val_recall: 0.9222 - val_auc: 0.9884

DenseNet121_history  = DenseNet121_model.fit(train_ds, epochs=epochs, verbose=1, validation_data=val_ds)

Epoch 1/30
139/139 [==============================] - 30s 158ms/step - loss: 0.9439 - accuracy: 0.8006 - precision: 0.8226 - recall: 0.7872 - auc: 0.9602 - val_loss: 0.4637 - val_accuracy: 0.8725 - val_precision: 0.8860 - val_recall: 0.8644 - val_auc: 0.9817
Epoch 2/30
139/139 [==============================] - 19s 135ms/step - loss: 0.0615 - accuracy: 0.9792 - precision: 0.9812 - recall: 0.9772 - auc: 0.9994 - val_loss: 0.3314 - val_accuracy: 0.9105 - val_precision: 0.9150 - val_recall: 0.9051 - val_auc: 0.9897
......
Epoch 30/30
139/139 [==============================] - 19s 135ms/step - loss: 3.9534e-05 - accuracy: 1.0000 - precision: 1.0000 - recall: 1.0000 - auc: 1.0000 - val_loss: 0.2790 - val_accuracy: 0.9304 - val_precision: 0.9318 - val_recall: 0.9259 - val_auc: 0.9911

LeNet5_history  = LeNet5_model.fit(train_ds, epochs=epochs, verbose=1, validation_data=val_ds)

Epoch 1/30
139/139 [==============================] - 3s 17ms/step - loss: 1.5704 - accuracy: 0.5485 - precision: 0.8639 - recall: 0.3410 - auc: 0.8849 - val_loss: 1.2588 - val_accuracy: 0.5696 - val_precision: 0.7702 - val_recall: 0.3879 - val_auc: 0.9004
Epoch 2/30
139/139 [==============================] - 2s 15ms/step - loss: 1.1065 - accuracy: 0.6350 - precision: 0.8069 - recall: 0.4495 - auc: 0.9224 - val_loss: 1.0188 - val_accuracy: 0.6754 - val_precision: 0.7986 - val_recall: 0.5054 - val_auc: 0.9342
......

Epoch 30/30
139/139 [==============================] - 2s 15ms/step - loss: 0.3052 - accuracy: 0.9306 - precision: 0.9611 - recall: 0.8884 - auc: 0.9943 - val_loss: 0.7361 - val_accuracy: 0.7649 - val_precision: 0.8367 - val_recall: 0.7134 - val_auc: 0.9646

MobileNetV2_history  = MobileNetV2_model.fit(train_ds, epochs=epochs, verbose=1, validation_data=val_ds)

Epoch 1/30
139/139 [==============================] - 19s 117ms/step - loss: 1.3687 - accuracy: 0.7252 - precision: 0.7581 - recall: 0.7064 - auc: 0.9281 - val_loss: 1.4920 - val_accuracy: 0.7043 - val_precision: 0.7203 - val_recall: 0.6917 - val_auc: 0.9184
Epoch 2/30
139/139 [==============================] - 15s 109ms/step - loss: 0.0676 - accuracy: 0.9792 - precision: 0.9798 - recall: 0.9765 - auc: 0.9992 - val_loss: 1.4331 - val_accuracy: 0.7260 - val_precision: 0.7380 - val_recall: 0.7206 - val_auc: 0.9241
......
Epoch 30/30
139/139 [==============================] - 15s 108ms/step - loss: 5.0026e-05 - accuracy: 1.0000 - precision: 1.0000 - recall: 1.0000 - auc: 1.0000 - val_loss: 0.4606 - val_accuracy: 0.8933 - val_precision: 0.8955 - val_recall: 0.8915 - val_auc: 0.9807

EfficientNetB0_history = EfficientNetB0_model.fit(train_ds, epochs=epochs, verbose=1, validation_data=val_ds)

Epoch 1/30
139/139 [==============================] - 28s 162ms/step - loss: 1.6410 - accuracy: 0.6706 - precision: 0.7080 - recall: 0.6478 - auc: 0.9135 - val_loss: 9.5856 - val_accuracy: 0.0696 - val_precision: 0.0696 - val_recall: 0.0696 - val_auc: 0.4577
Epoch 2/30
139/139 [==============================] - 21s 151ms/step - loss: 0.3428 - accuracy: 0.8856 - precision: 0.9044 - recall: 0.8723 - auc: 0.9895 - val_loss: 3.5993 - val_accuracy: 0.0832 - val_precision: 0.0911 - val_recall: 0.0642 - val_auc: 0.5274
......

Epoch 30/30
139/139 [==============================] - 21s 151ms/step - loss: 0.0067 - accuracy: 0.9989 - precision: 0.9991 - recall: 0.9984 - auc: 1.0000 - val_loss: 0.9306 - val_accuracy: 0.7920 - val_precision: 0.8105 - val_recall: 0.7848 - val_auc: 0.9535


## 🍿 Results analysis

### 🦪 Accuracy comparison

plot_model(["VGG16","InceptionV3","DenseNet121","LeNet5","MobileNetV2"],
[vgg16_history,InceptionV3_history,DenseNet121_history,LeNet5_history,MobileNetV2_history],
["accuracy","val_accuracy"],
marker = ".")


### 🍣 Loss comparison

plot_model(["VGG16","resnet50","InceptionV3","DenseNet121","LeNet5","MobileNetV2"],
[vgg16_history,resnet50_history,InceptionV3_history,DenseNet121_history,LeNet5_history,MobileNetV2_history],
["loss","val_loss"],
marker = "1")


### 🍤 Recall comparison

plot_model(["VGG16","resnet50","InceptionV3","DenseNet121","LeNet5","MobileNetV2"],
[vgg16_history,resnet50_history,InceptionV3_history,DenseNet121_history,LeNet5_history,MobileNetV2_history],
["recall","val_recall"],
marker = "")


### 🍘 Confusion matrix

val_pre   = []
val_label = []

for images, labels in val_ds:
    for image, label in zip(images, labels):
        img_array = tf.expand_dims(image, 0)
        prediction = DenseNet121_model.predict(img_array)

        val_pre.append(np.argmax(prediction))
        val_label.append(np.argmax(label))

# plot_cm is a confusion-matrix helper defined in the full source code
plot_cm(val_label, val_pre)


### 🍙 Generating the evaluation report

• support: the number of samples of that class in the test data;
• precision: of the samples predicted as a given class, the fraction that actually belong to it; precision = TP / (TP + FP).
• recall: of all samples that actually belong to a class, the fraction correctly classified; recall = TP / (TP + FN).
• f1-score: the harmonic mean of precision and recall, F1 = 2 × precision × recall / (precision + recall).
• accuracy: the ratio of correctly predicted samples to the total number of samples.
• macro avg: the unweighted mean of each metric over all classes.
• weighted avg: the mean of each metric weighted by each class's share of the total samples.
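To make the difference between macro avg and weighted avg concrete, here is a small hand-worked two-class example (toy numbers, not from the weather dataset):

```python
# Toy report: class A has 90 samples with recall 1.0,
# class B has only 10 samples with recall 0.5
supports = [90, 10]
recalls  = [1.0, 0.5]

macro    = sum(recalls) / len(recalls)                      # every class counts equally
weighted = sum(r * s for r, s in zip(recalls, supports)) / sum(supports)

assert macro == 0.75      # pulled down by the rare class
assert weighted == 0.95   # dominated by the frequent class
```

A large gap between the macro avg and weighted avg rows in the report is therefore a quick indicator that rare classes are performing poorly.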
# Import the function directly to avoid shadowing the Keras `metrics` list above
from sklearn.metrics import classification_report

def test_accuracy_report(model):
    print(classification_report(val_label, val_pre, target_names=class_names))
    score = model.evaluate(val_ds, verbose=0)
    print('Loss: %s, Accuracy: %s' % (score[0], score[1]))

test_accuracy_report(vgg16_model)

              precision    recall  f1-score   support

         dew       0.97      0.96      0.97       135
     fogsmog       0.94      0.98      0.96       182
       frost       0.88      0.87      0.87        97
        hail       0.94      0.96      0.95       125
   lightning       0.99      1.00      0.99        77
        rain       0.95      0.89      0.92       107
     rainbow       1.00      0.98      0.99        50
        rime       0.90      0.94      0.92       227
        snow       0.86      0.77      0.82       106

    accuracy                           0.93      1106
   macro avg       0.94      0.93      0.93      1106
weighted avg       0.93      0.93      0.93      1106

Loss: 0.3783927857875824, Accuracy: 0.9394213557243347


### 🍥 Loss/Accuracy/Precision/Recall comparison

colors = plt.rcParams['axes.prop_cycle'].by_key()['color']

plt.figure(figsize=(12, 8))

def plot_metrics(history):
    metrics = ['loss', 'accuracy', 'precision', 'recall']
    for n, metric in enumerate(metrics):
        name = metric.replace("_", " ").capitalize()
        plt.subplot(2, 2, n + 1)
        plt.plot(history.epoch, history.history[metric], color=colors[1], label='Train')
        plt.plot(history.epoch, history.history['val_' + metric], color=colors[2],
                 linestyle="--", label='Val')

        plt.xlabel('Epoch', fontsize=12)
        plt.ylabel(name, fontsize=12)
        plt.legend()

plot_metrics(MobileNetV2_history)


### 🥮 AUC

# Import the submodule functions explicitly; a bare `import sklearn` does not
# make sklearn.metrics available
from sklearn.metrics import roc_curve, auc
from tensorflow.keras.utils import to_categorical

def plot_roc(labels, predictions):
    fpr = dict()
    tpr = dict()
    roc_auc = dict()

    for i, item in enumerate(class_names):
        fpr[i], tpr[i], _ = roc_curve(labels[:, i], predictions[:, i])
        roc_auc[i] = auc(fpr[i], tpr[i])

    plt.subplots(figsize=(7, 7))

    for i, item in enumerate(class_names):
        plt.plot(100 * fpr[i], 100 * tpr[i],
                 label=f'ROC curve {item} (area = {roc_auc[i]:.2f})',
                 linewidth=2, linestyle="--")

    plt.xlabel('False positives [%]', fontsize=12)
    plt.ylabel('True positives [%]', fontsize=12)
    plt.legend(loc="lower right")
    plt.grid(True)

plot_roc(to_categorical(val_label), to_categorical(val_pre))


## 🍨 Predicting a specified image

from PIL import Image

img = Image.open("./2-DataSet/fogsmog/4083.jpg")
# Convert the PIL image to an array before handing it to TensorFlow
image = tf.image.resize(np.array(img), [img_height, img_width]) / 255.0

img_array = tf.expand_dims(image, 0)

predictions = vgg16_model.predict(img_array)
print("Predicted class:", class_names[np.argmax(predictions)])

Predicted class: fogsmog


Original: https://blog.csdn.net/qq_38251616/article/details/124765101
Author: K同学啊
Title: 基于深度学习的天气识别算法对比研究-TensorFlow实现-卷积神经网络（CNN） | 第1例（内附源码+数据）

