# 【电子羊的奇妙冒险】初试深度学习（2）

This installment is a bit of a grab bag: some basic theory plus hands-on code.

## Convolutional Neural Networks

### Defining Convolution

Many machine learning libraries call this operation "convolution", but what a neural network actually computes is not convolution in the strict mathematical sense: the kernel is slid over the input without being flipped (i.e., it is cross-correlation).

A single convolution kernel can extract only one type of feature.

A feature map is another name for the output of a convolutional layer. It consists of multiple channels, and each channel represents one kind of feature extracted by the convolution.
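As a concrete sketch: a minimal NumPy implementation (the `conv2d` helper and the edge kernel are illustrative, not from the original post) showing that deep-learning "convolution" slides an unflipped kernel over the input, and that a single kernel responds to a single kind of feature:

```python
import numpy as np

def conv2d(image, kernel):
    """2-D 'convolution' as used in deep learning (cross-correlation, no kernel flip)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Elementwise product of the kernel with the window under it.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 6x6 image with a vertical edge down the middle.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# One kernel detects exactly one feature: here, vertical edges.
vertical_edge_kernel = np.array([[1., 0., -1.],
                                 [1., 0., -1.],
                                 [1., 0., -1.]])

feature = conv2d(image, vertical_edge_kernel)
print(feature)  # strong response only in the columns straddling the edge
```

Each output channel of a real convolutional layer is produced this way by its own kernel, which is why a feature map has one channel per learned feature.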

## Deep Learning in Practice: MNIST Handwritten Digit Recognition

GitHub repository:

https://github.com/pytorch/examples/tree/main/mnist

```python
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.optim.lr_scheduler import StepLR


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, 1)
        self.conv2 = nn.Conv2d(32, 64, 3, 1)
        self.dropout1 = nn.Dropout(0.25)
        self.dropout2 = nn.Dropout(0.5)
        self.fc1 = nn.Linear(9216, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = F.max_pool2d(x, 2)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = F.log_softmax(x, dim=1)
        return output


def train(args, model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))
            if args.dry_run:
                break


def test(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.nll_loss(output, target, reduction='sum').item()
            pred = output.argmax(dim=1, keepdim=True)
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)

    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))


def main():
    # Training settings
    parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
    parser.add_argument('--batch-size', type=int, default=64, metavar='N',
                        help='input batch size for training (default: 64)')
    parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
                        help='input batch size for testing (default: 1000)')
    parser.add_argument('--epochs', type=int, default=14, metavar='N',
                        help='number of epochs to train (default: 14)')
    parser.add_argument('--lr', type=float, default=1.0, metavar='LR',
                        help='learning rate (default: 1.0)')
    parser.add_argument('--gamma', type=float, default=0.7, metavar='M',
                        help='Learning rate step gamma (default: 0.7)')
    parser.add_argument('--no-cuda', action='store_true', default=False,
                        help='disables CUDA training')
    parser.add_argument('--dry-run', action='store_true', default=False,
                        help='quickly check a single pass')
    parser.add_argument('--seed', type=int, default=1, metavar='S',
                        help='random seed (default: 1)')
    parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                        help='how many batches to wait before logging training status')
    parser.add_argument('--save-model', action='store_true', default=False,
                        help='For Saving the current Model')
    args = parser.parse_args()
    use_cuda = not args.no_cuda and torch.cuda.is_available()

    torch.manual_seed(args.seed)

    device = torch.device("cuda" if use_cuda else "cpu")

    train_kwargs = {'batch_size': args.batch_size}
    test_kwargs = {'batch_size': args.test_batch_size}
    if use_cuda:
        cuda_kwargs = {'num_workers': 1,
                       'pin_memory': True,
                       'shuffle': True}
        train_kwargs.update(cuda_kwargs)
        test_kwargs.update(cuda_kwargs)

    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])
    dataset1 = datasets.MNIST('../data', train=True, download=True,
                              transform=transform)
    dataset2 = datasets.MNIST('../data', train=False,
                              transform=transform)
    train_loader = torch.utils.data.DataLoader(dataset1, **train_kwargs)
    test_loader = torch.utils.data.DataLoader(dataset2, **test_kwargs)

    model = Net().to(device)
    optimizer = optim.Adadelta(model.parameters(), lr=args.lr)

    scheduler = StepLR(optimizer, step_size=1, gamma=args.gamma)
    for epoch in range(1, args.epochs + 1):
        train(args, model, device, train_loader, optimizer, epoch)
        test(model, device, test_loader)
        scheduler.step()

    if args.save_model:
        torch.save(model.state_dict(), "mnist_cnn.pt")


if __name__ == '__main__':
    main()
```
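A side note on the loss: the network ends with `log_softmax` and trains with `nll_loss`, which together compute exactly the cross-entropy of the raw logits. A small NumPy sketch of that equivalence (illustrative, not part of the example):

```python
import numpy as np

def softmax(logits):
    # Shift by the row max for numerical stability before exponentiating.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def log_softmax(logits):
    shifted = logits - logits.max(axis=1, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))

def nll_loss(log_probs, targets):
    # Mean negative log-probability assigned to the correct class.
    return -log_probs[np.arange(len(targets)), targets].mean()

logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 1.2, 0.3]])
targets = np.array([0, 2])

# Cross-entropy computed directly from softmax probabilities...
ce_direct = -np.log(softmax(logits)[np.arange(len(targets)), targets]).mean()
# ...equals nll_loss applied to log_softmax outputs, which is what the model does.
ce_via_logsoftmax = nll_loss(log_softmax(logits), targets)

print(np.isclose(ce_direct, ce_via_logsoftmax))  # True
```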


Basic MNIST Example. To run it:

```shell
pip install -r requirements.txt
python main.py
```


## Deep Learning Fundamentals

### GPU Acceleration with CUDA

• CPU: excels at flow control and logic-heavy work: irregular data structures, unpredictable memory access patterns, single-threaded programs, and branch-intensive algorithms.
• GPU: excels at data-parallel computation over regular data structures with predictable memory access patterns.

Reference: https://blog.csdn.net/sru_alo/article/details/93539633


```cpp
#include <iostream>
#include <cuda_runtime.h>

int main()
{
    int deviceCount;
    cudaGetDeviceCount(&deviceCount);
    for (int i = 0; i < deviceCount; i++)
    {
        cudaDeviceProp devProp;  // declaration was missing in the original listing
        cudaGetDeviceProperties(&devProp, i);
        std::cout << "GPU device " << i << ": " << devProp.name << std::endl;
        std::cout << "Total global memory: " << devProp.totalGlobalMem / 1024 / 1024 << " MB" << std::endl;
        std::cout << "Number of SMs: " << devProp.multiProcessorCount << std::endl;
        std::cout << "Shared memory per block: " << devProp.sharedMemPerBlock / 1024.0 << " KB" << std::endl;
        std::cout << "Max threads per block: " << devProp.maxThreadsPerBlock << std::endl;
        std::cout << "32-bit registers per block: " << devProp.regsPerBlock << std::endl;
        std::cout << "Max threads per SM: " << devProp.maxThreadsPerMultiProcessor << std::endl;
        std::cout << "Max warps per SM: " << devProp.maxThreadsPerMultiProcessor / 32 << std::endl;
        std::cout << "Number of multiprocessors: " << devProp.multiProcessorCount << std::endl;
        std::cout << "======================================================" << std::endl;
    }
    return 0;
}
```

Compile with:

```shell
nvcc test1.cu -o test1
```


## Approximating Roots of High-Degree Polynomials with a Neural Network

https://taylanbil.github.io/polysolver

A preliminary look at how the number of training epochs affects the fit:

### Higher Degrees (4–16)

After more training epochs, the kernel density plot changes noticeably. We keep the model obtained after ten epochs of training and, holding other variables fixed, vary the test data.

### Hyperparameter Tuning

Test slices tried: `reshape(X[100:150])`, `reshape(X[100:110])`, `reshape(X[100:102])`.
```python
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# inputs, target = inputs.to(device), target.to(device)

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

MIN_ROOT = -1
MAX_ROOT = 1

def make(n_samples, n_degree):
    global MIN_ROOT, MAX_ROOT
    y = np.random.uniform(MIN_ROOT, MAX_ROOT, (n_samples, n_degree))
    y.sort(axis=1)
    X = np.array([np.poly(_) for _ in y])
    return X, y

# toy case
X, y = make(1, 2)

N_SAMPLES = 100000
DEGREE = 5
X_train, y_train = make(int(N_SAMPLES*0.8), DEGREE)
X_test, y_test = make(int(N_SAMPLES*0.2), DEGREE)

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

def reshape(array):
    return np.expand_dims(array, -1)

from keras.models import Sequential
from keras.layers import LSTM, RepeatVector, Dense, TimeDistributed

hidden_size = 128

model = Sequential()

# ENCODER PART OF SEQ2SEQ
# (these layers were dropped from the original listing; reconstructed from context)
model.add(LSTM(hidden_size, input_shape=(DEGREE + 1, 1)))

# DECODER PART OF SEQ2SEQ
model.add(RepeatVector(DEGREE))  # this determines the length of the output sequence
model.add(LSTM(hidden_size, return_sequences=True))
model.add(TimeDistributed(Dense(1)))

model.compile(loss='mean_absolute_error',
              optimizer='adam',  # optimizer was missing from the original listing
              metrics=['mae'])
# model.to(device)
# print(model.summary())
'''
BATCH_SIZE = 128
model.fit(reshape(X_train),
          reshape(y_train),
          batch_size=BATCH_SIZE,
          epochs=3,
          verbose=1,
          validation_data=(reshape(X_test),
                           reshape(y_test)))

y_pred = model.predict(reshape(X_test))
y_pred = np.squeeze(y_pred)

# %matplotlib inline

def get_evals(polynomials, roots):
    evals = [
        [np.polyval(poly, r) for r in root_row]
        for (root_row, poly) in zip(roots, polynomials)
    ]
    evals = np.array(evals).ravel()
    return evals

def compare_to_random(y_pred, y_test, polynomials):
    y_random = np.random.uniform(MIN_ROOT, MAX_ROOT, y_test.shape)
    y_random.sort(axis=1)

    fig, axes = plt.subplots(1, 2, figsize=(12, 6))
    ax = axes[0]
    ax.hist(np.abs((y_random - y_test).ravel()),
            alpha=.4, label='random guessing')
    ax.hist(np.abs((y_pred - y_test).ravel()),
            color='r', alpha=.4, label='model predictions')
    ax.set(title='Histogram of absolute errors',
           ylabel='count', xlabel='absolute error')
    ax.legend(loc='best')

    ax = axes[1]
    random_evals = get_evals(polynomials, y_random)
    predicted_evals = get_evals(polynomials, y_pred)
    pd.Series(random_evals).plot.kde(ax=ax, label='random guessing kde')
    pd.Series(predicted_evals).plot.kde(ax=ax, color='r', label='model prediction kde')
    title = 'Kernel Density Estimate plot\n' \
            'for polynomial evaluation of (predicted) roots'
    ax.set(xlim=[-.5, .5], title=title)
    ax.legend(loc='best')

    fig.tight_layout()

compare_to_random(y_pred, y_test, X_test)
plt.show()

'''
```

```python
MAX_DEGREE = 15
MIN_DEGREE = 5
MAX_ROOT = 1
MIN_ROOT = -1
N_SAMPLES = 10000 * (MAX_DEGREE - MIN_DEGREE + 1)

def make(n_samples, max_degree, min_degree, min_root, max_root):
    samples_per_degree = n_samples
    n_samples = samples_per_degree * (max_degree - min_degree + 1)
    X = np.zeros((n_samples, max_degree + 1))
    # XXX: filling the truth labels with ZERO??? EOS character would be nice
    y = np.zeros((n_samples, max_degree, 2))
    for i, degree in enumerate(range(min_degree, max_degree + 1)):
        y_tmp = np.random.uniform(min_root, max_root, (samples_per_degree, degree))
        y_tmp.sort(axis=1)
        X_tmp = np.array([np.poly(_) for _ in y_tmp])

        root_slice_y = np.s_[
            i * samples_per_degree:(i + 1) * samples_per_degree,
            :degree,
            0]
        # Second slice marks the padding positions in channel 1.
        # (This slice was garbled in the original listing; reconstructed from
        # the later code, which treats channel 1 == 0 as "real root".)
        pad_slice_y = np.s_[
            i * samples_per_degree:(i + 1) * samples_per_degree,
            degree:,
            1]
        this_slice_X = np.s_[
            i * samples_per_degree:(i + 1) * samples_per_degree,
            -degree - 1:]

        y[root_slice_y] = y_tmp
        y[pad_slice_y] = 1
        X[this_slice_X] = X_tmp
    return X, y

def make_this():
    global MAX_DEGREE, MIN_DEGREE, MAX_ROOT, MIN_ROOT, N_SAMPLES
    return make(N_SAMPLES, MAX_DEGREE, MIN_DEGREE, MIN_ROOT, MAX_ROOT)

from sklearn.model_selection import train_test_split

X, y = make_this()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
'''
print('X shapes', X.shape, X_train.shape, X_test.shape)
print('y shapes', y.shape, y_train.shape, y_test.shape)
print('-' * 80)
print('This is an example root sequence')
print(y[0])
'''
hidden_size = 128
model = Sequential()
# Architecture reconstructed from context: the same seq2seq shape as above,
# but the decoder now emits two channels (root value, padding indicator).
model.add(LSTM(hidden_size, input_shape=(MAX_DEGREE + 1, 1)))
model.add(RepeatVector(MAX_DEGREE))
model.add(LSTM(hidden_size, return_sequences=True))
model.add(TimeDistributed(Dense(2)))

model.compile(loss='mean_absolute_error',
              optimizer='adam',  # optimizer was missing from the original listing
              metrics=['mae'])

# print(model.summary())

model.predict(reshape(X_test));  # in a notebook, the trailing semicolon suppresses the output

'''
BATCH_SIZE = 40
model.fit(reshape(X_train), y_train,
          batch_size=BATCH_SIZE,
          epochs=10,
          verbose=1,
          validation_data=(reshape(X_test), y_test))
'''
model.predict(reshape(X[100:102]))

y_pred = model.predict(reshape(X_test))
fig, ax = plt.subplots()
# Histogram of the padding-indicator channel.
# (The plotting call was garbled in the original listing; reconstructed.)
ax.hist(y_pred[:, :, 1].ravel())
ax.set(xlabel='predicted value',
       ylabel='count')

thr = 0.5

def how_many_roots(predicted):
    global thr
    return np.sum(predicted[:, 1] < thr)

true_root_count = np.array(list(map(how_many_roots, y_test)))
pred_root_count = np.array(list(map(how_many_roots, y_pred)))

from collections import Counter
for key, val in Counter(true_root_count - pred_root_count).items():
    print('off by {}: {} times'.format(key, val))

index = np.where(true_root_count == pred_root_count)[0]
index = np.random.choice(index, 1000, replace=False)

predicted_evals, random_evals = [], []
random_roots_list = []
predicted_roots_list = []
true_roots_list = []
for i in index:
    predicted_roots = [row[0] for row in y_pred[i] if row[1] < thr]
    true_roots = [row[0] for row in y_test[i] if row[1] == 0]
    random_roots = np.random.uniform(MIN_ROOT, MAX_ROOT, len(predicted_roots))
    random_roots = sorted(random_roots)
    random_roots_list.extend(random_roots)
    predicted_roots_list.extend(predicted_roots)
    true_roots_list.extend(true_roots)
    for predicted_root, random_root in zip(predicted_roots, random_roots):
        predicted_evals.append(
            np.polyval(X_test[i], predicted_root))
        random_evals.append(
            np.polyval(X_test[i], random_root))

assert len(true_roots_list) == len(predicted_roots_list)
assert len(random_roots_list) == len(predicted_roots_list)
true_roots_list = np.array(true_roots_list)
random_roots_list = np.array(random_roots_list)
predicted_roots_list = np.array(predicted_roots_list)

fig, axes = plt.subplots(1, 2, figsize=(12, 6))
ax = axes[0]
ax.hist(np.abs(random_roots_list - true_roots_list),
        alpha=.4, label='random guessing')
ax.hist(np.abs(predicted_roots_list - true_roots_list),
        color='r', alpha=.4, label='model predictions')
ax.set(title='Histogram of absolute errors',
       ylabel='count', xlabel='absolute error')
ax.legend(loc='best')

ax = axes[1]
pd.Series(random_evals).plot.kde(ax=ax, label='random guessing kde')
pd.Series(predicted_evals).plot.kde(ax=ax, color='r', label='model prediction kde')
title = 'Kernel Density Estimate plot\n' \
        'for polynomial evaluation of (predicted) roots'
ax.set(xlim=[-.5, .5], title=title)
ax.legend(loc='best')

fig.tight_layout()

plt.show()
```
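The data generation above relies on `np.poly`, which maps a set of roots to the coefficients of the monic polynomial having those roots. A quick self-contained sanity check (illustrative) that `np.roots` inverts this mapping:

```python
import numpy as np

# Pick sorted roots in [-1, 1], the same range the training data uses.
roots = np.array([-0.5, 0.1, 0.7])

# np.poly builds the coefficients of the monic polynomial with those roots.
coeffs = np.poly(roots)  # degree-3 polynomial, so 4 coefficients

# np.roots recovers the roots from the coefficients (up to ordering/precision).
recovered = np.sort(np.roots(coeffs).real)

print(np.allclose(recovered, roots))  # True
```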



## CNN in Practice



Download the dataset (Colab commands):

```shell
!gdown --id '1awF7pZ9Dz7X1jn1_QAiKN-_v56veCEKy' --output food-11.zip

# Dropbox
!wget https:

# MEGA
!sudo apt install megatools
```

Unzip the dataset. This may take some time.

```shell
!unzip -q food-11.zip
```

Import necessary packages.

import numpy as np
import torch
import torch.nn as nn
import torchvision.transforms as transforms
from PIL import Image
"ConcatDataset" and "Subset" are possibly useful when doing semi-supervised learning.

from torch.utils.data import ConcatDataset, DataLoader, Subset
from torchvision.datasets import DatasetFolder

This is for the progress bar.

from tqdm.auto import tqdm

Data augmentation matters in training, but not every augmentation is useful:

```python
train_tfm = transforms.Compose([
    # Resize the image into a fixed shape (height = width = 128).
    transforms.Resize((128, 128)),
    # You may add some transforms here.
    # ToTensor() should be the last one of the transforms.
    transforms.ToTensor(),
])

# We don't need augmentation in testing and validation:
# just resize the PIL image and transform it into a tensor.
test_tfm = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])

# Batch size for training, validation, and testing.
# A greater batch size usually gives a more stable gradient.
batch_size = 128

# Construct datasets.
train_set = DatasetFolder("food-11/training/labeled", loader=lambda x: Image.open(x), extensions="jpg", transform=train_tfm)
valid_set = DatasetFolder("food-11/validation", loader=lambda x: Image.open(x), extensions="jpg", transform=test_tfm)
unlabeled_set = DatasetFolder("food-11/training/unlabeled", loader=lambda x: Image.open(x), extensions="jpg", transform=train_tfm)
test_set = DatasetFolder("food-11/testing", loader=lambda x: Image.open(x), extensions="jpg", transform=test_tfm)

# Construct data loaders (restored; these lines were missing from the original post).
train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True, num_workers=8, pin_memory=True)
valid_loader = DataLoader(valid_set, batch_size=batch_size, shuffle=True, num_workers=8, pin_memory=True)
test_loader = DataLoader(test_set, batch_size=batch_size, shuffle=False)
```

```python
class Classifier(nn.Module):
    def __init__(self):
        super(Classifier, self).__init__()
        # The arguments for commonly used modules:
        # torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)

        # input image size: [3, 128, 128]
        self.cnn_layers = nn.Sequential(
            nn.Conv2d(3, 64, 3, 1, 1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(2, 2, 0),

            nn.Conv2d(64, 128, 3, 1, 1),
            nn.BatchNorm2d(128),
            nn.ReLU(),
            nn.MaxPool2d(2, 2, 0),

            nn.Conv2d(128, 256, 3, 1, 1),
            nn.BatchNorm2d(256),
            nn.ReLU(),
            nn.MaxPool2d(4, 4, 0),
        )
        self.fc_layers = nn.Sequential(
            nn.Linear(256 * 8 * 8, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, 11)
        )

    def forward(self, x):
        # input (x): [batch_size, 3, 128, 128]
        # output: [batch_size, 11]

        # Extract features by convolutional layers.
        x = self.cnn_layers(x)

        # The extracted feature map must be flattened before going to fully-connected layers.
        x = x.flatten(1)

        # The features are transformed by fully-connected layers to obtain the final logits.
        x = self.fc_layers(x)
        return x


def get_pseudo_labels(dataset, model, threshold=0.65):
    # This function generates pseudo-labels for a dataset using the given model.
    # It returns an instance of DatasetFolder containing images whose prediction
    # confidence exceeds the given threshold.
    # You are NOT allowed to use any models trained on external data for pseudo-labeling.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Construct a data loader (restored; this line was missing from the original post).
    data_loader = DataLoader(dataset, batch_size=batch_size, shuffle=False)

    # Make sure the model is in eval mode.
    model.eval()
    # Define the softmax function.
    softmax = nn.Softmax(dim=-1)

    # Iterate over the dataset by batches.
    for batch in tqdm(data_loader):
        img, _ = batch

        # Forward the data.
        # Using torch.no_grad() accelerates the forward process.
        with torch.no_grad():
            logits = model(img.to(device))

        # Obtain the probability distributions by applying softmax on logits.
        probs = softmax(logits)

        # ---------- TODO ----------
        # Filter the data and construct a new dataset.

    # Turn off the eval mode.
    model.train()
    return dataset
```
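The TODO above, filtering predictions by confidence, can be sketched with plain NumPy (the array values here are made up for illustration; in the real function `probs` is a torch tensor per batch):

```python
import numpy as np

# Pretend softmax outputs for 5 unlabeled images over 3 classes.
probs = np.array([
    [0.70, 0.20, 0.10],
    [0.40, 0.35, 0.25],
    [0.05, 0.90, 0.05],
    [0.33, 0.33, 0.34],
    [0.10, 0.10, 0.80],
])
threshold = 0.65

confidence = probs.max(axis=1)        # confidence = highest class probability
keep = confidence > threshold         # keep only confident predictions
pseudo_labels = probs.argmax(axis=1)  # pseudo-label = most likely class

kept_indices = np.where(keep)[0]
print(kept_indices)                 # [0 2 4]
print(pseudo_labels[kept_indices])  # [0 1 2]
```

In the real function one would collect the kept indices across batches and build a `Subset` (or a small wrapper dataset carrying the pseudo-labels) to return instead of the raw dataset.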

```python
# "cuda" only when GPUs are available.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Initialize a model, and put it on the device specified.
model = Classifier().to(device)
model.device = device

# For the classification task, we use cross-entropy as the measurement of performance.
criterion = nn.CrossEntropyLoss()

# Initialize the optimizer; you may fine-tune hyperparameters such as the learning rate.
# (This line was missing from the original post; Adam with these values follows the course template.)
optimizer = torch.optim.Adam(model.parameters(), lr=0.0003, weight_decay=1e-5)

# The number of training epochs.
n_epochs = 80

# Whether to do semi-supervised learning.
do_semi = False
```

```python
for epoch in range(n_epochs):
    # ---------- TODO ----------
    # In each epoch, relabel the unlabeled dataset for semi-supervised learning.
    # Then you can combine the labeled dataset and pseudo-labeled dataset for the training.
    if do_semi:
        # Obtain pseudo-labels for unlabeled data using the trained model.
        pseudo_set = get_pseudo_labels(unlabeled_set, model)

        # Construct a new dataset and a data loader for training.
        # This is used in semi-supervised learning only.
        concat_dataset = ConcatDataset([train_set, pseudo_set])
        train_loader = DataLoader(concat_dataset, batch_size=batch_size,
                                  shuffle=True, num_workers=8, pin_memory=True)

    # ---------- Training ----------
    # Make sure the model is in train mode before training.
    model.train()

    # These are used to record information in training.
    train_loss = []
    train_accs = []

    # Iterate the training set by batches.
    # (The per-batch loops in this cell were missing from the original post; restored.)
    for batch in tqdm(train_loader):
        # A batch consists of image data and corresponding labels.
        imgs, labels = batch

        # Forward the data. (Make sure data and model are on the same device.)
        logits = model(imgs.to(device))

        # Calculate the cross-entropy loss.
        # We don't need to apply softmax before computing cross-entropy as it is done automatically.
        loss = criterion(logits, labels.to(device))

        # Gradients stored in the parameters in the previous step should be cleared out first.
        optimizer.zero_grad()

        # Compute the gradients for parameters.
        loss.backward()

        # Clip the gradient norms for stable training.
        grad_norm = nn.utils.clip_grad_norm_(model.parameters(), max_norm=10)

        # Update the parameters with computed gradients.
        optimizer.step()

        # Compute the accuracy for the current batch.
        acc = (logits.argmax(dim=-1) == labels.to(device)).float().mean()

        # Record the loss and accuracy.
        train_loss.append(loss.item())
        train_accs.append(acc)

    # The average loss and accuracy of the training set is the average of the recorded values.
    train_loss = sum(train_loss) / len(train_loss)
    train_acc = sum(train_accs) / len(train_accs)

    # Print the information.
    print(f"[ Train | {epoch + 1:03d}/{n_epochs:03d} ] loss = {train_loss:.5f}, acc = {train_acc:.5f}")

    # ---------- Validation ----------
    # Make sure the model is in eval mode so that modules like dropout are disabled and work normally.
    model.eval()

    # These are used to record information in validation.
    valid_loss = []
    valid_accs = []

    # Iterate the validation set by batches.
    for batch in tqdm(valid_loader):
        # A batch consists of image data and corresponding labels.
        imgs, labels = batch

        # We don't need gradients in validation.
        # Using torch.no_grad() accelerates the forward process.
        with torch.no_grad():
            logits = model(imgs.to(device))

        # We can still compute the loss (but not the gradient).
        loss = criterion(logits, labels.to(device))

        # Compute the accuracy for the current batch.
        acc = (logits.argmax(dim=-1) == labels.to(device)).float().mean()

        # Record the loss and accuracy.
        valid_loss.append(loss.item())
        valid_accs.append(acc)

    # The average loss and accuracy for the entire validation set is the average of the recorded values.
    valid_loss = sum(valid_loss) / len(valid_loss)
    valid_acc = sum(valid_accs) / len(valid_accs)

    # Print the information.
    print(f"[ Valid | {epoch + 1:03d}/{n_epochs:03d} ] loss = {valid_loss:.5f}, acc = {valid_acc:.5f}")
```



## TensorFlow Basics

https://www.tensorflow.org/guide/keras/sequential_model?hl=zh-cn

### Keras

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
```


#### Sequential

The Sequential model is appropriate for a plain stack of layers in which each layer has exactly one input tensor and one output tensor.

```python
# Define a Sequential model with 3 layers.
model = keras.Sequential(
    [
        layers.Dense(2, activation="relu", name="layer1"),
        layers.Dense(3, activation="relu", name="layer2"),
        layers.Dense(4, name="layer3"),
    ]
)
# Call the model on a test input.
x = tf.ones((3, 3))
y = model(x)
```


A Sequential model is not appropriate when:

• the model has multiple inputs or outputs
• any layer has multiple inputs or outputs
• you need layer sharing
• you need a nonlinear topology (e.g. residual connections, a multi-branch model)
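For those cases the functional API is the usual tool. A minimal sketch (the layer names and sizes here are made up for illustration) with two inputs, a shared layer, and a merged branch:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Two inputs -- something a Sequential model cannot express.
left_in = keras.Input(shape=(16,), name="left")
right_in = keras.Input(shape=(16,), name="right")

# One layer applied to both branches (layer sharing: same weights twice).
shared = layers.Dense(8, activation="relu", name="shared")
x1 = shared(left_in)
x2 = shared(right_in)

# Merge the branches (nonlinear, multi-branch topology).
merged = layers.concatenate([x1, x2])
out = layers.Dense(4, name="output")(merged)

model = keras.Model(inputs=[left_in, right_in], outputs=out)

y = model([tf.ones((2, 16)), tf.ones((2, 16))])
print(y.shape)  # (2, 4)
```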

When building a Sequential model incrementally, it helps to print a summary as you go (this snippet follows the linked guide):

```python
model = keras.Sequential()
model.add(keras.Input(shape=(250, 250, 3)))  # 250x250 RGB images
model.add(layers.Conv2D(32, 5, strides=2, activation="relu"))
model.add(layers.Conv2D(32, 3, activation="relu"))
model.add(layers.MaxPooling2D(3))

# Can you guess what the current output shape is at this point? Probably not.
# Let's just print it:
model.summary()

# The answer was: (40, 40, 32), so we can keep downsampling...
model.add(layers.Conv2D(32, 3, activation="relu"))
model.add(layers.Conv2D(32, 3, activation="relu"))
model.add(layers.MaxPooling2D(3))
model.add(layers.Conv2D(32, 3, activation="relu"))
model.add(layers.Conv2D(32, 3, activation="relu"))
model.add(layers.MaxPooling2D(2))

# And now?
model.summary()

# Now that we have 4x4 feature maps, time to apply global max pooling.
model.add(layers.GlobalMaxPooling2D())

# Finally, we add a classification layer.
model.add(layers.Dense(10))
```



### Eager Execution

TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building a computational graph: operations return concrete values instead of constructing a graph to run later. This makes it easy to get started with TensorFlow and to debug models, and it reduces boilerplate. To follow along, run the code samples below in an interactive Python interpreter.

Eager execution is a flexible platform for research and experimentation, offering:

• An intuitive interface: structure your code naturally and use Python data structures; iterate quickly on small models and small data.
• Easier debugging: call ops directly to inspect a running model and test changes, with errors reported immediately via standard Python debugging tools.
• Natural control flow: use Python control flow instead of graph control flow, simplifying the specification of dynamic models.

Eager execution supports most TensorFlow operations and GPU acceleration.

```python
import os

import tensorflow as tf

import cProfile
```


#### Training in Eager Mode

```python
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:  # the tape context was missing from the original listing
    loss = w * w
grad = tape.gradient(loss, w)

print(grad)  # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
```


Fetch and format the MNIST data, then build the model:

```python
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()

dataset = tf.data.Dataset.from_tensor_slices(
    (tf.cast(mnist_images[..., tf.newaxis] / 255, tf.float32),
     tf.cast(mnist_labels, tf.int64)))
dataset = dataset.shuffle(1000).batch(32)

# Build the model
mnist_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, [3, 3], activation='relu',
                           input_shape=(None, None, 1)),
    tf.keras.layers.Conv2D(16, [3, 3], activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10)
])
```


Original: https://blog.csdn.net/Algernon98/article/details/123993037
Author: 仿生程序员会梦见电子羊吗
Title: 【电子羊的奇妙冒险】初试深度学习（2）
