Explained: DropPath Regularization

DropPath is similar to Dropout. The difference is that DropPath randomly "deactivates" entire branches of a multi-branch structure in a deep-learning model, whereas Dropout randomly "deactivates" individual neurons.

1. Applying DropPath in a network

Suppose the forward pass contains the following code:

x = x + self.drop_path(self.conv(x))

Then, in the drop_path branch, each sample in the batch independently has probability drop_prob that its self.conv(x) output is dropped, i.e., replaced by zeros.

If x is an input tensor of shape [B, C, H, W], then drop_path means: within one batch, a randomly chosen drop_prob fraction of the samples do not go through the main branch and are passed along purely by the identity shortcut.

⚠️ Note: DropPath cannot be used on its own like this:

x = self.drop_path(x)

Without the identity shortcut, a dropped sample's entire activation would be zeroed out and all of its information destroyed.
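For context, here is a minimal sketch of a residual block that uses DropPath correctly (the conv layers and their sizes are illustrative assumptions, not from the original post; DropPath is the module implemented in section 2 below):

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels, drop_prob=0.2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.drop_path = DropPath(drop_prob)

    def forward(self, x):
        # The identity shortcut is always kept; only the conv branch
        # is randomly dropped per sample.
        return x + self.drop_path(self.conv(x))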

2. DropPath implementation

import torch
import torch.nn as nn


def drop_path(x, drop_prob: float = 0., training: bool = False, scale_by_keep: bool = True):
    """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).

    This is the same as the DropConnect impl I created for EfficientNet, etc networks, however,
    the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...

    See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for
    changing the layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use
    'survival rate' as the argument.
    """
    if drop_prob == 0. or not training:
        return x
    keep_prob = 1 - drop_prob
    # One Bernoulli draw per sample: shape (B, 1, ..., 1) broadcasts over all
    # non-batch dimensions, so each sample is kept or zeroed as a whole.
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)
    random_tensor = x.new_empty(shape).bernoulli_(keep_prob)
    # Rescale the kept samples by 1/keep_prob so the expected output equals the input.
    if keep_prob > 0.0 and scale_by_keep:
        random_tensor.div_(keep_prob)
    return x * random_tensor

class DropPath(nn.Module):
    """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks)."""
    def __init__(self, drop_prob=0., scale_by_keep=True):
        super(DropPath, self).__init__()
        self.drop_prob = drop_prob
        self.scale_by_keep = scale_by_keep

    def forward(self, x):
        return drop_path(x, self.drop_prob, self.training, self.scale_by_keep)
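As a quick sanity check, the following sketch (the batch size of 4 and drop_prob=0.5 are arbitrary illustrative values) shows the per-sample behavior: in training mode each sample is independently either zeroed everywhere or scaled by 1/keep_prob = 2, and in eval mode the module is an identity:

dp = DropPath(drop_prob=0.5)
dp.train()                    # drop_path is only active in training mode
x = torch.ones(4, 3, 2, 2)    # a batch of 4 samples
y = dp(x)
print(y[:, 0, 0, 0])          # one value per sample: each is 0.0 or 2.0
dp.eval()
assert torch.equal(dp(x), x)  # identity at inference time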

torch.bernoulli(input, *, generator=None, out=None)

>>> a = torch.empty(3, 3).uniform_(0, 1)
>>> a
tensor([[ 0.1737,  0.0950,  0.3609],
        [ 0.7148,  0.0289,  0.2676],
        [ 0.9456,  0.8937,  0.7202]])
>>> torch.bernoulli(a)
tensor([[ 1.,  0.,  0.],
        [ 0.,  0.,  0.],
        [ 1.,  1.,  1.]])

Draws binary random numbers (0 or 1) from a Bernoulli distribution. The input tensor holds the probabilities used for drawing the binary random numbers, so every value in input must lie in the range $0 \le \mathrm{input}_i \le 1$:

$\mathrm{out}_i \sim \mathrm{Bernoulli}(p = \mathrm{input}_i)$

The returned out tensor only has values 0 or 1 and is of the same shape as input.

out can have integral dtype, but input must have floating point dtype
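This is exactly how drop_path builds its per-sample mask; a short sketch (the shapes and keep_prob value below are illustrative):

x = torch.randn(4, 3, 8, 8)
keep_prob = 0.9
shape = (x.shape[0],) + (1,) * (x.ndim - 1)      # (4, 1, 1, 1)
mask = x.new_empty(shape).bernoulli_(keep_prob)  # one 0/1 draw per sample
# mask broadcasts against x, so each sample is kept or zeroed as a whole.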

Fills the tensor in place with values drawn from the uniform distribution $U(a, b)$.

torch.nn.init.uniform_(tensor, a=0.0, b=1.0)

>>> a = torch.empty(3, 3)
>>> a
tensor([[0.0000e+00, 1.5846e+29, 0.0000e+00],
        [1.5846e+29, 9.8091e-45, 0.0000e+00],
        [0.0000e+00, 0.0000e+00, 0.0000e+00]])

>>> a.uniform_(0,1)
tensor([[0.0876, 0.5072, 0.4613],
        [0.7696, 0.4485, 0.1128],
        [0.2512, 0.8060, 0.6595]])

>>> a.bernoulli_()
tensor([[1., 1., 1.],
        [1., 0., 0.],
        [1., 0., 0.]])
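Note that the in-place Tensor.bernoulli_() called without an argument fills the tensor with samples from Bernoulli(p = 0.5) regardless of its current contents; drop_path therefore passes the keep probability explicitly, as bernoulli_(keep_prob).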

Divides each element of input by the corresponding element of other.

torch.div(input, other, *, rounding_mode=None, out=None)

>>> x = torch.tensor([ 0.3810,  1.2774, -0.2972, -0.3719,  0.4637])
>>> torch.div(x, 0.5)
tensor([ 0.7620,  2.5548, -0.5944, -0.7438,  0.9274])

>>> a = torch.tensor([[-0.3711, -1.9353, -0.4605, -0.2917],
...                   [ 0.1815, -1.0111,  0.9805, -1.5923],
...                   [ 0.1062,  1.4581,  0.7759, -1.2344],
...                   [-0.1830, -0.0313,  1.1908, -1.4757]])
>>> b = torch.tensor([ 0.8032,  0.2930, -0.8113, -0.2308])
>>> torch.div(a, b)
tensor([[-0.4620, -6.6051,  0.5676,  1.2639],
        [ 0.2260, -3.4509, -1.2086,  6.8990],
        [ 0.1322,  4.9764, -0.9564,  5.3484],
        [-0.2278, -0.1068, -1.4678,  6.3938]])

$\mathrm{out}_i = \frac{\mathrm{input}_i}{\mathrm{other}_i}$
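In drop_path, this division implements the scale_by_keep rescaling: dividing the kept samples by keep_prob makes the output an unbiased estimate of the input, since

$\mathbb{E}[\mathrm{out}_i] = \mathrm{keep\_prob} \cdot \frac{x_i}{\mathrm{keep\_prob}} + (1 - \mathrm{keep\_prob}) \cdot 0 = x_i,$

so the expected activation magnitude is the same during training and inference.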

3. References

Original: https://blog.csdn.net/ViatorSun/article/details/122947859
Author: ViatorSun
Title: 「解析」正则化 DropPath
