masked_fill_(mask, value)
Fills elements of the self tensor with value where mask is True. The shape of mask must be broadcastable with the shape of the underlying tensor.
Parameters
- mask (BoolTensor) – the boolean mask
- value (float) – the value to fill in with
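Before the walkthrough below, here is a minimal sketch of the basic usage with a torch.bool mask (the BoolTensor the docs ask for); the tensor values here are made up for illustration:

```python
import torch

# a 2x3 float tensor; mask out the first column with a boolean mask
a = torch.zeros(2, 3)
mask = torch.tensor([[True, False, False],
                     [True, False, False]])

# masked_fill returns a new tensor; masked_fill_ modifies a in place
b = a.masked_fill(mask, -1.0)
print(b)
# tensor([[-1.,  0.,  0.],
#         [-1.,  0.,  0.]])
```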
I didn't fully understand the official docs' requirement that the mask be broadcastable, so I'm writing down my notes here in case I forget later.
>>> import torch
>>> a=torch.tensor([[[5,5,5,5], [6,6,6,6], [7,7,7,7]], [[1,1,1,1],[2,2,2,2],[3,3,3,3]]])
>>> print(a)
tensor([[[5, 5, 5, 5],
         [6, 6, 6, 6],
         [7, 7, 7, 7]],

        [[1, 1, 1, 1],
         [2, 2, 2, 2],
         [3, 3, 3, 3]]])
>>> print(a.size())
torch.Size([2, 3, 4])
>>> mask = torch.tensor([[[1],[1],[0]],[[0],[1],[1]]], dtype=torch.bool)
>>> print(mask.size())
torch.Size([2, 3, 1])
>>> b = a.masked_fill(mask, value=torch.tensor(-1e9))
(Using a torch.bool mask avoids the deprecation warning that a uint8 ByteTensor mask triggers.)
>>> print(b)
tensor([[[-1000000000, -1000000000, -1000000000, -1000000000],
         [-1000000000, -1000000000, -1000000000, -1000000000],
         [          7,           7,           7,           7]],

        [[          1,           1,           1,           1],
         [-1000000000, -1000000000, -1000000000, -1000000000],
         [-1000000000, -1000000000, -1000000000, -1000000000]]])
We can see that a's shape is (2, 3, 4) and mask's is (2, 3, 1). The first 1 in mask causes the entire first row of a to be masked. So if I change mask's shape to (2, 3, 4), can I mask specific positions instead?
>>> mask = torch.tensor([[[1,1,0,0],[1,0,0,0],[0,0,0,0]],[[0,0,0,0],[1,1,1,1],[1,1,1,1]]], dtype=torch.bool)
>>> b = a.masked_fill(mask, value=torch.tensor(-1e9))
>>> b
tensor([[[-1000000000, -1000000000,           5,           5],
         [-1000000000,           6,           6,           6],
         [          7,           7,           7,           7]],

        [[          1,           1,           1,           1],
         [-1000000000, -1000000000, -1000000000, -1000000000],
         [-1000000000, -1000000000, -1000000000, -1000000000]]])
Indeed it can: when the shapes match, each True position is masked individually.
Now, if I change mask's shape to (1, 3, 4) while a stays (2, 3, 4), will the same mask be applied to both slices along a's outermost dimension?
>>> mask = torch.tensor([[[1,1,0,0],[1,0,0,0],[0,0,0,0]]], dtype=torch.bool)
>>> b = a.masked_fill(mask, value=torch.tensor(-1e9))
>>> b
tensor([[[-1000000000, -1000000000,           5,           5],
         [-1000000000,           6,           6,           6],
         [          7,           7,           7,           7]],

        [[-1000000000, -1000000000,           1,           1],
         [-1000000000,           2,           2,           2],
         [          3,           3,           3,           3]]])
>>> a.shape()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'torch.Size' object is not callable
>>> a.shape
torch.Size([2, 3, 4])
>>> mask.shape
torch.Size([1, 3, 4])
Indeed, the same mask is applied across the outermost dimension.
Let's change it once more: with a mask of shape (1, 1, 4), will every row of a be masked the same way?
>>> mask = torch.tensor([[[1,1,0,0]]], dtype=torch.bool)
>>> b = a.masked_fill(mask, value=torch.tensor(-1e9))
>>> b
tensor([[[-1000000000, -1000000000,           5,           5],
         [-1000000000, -1000000000,           6,           6],
         [-1000000000, -1000000000,           7,           7]],

        [[-1000000000, -1000000000,           1,           1],
         [-1000000000, -1000000000,           2,           2],
         [-1000000000, -1000000000,           3,           3]]])
It is. So whenever a dimension of mask has size 1, the same mask is applied across that entire dimension of the target tensor.
My explanation may not be perfectly clear; the examples above should convey the usage.
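The size-1 broadcasting behavior observed above can be checked directly: filling with a broadcastable mask should give the same result as filling with the mask explicitly expanded to the tensor's full shape. A small sketch (not from the original post):

```python
import torch

a = torch.arange(24).reshape(2, 3, 4)
# size-1 dims broadcast: shape (1, 1, 4) expands across dims 0 and 1
mask = torch.tensor([[[True, True, False, False]]])

b1 = a.masked_fill(mask, -1)
b2 = a.masked_fill(mask.expand_as(a), -1)  # explicit expansion
assert torch.equal(b1, b2)                 # broadcasting == expansion
print(b1[0, 0])
# tensor([-1, -1,  2,  3])
```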
Original: https://blog.csdn.net/weixin_41684423/article/details/117339499
Author: 宋老板的笔记
Title: Pytorch masked_fill 函数理解应用 (Understanding and applying PyTorch's masked_fill)