*Memos:
- My post explains L1 Loss(MAE), L2 Loss(MSE), Huber Loss, BCE and Cross Entropy Loss.
- My post explains L1Loss() and MSELoss().
- My post explains HuberLoss().
- My post explains CrossEntropyLoss().
BCELoss() can get the 0D or more D tensor of the zero or more values (float) computed by BCE Loss from the 0D or more D tensor of zero or more elements as shown below:
*Memos:
- The 1st argument for initialization is weight (Optional-Default:None-Type:tensor of int, float or bool). *If not given, it's 1.
- There is a reduction argument for initialization (Optional-Default:'mean'-Type:str). *'none', 'mean' or 'sum' can be selected.
- There are size_average and reduce arguments for initialization but they are deprecated.
- The 1st argument is input (Required-Type:tensor of float). *It must be 0<=x<=1.
- The 2nd argument is target (Required-Type:tensor of float). *It must be 0<=y<=1.
- input and target must be the same size, otherwise there is an error (see the error check sketch after the example code).
- The empty 1D or more D input and target tensors with reduction='mean' return nan.
- The empty 1D or more D input and target tensors with reduction='sum' return 0..
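Before the nn.BCELoss() examples, here is a minimal sketch that recomputes BCE Loss by hand with -w*(y*log(x)+(1-y)*log(1-x)). manual_bce() is a hypothetical helper, not a PyTorch API. BCELoss clamps its log outputs to be greater than or equal to -100, which is why the element with x=0.0 contributes 80.0 instead of infinity:

import torch

def manual_bce(x, y, w=1.0): # hypothetical helper: -w*(y*log(x)+(1-y)*log(1-x))
    log_x = torch.clamp(torch.log(x), min=-100.0)        # clamp log like BCELoss
    log_1mx = torch.clamp(torch.log(1.0 - x), min=-100.0)
    return -w * (y * log_x + (1.0 - y) * log_1mx)

x = torch.tensor([0.4, 0.8, 0.6, 0.3, 0.0, 0.5])
y = torch.tensor([0.2, 0.9, 0.4, 0.1, 0.8, 0.5])

manual_bce(x, y)
# tensor([0.5919, 0.3618, 0.7541, 0.4414, 80.0000, 0.6931])
manual_bce(x, y).mean() # matches reduction='mean'
# tensor(13.8071)
manual_bce(x, y).sum() # matches reduction='sum'
# tensor(82.8423)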
import torch
from torch import nn
tensor1 = torch.tensor([0.4, 0.8, 0.6, 0.3, 0.0, 0.5])
tensor2 = torch.tensor([0.2, 0.9, 0.4, 0.1, 0.8, 0.5])
# -w*(y*log(x)+(1-y)*log(1-x))
# -1*(0.2*log(0.4)+(1-0.2)*log(1-0.4)) = 0.5919
# ↓↓↓↓↓↓
# 0.5919+0.3618+0.7541+0.4414+80.0+0.6931 = 82.8423
# 82.8423 / 6 = 13.8071
bceloss = nn.BCELoss()
bceloss(input=tensor1, target=tensor2)
# tensor(13.8071)
bceloss
# BCELoss()
print(bceloss.weight)
# None
bceloss.reduction
# 'mean'
bceloss = nn.BCELoss(weight=None, reduction='mean')
bceloss(input=tensor1, target=tensor2)
# tensor(13.8071)
bceloss = nn.BCELoss(reduction='sum')
bceloss(input=tensor1, target=tensor2)
# tensor(82.8423)
bceloss = nn.BCELoss(reduction='none')
bceloss(input=tensor1, target=tensor2)
# tensor([0.5919, 0.3618, 0.7541, 0.4414, 80.0000, 0.6931])
bceloss = nn.BCELoss(weight=torch.tensor([0., 1., 2., 3., 4., 5.]))
bceloss(input=tensor1, target=tensor2)
# tensor(54.4433)
bceloss = nn.BCELoss(weight=torch.tensor([0.]))
bceloss(input=tensor1, target=tensor2)
# tensor(0.)
bceloss = nn.BCELoss(weight=torch.tensor([0, 1, 2, 3, 4, 5]))
bceloss(input=tensor1, target=tensor2)
# tensor(54.4433)
bceloss = nn.BCELoss(weight=torch.tensor([0]))
bceloss(input=tensor1, target=tensor2)
# tensor(0.)
bceloss = nn.BCELoss(
weight=torch.tensor([True, False, True, False, True, False])
)
bceloss(input=tensor1, target=tensor2)
# tensor(13.5577)
bceloss = nn.BCELoss(weight=torch.tensor([False]))
bceloss(input=tensor1, target=tensor2)
# tensor(0.)
tensor1 = torch.tensor([[0.4, 0.8, 0.6], [0.3, 0.0, 0.5]])
tensor2 = torch.tensor([[0.2, 0.9, 0.4], [0.1, 0.8, 0.5]])
bceloss = nn.BCELoss()
bceloss(input=tensor1, target=tensor2)
# tensor(13.8071)
tensor1 = torch.tensor([[[0.4], [0.8], [0.6]], [[0.3], [0.0], [0.5]]])
tensor2 = torch.tensor([[[0.2], [0.9], [0.4]], [[0.1], [0.8], [0.5]]])
bceloss = nn.BCELoss()
bceloss(input=tensor1, target=tensor2)
# tensor(13.8071)
tensor1 = torch.tensor([])
tensor2 = torch.tensor([])
bceloss = nn.BCELoss(reduction='mean')
bceloss(input=tensor1, target=tensor2)
# tensor(nan)
bceloss = nn.BCELoss(reduction='sum')
bceloss(input=tensor1, target=tensor2)
# tensor(0.)
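The memos above say input must satisfy 0<=x<=1 and that input and target must be the same size. The quick sketch below (with made-up tensors, not from the examples above) shows that both violations raise an exception; the exception types noted in the comments are what recent PyTorch versions raise:

import torch
from torch import nn

bceloss = nn.BCELoss()

try: # an input value outside 0<=x<=1
    bceloss(input=torch.tensor([1.5, 0.5]), target=torch.tensor([1.0, 0.0]))
except Exception as e:
    print(type(e).__name__) # RuntimeError

try: # input and target of different sizes
    bceloss(input=torch.tensor([0.4, 0.8, 0.6]), target=torch.tensor([0.2, 0.9]))
except Exception as e:
    print(type(e).__name__) # ValueError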