DEV Community

Super Kai (Kazuya Ito)

BatchNorm2d() in PyTorch

BatchNorm2d() applies 2D Batch Normalization to a 4D input tensor of zero or more elements and returns a 4D tensor of the same shape, as shown below:

*Memos:

  • The 1st argument for initialization is num_features(Required-Type:int). *It must satisfy 1 <= num_features.
  • The 2nd argument for initialization is eps(Optional-Default:1e-05-Type:float).
  • The 3rd argument for initialization is momentum(Optional-Default:0.1-Type:float).
  • The 4th argument for initialization is affine(Optional-Default:True-Type:bool).
  • The 5th argument for initialization is track_running_stats(Optional-Default:True-Type:bool).
  • The 6th argument for initialization is device(Optional-Default:None-Type:str, int or device()).
  • The 7th argument for initialization is dtype(Optional-Default:None-Type:torch.dtype).
  • The 1st argument when calling it is input(Required-Type:tensor of float).
  • The output tensor's requires_grad is True even though the input tensor's requires_grad is False by default, because BatchNorm2d() has learnable parameters.
  • The input tensor's device and dtype must be the same as BatchNorm2d()'s device and dtype respectively.
  • batchnorm2d1.device and batchnorm2d1.dtype don't work; use batchnorm2d1.weight.device and batchnorm2d1.weight.dtype instead.
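As a quick sanity check (my own addition, not part of the original examples), the values BatchNorm2d() produces in training mode with the default affine parameters (weight initialized to 1, bias to 0) can be reproduced by hand with the documented formula y = (x - mean) / sqrt(var + eps), where the mean and the biased variance are taken over the batch, height and width dimensions per channel:

```python
import torch
from torch import nn

x = torch.tensor([[[[8., -3., 0., 1., 5., -2.]]]])

bn = nn.BatchNorm2d(num_features=1)
expected = bn(x)

# Manual computation: statistics over dims (0, 2, 3), i.e. everything
# except the channel dim; the variance is the biased estimate.
mean = x.mean(dim=(0, 2, 3), keepdim=True)
var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
manual = (x - mean) / torch.sqrt(var + bn.eps)

print(torch.allclose(expected, manual, atol=1e-6))  # True
```

This reproduces the tensor([[[[1.6830, -1.1651, ...]]]]) values shown in the examples in this post.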
import torch
from torch import nn

tensor1 = torch.tensor([[[[8., -3., 0., 1., 5., -2.]]]])

tensor1.requires_grad
# False

batchnorm2d1 = nn.BatchNorm2d(num_features=1)
tensor2 = batchnorm2d1(input=tensor1)
tensor2
# tensor([[[[1.6830, -1.1651, -0.3884, -0.1295, 0.9062, -0.9062]]]],
#        grad_fn=<NativeBatchNormBackward0>)

tensor2.requires_grad
# True

batchnorm2d1
# BatchNorm2d(1, eps=1e-05, momentum=0.1, affine=True,
#             track_running_stats=True)

batchnorm2d1.num_features
# 1

batchnorm2d1.eps
# 1e-05

batchnorm2d1.momentum
# 0.1

batchnorm2d1.affine
# True

batchnorm2d1.track_running_stats
# True

batchnorm2d2 = nn.BatchNorm2d(num_features=1)
batchnorm2d2(input=tensor2)
# tensor([[[[1.6830, -1.1651, -0.3884, -0.1295, 0.9062, -0.9062]]]],
#        grad_fn=<NativeBatchNormBackward0>)

batchnorm2d = nn.BatchNorm2d(num_features=1, eps=1e-05, momentum=0.1, 
                             affine=True, track_running_stats=True, 
                             device=None, dtype=None)
batchnorm2d(input=tensor1)
# tensor([[[[1.6830, -1.1651, -0.3884, -0.1295, 0.9062, -0.9062]]]],
#        grad_fn=<NativeBatchNormBackward0>)

my_tensor = torch.tensor([[[[8., -3., 0.],
                            [1., 5., -2.]]]])
batchnorm2d = nn.BatchNorm2d(num_features=1)
batchnorm2d(input=my_tensor)
# tensor([[[[1.6830, -1.1651, -0.3884],
#           [-0.1295, 0.9062, -0.9062]]]],
#        grad_fn=<NativeBatchNormBackward0>)

my_tensor = torch.tensor([[[[8.], [-3.], [0.], [1.], [5.], [-2.]]]])

batchnorm2d = nn.BatchNorm2d(num_features=1)
batchnorm2d(input=my_tensor)
# tensor([[[[1.6830], [-1.1651], [-0.3884], [-0.1295], [0.9062], [-0.9062]]]],
#        grad_fn=<NativeBatchNormBackward0>)

my_tensor = torch.tensor([[[[8.], [-3.], [0.]],
                           [[1.], [5.], [-2.]]]])
batchnorm2d = nn.BatchNorm2d(num_features=2)
batchnorm2d(input=my_tensor)
# tensor([[[[1.3641], [-1.0051], [-0.3590]],
#          [[-0.1162], [1.2787], [-1.1625]]]],
#        grad_fn=<NativeBatchNormBackward0>)
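The momentum and track_running_stats arguments listed above control the running statistics that BatchNorm2d() uses in eval mode. A minimal sketch (my own example, not from the post) of how a single training-mode forward pass updates them with the default momentum=0.1:

```python
import torch
from torch import nn

bn = nn.BatchNorm2d(num_features=1)
x = torch.tensor([[[[8., -3., 0., 1., 5., -2.]]]])

# running_mean starts at 0 and running_var at 1. Each training-mode
# forward pass updates them as:
#   running_mean = (1 - momentum) * running_mean + momentum * batch_mean
#   running_var  = (1 - momentum) * running_var  + momentum * batch_var (unbiased)
bn(x)
print(bn.running_mean)  # tensor([0.1500]) since the batch mean is 1.5
print(bn.running_var)   # tensor([2.6900]) since the unbiased batch var is 17.9

bn.eval()  # in eval mode, the running statistics are used instead of batch statistics
```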