Super Kai (Kazuya Ito)


BatchNorm2d in PyTorch


BatchNorm2d() can apply 2D Batch Normalization to a 4D tensor of zero or more elements, returning a 4D tensor of zero or more elements, as shown below:

*Memos:

  • The 1st argument for initialization is num_features(Required-Type:int). *It must be 1 <= x.
  • The 2nd argument for initialization is eps(Optional-Default:1e-05-Type:float).
  • The 3rd argument for initialization is momentum(Optional-Default:0.1-Type:float).
  • The 4th argument for initialization is affine(Optional-Default:True-Type:bool).
  • The 5th argument for initialization is track_running_stats(Optional-Default:True-Type:bool).
  • The 6th argument for initialization is device(Optional-Default:None-Type:str, int or device()).
  • The 7th argument for initialization is dtype(Optional-Default:None-Type:dtype).
  • The 1st argument is input(Required-Type:tensor of float): *Memos:
    • It must be the 4D tensor of zero or more elements.
    • The number of the elements of the 2nd shallowest dimension (the channel dimension) must be the same as num_features.
    • Its device and dtype must be the same as BatchNorm2d()'s.
    • The output tensor's requires_grad is set to True by BatchNorm2d() even though the input tensor's requires_grad is False by default.
  • A BatchNorm2d() instance has no device or dtype attribute, so batchnorm2d1.device and batchnorm2d1.dtype don't work.
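As a quick check of the channel-count requirement above, here is a minimal sketch (shapes chosen only for illustration) showing that the 2nd shallowest dimension of input must match num_features, and that a mismatch is rejected with a RuntimeError:

```python
import torch
from torch import nn

batchnorm2d = nn.BatchNorm2d(num_features=3)

ok = torch.randn(1, 3, 4, 4)  # 3 channels, matching num_features=3
print(batchnorm2d(input=ok).shape)
# torch.Size([1, 3, 4, 4])

bad = torch.randn(1, 2, 4, 4)  # 2 channels, not matching num_features=3
try:
    batchnorm2d(input=bad)
except RuntimeError as e:
    print(type(e).__name__)
# RuntimeError
```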
import torch
from torch import nn

tensor1 = torch.tensor([[[[8., -3., 0., 1., 5., -2.]]]])

tensor1.requires_grad
# False

batchnorm2d1 = nn.BatchNorm2d(num_features=1)
tensor2 = batchnorm2d1(input=tensor1)
tensor2
# tensor([[[[1.6830, -1.1651, -0.3884, -0.1295, 0.9062, -0.9062]]]],
#        grad_fn=<NativeBatchNormBackward0>)

tensor2.requires_grad
# True

batchnorm2d1
# BatchNorm2d(1, eps=1e-05, momentum=0.1, affine=True,
#             track_running_stats=True)

batchnorm2d1.num_features
# 1

batchnorm2d1.eps
# 1e-05

batchnorm2d1.momentum
# 0.1

batchnorm2d1.affine
# True

batchnorm2d1.track_running_stats
# True

batchnorm2d2 = nn.BatchNorm2d(num_features=1)
batchnorm2d2(input=tensor2)
# tensor([[[[1.6830, -1.1651, -0.3884, -0.1295, 0.9062, -0.9062]]]],
#        grad_fn=<NativeBatchNormBackward0>)

batchnorm2d = nn.BatchNorm2d(num_features=1, eps=1e-05, momentum=0.1, 
                             affine=True, track_running_stats=True, 
                             device=None, dtype=None)
batchnorm2d(input=tensor1)
# tensor([[[[1.6830, -1.1651, -0.3884, -0.1295, 0.9062, -0.9062]]]],
#        grad_fn=<NativeBatchNormBackward0>)

my_tensor = torch.tensor([[[[8., -3., 0.],
                            [1., 5., -2.]]]])
batchnorm2d = nn.BatchNorm2d(num_features=1)
batchnorm2d(input=my_tensor)
# tensor([[[[1.6830, -1.1651, -0.3884],
#           [-0.1295, 0.9062, -0.9062]]]],
#        grad_fn=<NativeBatchNormBackward0>)

my_tensor = torch.tensor([[[[8.], [-3.], [0.], [1.], [5.], [-2.]]]])

batchnorm2d = nn.BatchNorm2d(num_features=1)
batchnorm2d(input=my_tensor)
# tensor([[[[1.6830], [-1.1651], [-0.3884], [-0.1295], [0.9062], [-0.9062]]]],
#        grad_fn=<NativeBatchNormBackward0>)

my_tensor = torch.tensor([[[[8.], [-3.], [0.]],
                           [[1.], [5.], [-2.]]]])
batchnorm2d = nn.BatchNorm2d(num_features=2)
batchnorm2d(input=my_tensor)
# tensor([[[[1.3641], [-1.0051], [-0.3590]],
#          [[-0.1162], [1.2787], [-1.1625]]]],
#        grad_fn=<NativeBatchNormBackward0>)
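Under the hood, with the affine parameters at their initial values (weight=1, bias=0), the outputs above match the formula (x - mean) / sqrt(var + eps), where the mean and the biased variance are taken per channel over the batch and spatial dimensions. A minimal sketch reproducing the first example:

```python
import torch
from torch import nn

x = torch.tensor([[[[8., -3., 0., 1., 5., -2.]]]])

batchnorm2d = nn.BatchNorm2d(num_features=1)
out = batchnorm2d(input=x)

mean = x.mean(dim=(0, 2, 3), keepdim=True)                # per-channel mean
var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)  # biased variance
manual = (x - mean) / torch.sqrt(var + batchnorm2d.eps)

print(torch.allclose(out, manual))
# True
```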
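Because track_running_stats=True by default, each training-mode forward pass also updates running_mean and running_var with momentum (the running variance uses the unbiased batch variance); eval() mode then normalizes with these running statistics instead of the batch statistics. A short sketch:

```python
import torch
from torch import nn

x = torch.tensor([[[[8., -3., 0., 1., 5., -2.]]]])

batchnorm2d = nn.BatchNorm2d(num_features=1)
print(batchnorm2d.running_mean, batchnorm2d.running_var)
# tensor([0.]) tensor([1.])

batchnorm2d(input=x)  # a training-mode forward pass updates the running stats
print(batchnorm2d.running_mean)  # 0.9 * 0 + 0.1 * 1.5 (batch mean)
# tensor([0.1500])
print(batchnorm2d.running_var)   # 0.9 * 1 + 0.1 * 17.9 (unbiased batch var)
# tensor([2.6900])
```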

