Super Kai (Kazuya Ito)

Linear in PyTorch


Linear() can get the 1D or more D tensor of zero or more elements computed by affine transformation (y = xW^T + b) from the 1D or more D tensor of zero or more elements, as shown below:

*Memos:

  • The 1st argument for initialization is in_features(Required-Type:int). *It must be 0 <= x.
  • The 2nd argument for initialization is out_features(Required-Type:int): *Memos:
    • It must be 0 <= x.
    • 0 is possible, but a warning occurs.
  • The 3rd argument for initialization is bias(Optional-Default:True-Type:bool). *My post explains bias argument.
  • The 4th argument for initialization is device(Optional-Default:None-Type:str, int or device()).
  • The 5th argument for initialization is dtype(Optional-Default:None-Type:dtype).
  • The 1st argument is input(Required-Type:tensor of float or complex): *Memos:
    • It must be the 1D or more D tensor of zero or more elements.
    • The number of the elements of the deepest dimension must be the same as in_features.
    • Its device and dtype must be the same as Linear()'s.
    • A complex dtype must be set to Linear()'s dtype to use a complex tensor.
    • The returned tensor's requires_grad, which is False for the input by default, is set to True by Linear().
  • linear1.device and linear1.dtype don't work.
import torch
from torch import nn

tensor1 = torch.tensor([8., -3., 0., 1., 5., -2.])

tensor1.requires_grad
# False

torch.manual_seed(42)

linear1 = nn.Linear(in_features=6, out_features=3)
tensor2 = linear1(input=tensor1)
tensor2
# tensor([1.0529, -0.8833, 3.4542], grad_fn=<ViewBackward0>)

tensor2.requires_grad
# True

linear1
# Linear(in_features=6, out_features=3, bias=True)

linear1.in_features
# 6

linear1.out_features
# 3

linear1.bias
# Parameter containing:
# tensor([-0.1906, 0.1041, -0.1881], requires_grad=True)

linear1.weight
# Parameter containing:
# tensor([[0.3121, 0.3388, -0.0956, 0.3750, -0.0894, 0.0824],
#         [-0.1988, 0.2398, 0.3599, -0.2995, 0.3548, 0.0764],
#         [0.3016, 0.0553, 0.1969, -0.0576, 0.3147, 0.0603]],
#        requires_grad=True)
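
Linear() applies the affine transformation y = xW^T + b, so tensor2 can be reproduced manually with the weight and bias above (a minimal sketch reusing tensor1 and linear1; the values should match tensor2):

tensor1 @ linear1.weight.T + linear1.bias
# Should match tensor2:
# tensor([1.0529, -0.8833, 3.4542], grad_fn=<AddBackward0>)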

torch.manual_seed(42)

linear2 = nn.Linear(in_features=3, out_features=3)
linear2(input=tensor2)
# tensor([-0.8493, 1.5744, 1.2707], grad_fn=<ViewBackward0>)
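
The same chain can also be written with nn.Sequential (a minimal sketch reusing linear1 and linear2 from above); the result should match linear2(input=tensor2):

model = nn.Sequential(linear1, linear2)
model(tensor1)
# Should match the output above:
# tensor([-0.8493, 1.5744, 1.2707], grad_fn=<ViewBackward0>)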

torch.manual_seed(42)

linear = nn.Linear(in_features=6, out_features=3, bias=True,
                   device=None, dtype=None)
linear(input=tensor1)
# tensor([1.0529, -0.8833, 3.4542], grad_fn=<ViewBackward0>)
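
As noted in the memos, the input tensor's device and dtype must be the same as Linear()'s; a mismatch raises an error (a minimal sketch, assuming the default float32 Linear() above and a float64 input; the exact message depends on the PyTorch version):

linear(input=tensor1.to(dtype=torch.float64))
# RuntimeError: the float64 input doesn't match the float32 weight of Linear()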

my_tensor = torch.tensor([[8., -3., 0.],
                          [1., 5., -2.]])
torch.manual_seed(42)

linear = nn.Linear(in_features=3, out_features=3)
linear(input=my_tensor)
# tensor([[1.6701, 5.1242, -3.1578],
#         [2.6844, 0.1667, 0.5044]], grad_fn=<AddmmBackward0>)

my_tensor = torch.tensor([[[8.], [-3.], [0.]],
                          [[1.], [5.], [-2.]]])
torch.manual_seed(42)

linear = nn.Linear(in_features=1, out_features=3)
linear(input=my_tensor)
# tensor([[[7.0349, 6.4210, -1.6724],
#          [-1.3750, -2.7091, 0.9046],
#          [0.9186, -0.2191, 0.2018]],
#         [[1.6831, 0.6109, -0.0325],
#          [4.7413, 3.9309, -0.9696],
#          [-0.6105, -1.8791, 0.6703]]], grad_fn=<ViewBackward0>)
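
Only the deepest dimension changes from in_features to out_features, while the other dimensions are kept (a quick check of the shapes in the 3D example above):

my_tensor.shape
# torch.Size([2, 3, 1])

linear(input=my_tensor).shape
# torch.Size([2, 3, 3])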

my_tensor = torch.tensor([[[8.+0.j], [-3.+0.j], [0.+0.j]],
                          [[1.+0.j], [5.+0.j], [-2.+0.j]]])
torch.manual_seed(42)

linear = nn.Linear(in_features=1, out_features=3, dtype=torch.complex64)
linear(input=my_tensor)
# tensor([[[5.6295+7.2273j, -0.9926+6.6153j, -0.8836+1.8015j],
#          [-2.7805-1.9027j, 1.5844-3.4895j, 1.5265-0.4182j],
#          [-0.4869+0.5873j, 0.8815-0.7336j, 0.8692+0.1872j]],
#         [[0.2777+1.4173j, 0.6473+0.1850j, 0.6501+0.3889j],
#          [3.3358+4.7373j, -0.2898+3.8594j, -0.2263+1.1961j],
#          [-2.0159-1.0727j, 1.3501-2.5709j, 1.3074-0.2164j]]],
#        grad_fn=<ViewBackward0>)
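
Setting dtype=torch.complex64 makes weight and bias complex as well (a quick check on the linear created just above):

linear.weight.dtype
# torch.complex64

linear.bias.dtype
# torch.complex64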