*Memos:
- [Warning] normal() is really tricky.
- You can use manual_seed() with normal(). *My post explains manual_seed(). *A short sketch follows these memos.
- My post explains rand() and rand_like().
- My post explains randn() and randn_like().
- My post explains randint() and randperm().
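The sketch below is not part of the original examples (the seed value 42 is an arbitrary choice); it shows how manual_seed() makes normal() reproducible:

import torch

torch.manual_seed(42)
torch.normal(mean=torch.tensor([1., 2., 3.]))
# A tensor of 3 random values.

torch.manual_seed(42)
torch.normal(mean=torch.tensor([1., 2., 3.]))
# The same tensor again, because the random state was reset to the same seed.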
normal() can create a 0D or more-D tensor of zero or more random floating-point numbers or complex numbers from the normal distribution, as shown below:
*Memos:
- normal() can be used with torch but not with a tensor.
- The 1st argument with torch is mean (Required-Type: float or complex or tensor of float or complex): *Memos:
  - Setting mean without std and size is tensor of float or complex.
  - Setting mean and std without size is float or tensor of float or complex.
  - Setting mean, std and size is float or tensor of float. *The 0D tensor of float also works.
- The 2nd argument with torch is std (Optional-Type: float or tensor of float): *Memos:
  - It is standard deviation.
  - It must be greater than or equal to 0.
  - Setting std without size is float or tensor of float.
  - Setting std with size is float or tensor of float. *The 0D tensor of float also works.
- The 3rd argument with torch is size (Optional-Type: tuple of int, list of int or size()): *Memos:
  - It must be used with std.
  - It must not be negative. *The error sketch right after this list illustrates the std and size constraints.
- There is dtype argument with torch (Optional-Default: None-Type: dtype): *Memos:
  - If it's None, it's inferred from mean or std, then for floating-point numbers, get_default_dtype() is used. *My post explains get_default_dtype() and set_default_dtype().
  - dtype= must be used.
  - My post explains dtype argument.
- There is device argument with torch (Optional-Default: None-Type: str, int or device()): *Memos:
  - If it's None, get_default_device() is used. *My post explains get_default_device() and set_default_device().
  - device= must be used.
  - My post explains device argument.
- There is requires_grad argument with torch (Optional-Default: False-Type: bool): *Memos:
  - requires_grad= must be used.
  - My post explains requires_grad argument.
- There is out argument with torch (Optional-Default: None-Type: tensor): *Memos:
  - out= must be used.
  - My post explains out argument.
- A short sketch using dtype, device, requires_grad and out appears after the examples below.
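Before the main examples, here is a minimal error sketch for the std and size constraints above. It is not from the original examples; the exception types are assumptions and the exact messages vary by PyTorch version:

import torch

try:
    torch.normal(mean=1., size=(3,))  # size must be used with std
except (TypeError, RuntimeError) as err:
    print(type(err).__name__)

try:
    torch.normal(mean=1., std=-4., size=(3,))  # std must be >= 0
except (TypeError, RuntimeError) as err:
    print(type(err).__name__)

try:
    torch.normal(mean=1., std=4., size=(-3,))  # size must not be negative
except (TypeError, RuntimeError) as err:
    print(type(err).__name__)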
import torch
torch.normal(mean=torch.tensor([1., 2., 3.]))
# tensor([1.2713, 0.7271, 3.5027])
torch.normal(mean=torch.tensor([1.+0.j, 2.+0.j, 3.+0.j]))
# tensor([1.1918-0.9001j, 2.3555+0.2956j, 2.5479-0.4672j])
torch.normal(mean=torch.tensor([1., 2., 3.]),
             std=torch.tensor([4., 5., 6.]))
# tensor([2.0851, -4.3646, 6.0162])
torch.normal(mean=torch.tensor([1.+0.j, 2.+0.j, 3.+0.j]),
             std=torch.tensor([4., 5., 6.]))
# tensor([1.7673-3.6004j, 3.7773+1.4781j, 0.2872-2.8034j])
torch.normal(mean=torch.tensor([1., 2., 3.]), std=4.)
# tensor([2.0851, -3.0917, 5.0108])
torch.normal(mean=torch.tensor([1.+0.j, 2.+0.j, 3.+0.j]), std=4.)
# tensor([1.7673-3.6004j, 3.4218+1.1825j, 1.1914-1.8689j])
torch.normal(mean=1., std=torch.tensor([4., 5., 6.]))
# tensor([2.0851, -5.3646, 4.0162])
torch.normal(mean=1., std=4., size=())
torch.normal(mean=1., std=4., size=torch.tensor(8).size())
torch.normal(mean=torch.tensor(1.), std=torch.tensor(4.), size=())
# tensor(2.0851)
torch.normal(mean=1., std=4., size=(3,))
torch.normal(mean=1., std=4., size=torch.tensor([8, 3, 6]).size())
torch.normal(mean=torch.tensor(1.), std=torch.tensor(4.), size=(3,))
# tensor([2.0851, -4.0917, 3.0108])
torch.normal(mean=1., std=4., size=(3, 2))
torch.normal(mean=1., std=4.,
             size=torch.tensor([[8, 3], [6, 0], [2, 9]]).size())
torch.normal(mean=torch.tensor(1.), std=torch.tensor(4.), size=(3, 2))
# tensor([[2.0851, -4.0917],
#         [3.0108, 2.6723],
#         [-1.5577, -1.6431]])
torch.normal(mean=1., std=4., size=(3, 2, 4))
torch.normal(mean=torch.tensor(1.), std=torch.tensor(4.), size=(3, 2, 4))
# tensor([[[-3.7568, 6.5729, 9.4236, -0.4183],
#          [2.4840, 5.3827, 9.5657, 1.5267]],
#         [[8.0575, -0.5000, -0.3416, 5.3502],
#          [-4.3835, 1.6974, 2.6226, -1.9671]],
#         [[1.1422, 1.7790, 4.5886, -0.3273],
#          [2.8941, -3.3046, 1.1336, 2.8792]]])
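The examples above only use mean, std and size. The sketch below is not from the original examples ('cpu' is just an example device); it shows the keyword-only arguments with the float mean and std plus size form:

import torch

torch.normal(mean=1., std=4., size=(3,), dtype=torch.float64)
# 1D tensor with dtype torch.float64

torch.normal(mean=1., std=4., size=(3,), device='cpu')
# 1D tensor placed on the CPU; 'cuda:0' should also work if a GPU is available

torch.normal(mean=1., std=4., size=(3,), requires_grad=True)
# 1D tensor that autograd will track

out_tensor = torch.empty(3)
torch.normal(mean=1., std=4., size=(3,), out=out_tensor)
# The result is written into out_tensor.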