
Super Kai (Kazuya Ito)


float_power in PyTorch



float_power() can get the 0D or more D tensor of the zero or more element-wise powers, computed in double precision (float64, or complex128 if any input is complex), from two of the 0D or more D tensors of zero or more elements, or from one 0D or more D tensor of zero or more elements and a scalar, as shown below:

*Memos:

  • float_power() can be used with torch or a tensor.
  • The 1st argument with torch is input (Required-Type: tensor or scalar of int, float, complex or bool); with a tensor, the tensor itself takes this role (Type: tensor of int, float, complex or bool). *With torch, a scalar must be passed positionally, without input=.
  • The 2nd argument with torch or the 1st argument with a tensor is exponent (Required-Type: tensor or scalar of int, float, complex or bool).
  • There is out argument with torch (Optional-Default: None-Type: tensor): *Memos:
    • out= must be used (see the sketch after this list).
    • My post explains out argument.
  • A scalar input and a scalar exponent cannot be combined; at least one of them must be a tensor (see the sketch after this list).
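
Below is a minimal sketch of the out= usage and the no-two-scalars rule (the tensor values and the result variable are illustrative, not from the original examples, and the exact error message may vary by PyTorch version):

import torch

tensor1 = torch.tensor([-3., 1., -2., 3.])
tensor2 = torch.tensor([2., 3., 2., 3.])

# float_power() returns a float64 tensor, so the preallocated
# out tensor must be float64 as well.
result = torch.empty(4, dtype=torch.float64)
torch.float_power(input=tensor1, exponent=tensor2, out=result)
# result: tensor([9., 1., 4., 27.], dtype=torch.float64)

# A scalar input and a scalar exponent cannot be combined:
try:
    torch.float_power(-3., exponent=2.)
except TypeError as err:
    print(err)
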
import torch

tensor1 = torch.tensor(-3.)
tensor2 = torch.tensor([-4., -3., -2., -1., 0., 1., 2., 3.])

torch.float_power(input=tensor1, exponent=tensor2)
tensor1.float_power(exponent=tensor2)
# tensor([1.2346e-02, -3.7037e-02, 1.1111e-01, -3.3333e-01,  
#         1.0000e+00, -3.0000e+00, 9.0000e+00, -2.7000e+01], 
#        dtype=torch.float64)

torch.float_power(-3., exponent=tensor2)
# tensor([1.2346e-02, -3.7037e-02, 1.1111e-01, -3.3333e-01,
#         1.0000e+00, -3.0000e+00, 9.0000e+00, -2.7000e+01],
#        dtype=torch.float64)

torch.float_power(input=tensor1, exponent=-3.)
# tensor(-0.0370, dtype=torch.float64)

tensor1 = torch.tensor([-3., 1., -2., 3., 5., -5., 0., -4.])
tensor2 = torch.tensor([-4., -3., -2., -1., 0., 1., 2., 3.])

torch.float_power(input=tensor1, exponent=tensor2)
# tensor([1.2346e-02, 1.0000e+00, 2.5000e-01, 3.3333e-01,
#         1.0000e+00, -5.0000e+00, 0.0000e+00, -6.4000e+01],
#        dtype=torch.float64)

torch.float_power(-3., exponent=tensor2)
# tensor([1.2346e-02, -3.7037e-02, 1.1111e-01, -3.3333e-01,  
#         1.0000e+00, -3.0000e+00, 9.0000e+00, -2.7000e+01],
#        dtype=torch.float64)

torch.float_power(input=tensor1, exponent=-3.)
# tensor([-0.0370, 1.0000, -0.1250, 0.0370,
#         0.0080, -0.0080, inf, -0.0156], dtype=torch.float64)

tensor1 = torch.tensor([[-3., 1., -2., 3.], [5., -5., 0., -4.]])
tensor2 = torch.tensor([0., 1., 2., 3.])

torch.float_power(input=tensor1, exponent=tensor2)
# tensor([[1., 1., 4., 27.], [1., -5., 0., -64.]],
#        dtype=torch.float64)

torch.float_power(-3., exponent=tensor2)
# tensor([1., -3., 9., -27.], dtype=torch.float64)

torch.float_power(input=tensor1, exponent=-3.)
# tensor([[-0.0370, 1.0000, -0.1250, 0.0370],
#         [0.0080, -0.0080, inf, -0.0156]], 
#        dtype=torch.float64)

tensor1 = torch.tensor([[[-3., 1.], [-2., 3.]],
                        [[5., -5.], [0., -4.]]])
tensor2 = torch.tensor([2., 3.])

torch.float_power(input=tensor1, exponent=tensor2)
# tensor([[[9., 1.], [4., 27.]],
#         [[25., -125.], [0., -64.]]], dtype=torch.float64)

torch.float_power(-3., exponent=tensor2)
# tensor([9., -27.], dtype=torch.float64)

torch.float_power(input=tensor1, exponent=-3.)
# tensor([[[-0.0370, 1.0000], [-0.1250, 0.0370]],
#         [[0.0080, -0.0080], [inf, -0.0156]]], 
#        dtype=torch.float64)

tensor1 = torch.tensor([[[-3, 1], [-2, 3]],
                        [[5, -5], [0, -4]]])
tensor2 = torch.tensor([2, 3])

torch.float_power(input=tensor1, exponent=tensor2)
# tensor([[[9., 1.], [4., 27.]],
#         [[25., -125.], [0., -64.]]], dtype=torch.float64)

torch.float_power(-3, exponent=tensor2)
# tensor([9., -27.], dtype=torch.float64)

torch.float_power(input=tensor1, exponent=-3)
# tensor([[[-0.0370, 1.0000], [-0.1250, 0.0370]],
#         [[0.0080, -0.0080], [inf, -0.0156]]], 
#        dtype=torch.float64)

tensor1 = torch.tensor([[[-3.+0.j, 1.+0.j], [-2.+0.j, 3.+0.j]],
                        [[5.+0.j, -5.+0.j], [0.+0.j, -4.+0.j]]])
tensor2 = torch.tensor([2.+0.j, 3.+0.j])

torch.float_power(input=tensor1, exponent=tensor2)
# tensor([[[9.0000-2.2044e-15j, 1.0000+0.0000e+00j],
#          [4.0000-9.7972e-16j, 27.0000+0.0000e+00j]],
#         [[25.0000+0.0000e+00j, -125.0000+4.5924e-14j],
#          [0.0000-0.0000e+00j, -64.0000+2.3513e-14j]]],
#        dtype=torch.complex128)

torch.float_power(-3.+0.j, exponent=tensor2)
# tensor([9.0000-2.2044e-15j, -27.0000+9.9196e-15j],
#        dtype=torch.complex128)

torch.float_power(input=tensor1, exponent=-3.+0.j)
# tensor([[[-0.0370-1.3607e-17j, 1.0000+0.0000e+00j],
#          [-0.1250-4.5924e-17j, 0.0370+0.0000e+00j]],
#         [[0.0080+0.0000e+00j, -0.0080-2.9392e-18j],
#          [inf+nanj, -0.0156-5.7405e-18j]]], 
#        dtype=torch.complex128)

tensor1 = torch.tensor([[[True, False], [True, False]],
                         [[False, True], [False, True]]])
tensor2 = torch.tensor([True, False])

torch.float_power(input=tensor1, exponent=tensor2)
# tensor([[[1., 1.], [1., 1.]],
#         [[0., 1.], [0., 1.]]], dtype=torch.float64)

torch.float_power(True, exponent=tensor2)
# tensor([1., 1.], dtype=torch.float64)

torch.float_power(input=tensor1, exponent=True)
# tensor([[[1., 0.], [1., 0.]],
#         [[0., 1.], [0., 1.]]], dtype=torch.float64)
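
For contrast, here is a minimal sketch (not from the original examples) showing that float_power() always computes in double precision, while pow() keeps the input dtype:

import torch

tensor1 = torch.tensor([2., 3.])  # float32 by default

torch.pow(tensor1, 2)
# tensor([4., 9.])

torch.float_power(tensor1, 2)
# tensor([4., 9.], dtype=torch.float64)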