<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bodhisatva Tiwari</title>
    <description>The latest articles on DEV Community by Bodhisatva Tiwari (@bodhitiwari).</description>
    <link>https://dev.to/bodhitiwari</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3043802%2F05aed339-b016-4bd1-9e1b-f8bed70a0da1.jpeg</url>
      <title>DEV Community: Bodhisatva Tiwari</title>
      <link>https://dev.to/bodhitiwari</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bodhitiwari"/>
    <language>en</language>
    <item>
      <title>NumPy: An Engineer-Level Guide to Arrays, Math, Randomness, and Linear Algebra</title>
      <dc:creator>Bodhisatva Tiwari</dc:creator>
      <pubDate>Wed, 17 Dec 2025 17:18:57 +0000</pubDate>
      <link>https://dev.to/bodhitiwari/numpy-an-engineer-level-guide-to-arrays-math-randomness-and-linear-algebra-22o6</link>
      <guid>https://dev.to/bodhitiwari/numpy-an-engineer-level-guide-to-arrays-math-randomness-and-linear-algebra-22o6</guid>
      <description>&lt;p&gt;This article is not a reference dump.&lt;br&gt;
It explains what matters, why it exists, and where it is used in real workflows.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Creating Arrays — The Foundation
Everything in NumPy starts with ndarray.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;np.array()&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.array([1, 2, 3, 4])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Converts Python sequences into contiguous, homogeneous memory blocks.&lt;/p&gt;

&lt;p&gt;Why it matters&lt;br&gt;
Enables vectorized operations&lt;br&gt;
Predictable performance&lt;br&gt;
Eliminates Python loop overhead&lt;/p&gt;
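&lt;p&gt;As a minimal sketch (the values here are made up), one vectorized expression replaces an explicit Python loop over the list:&lt;/p&gt;

```python
import numpy as np

# Hypothetical data: convert a Python list into an ndarray once.
prices = np.array([10.0, 12.5, 9.75, 14.0])

# One vectorized expression operates on the whole memory block.
discounted = prices * 0.9

# The explicit loop computes the same values, element by element.
looped = np.array([p * 0.9 for p in prices])
```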

&lt;p&gt;np.zeros(shape)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.zeros((3, 4))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creates an array filled with zeros.&lt;/p&gt;

&lt;p&gt;Used for&lt;br&gt;
Preallocation in performance-critical loops&lt;br&gt;
Placeholder tensors in ML pipelines&lt;br&gt;
Numerical solvers and simulations&lt;/p&gt;

&lt;p&gt;np.ones(shape)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.ones((3, 3))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creates an array filled with ones.&lt;/p&gt;

&lt;p&gt;Common use&lt;br&gt;
Normalization&lt;br&gt;
Bias initialization&lt;br&gt;
Sanity checks and testing&lt;/p&gt;

&lt;p&gt;np.full(shape, value)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.full((2, 3), 7)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creates an array with a constant value.&lt;/p&gt;

&lt;p&gt;Why&lt;br&gt;
Sentinel values&lt;br&gt;
Mask initialization&lt;br&gt;
Controlled default states&lt;/p&gt;
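&lt;p&gt;The three constructors above can be compared side by side; shapes and the fill value are arbitrary examples:&lt;/p&gt;

```python
import numpy as np

z = np.zeros((3, 4))      # preallocated buffer of zeros
o = np.ones((3, 3))       # all ones, handy for sanity checks
f = np.full((2, 3), 7)    # constant sentinel value
```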

&lt;p&gt;np.arange(start, stop, step)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.arange(0, 10, 2)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creates evenly spaced values (stop excluded).&lt;/p&gt;

&lt;p&gt;Best for&lt;br&gt;
Index-based loops&lt;br&gt;
Discrete ranges&lt;br&gt;
Performance-critical iteration&lt;/p&gt;

&lt;p&gt;np.linspace(start, stop, num)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.linspace(0, 10, 5)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creates evenly spaced values (stop included).&lt;/p&gt;

&lt;p&gt;Used in&lt;br&gt;
Plotting&lt;br&gt;
Simulations&lt;br&gt;
Continuous mathematical domains&lt;/p&gt;
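&lt;p&gt;Side by side, the difference between step-based and count-based spacing looks like this:&lt;/p&gt;

```python
import numpy as np

idx = np.arange(0, 10, 2)       # step-based, stop excluded
grid = np.linspace(0, 10, 5)    # count-based, stop included
```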

&lt;ol&gt;
&lt;li&gt;Array Properties &amp;amp; Shape Control
Understanding shape and memory is non-negotiable.
Core attributes
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;arr.shape   # dimensions
arr.ndim    # number of axes
arr.size    # total elements
arr.dtype   # data type

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Why this matters&lt;br&gt;
Bugs in NumPy are usually shape bugs&lt;br&gt;
Performance depends on correct dimensionality&lt;/p&gt;

&lt;p&gt;reshape()&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;arr.reshape(3, 4)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Changes the shape without copying data when possible: a view is returned if the memory layout allows it, otherwise a copy is made.&lt;/p&gt;

&lt;p&gt;Rule:&lt;br&gt;
Total elements must remain constant.&lt;/p&gt;
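&lt;p&gt;A short sketch of both the element-count rule and the view behavior:&lt;/p&gt;

```python
import numpy as np

arr = np.arange(12)       # 12 elements
m = arr.reshape(3, 4)     # valid: 3 * 4 == 12

# reshape returns a view here, so writes are visible in the original
m[0, 0] = 99
```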

&lt;p&gt;flatten() vs ravel()&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;arr.flatten()   # always returns a copy
arr.ravel()     # returns a view when possible
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Rule&lt;br&gt;
Use ravel() for performance&lt;br&gt;
Use flatten() when isolation is required&lt;/p&gt;
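&lt;p&gt;The copy-versus-view distinction can be observed directly (a contiguous input is assumed, so ravel() yields a view):&lt;/p&gt;

```python
import numpy as np

arr = np.arange(6).reshape(2, 3)

flat = arr.flatten()   # always an independent copy
rav = arr.ravel()      # a view of the contiguous input

arr[0, 0] = 42         # the copy is untouched; the view sees the write
```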

&lt;ol&gt;
&lt;li&gt;Stacking &amp;amp; Axis Manipulation
Combining arrays is common in real pipelines.
Stacking
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.hstack()   # column-wise
np.vstack()   # row-wise
np.dstack()   # depth-wise
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Used when assembling datasets, images, feature blocks.&lt;/p&gt;
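&lt;p&gt;A quick sketch of the resulting shapes, using two arbitrary 2x3 blocks:&lt;/p&gt;

```python
import numpy as np

a = np.ones((2, 3))
b = np.zeros((2, 3))

h = np.hstack([a, b])   # joins along columns: (2, 6)
v = np.vstack([a, b])   # joins along rows: (4, 3)
d = np.dstack([a, b])   # adds a depth axis: (2, 3, 2)
```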

&lt;p&gt;Transposition&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;arr.T
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Swaps axes (rows ↔ columns).&lt;/p&gt;

&lt;p&gt;swapaxes(axis1, axis2)&lt;/p&gt;

&lt;p&gt;Used for 3D+ tensors, common in:&lt;br&gt;
Computer vision&lt;br&gt;
Deep learning&lt;br&gt;
Physics simulations&lt;/p&gt;
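&lt;p&gt;As a hypothetical example, reordering the axes of an image-like tensor:&lt;/p&gt;

```python
import numpy as np

# hypothetical image stored as (height, width, channels)
img = np.zeros((32, 64, 3))

chw = img.swapaxes(0, 2)   # channels first: (3, 64, 32)
```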

&lt;ol&gt;
&lt;li&gt;Broadcasting — Why NumPy Is Fast
Broadcasting lets NumPy operate on arrays of different shapes without copying memory.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Rules (simplified)&lt;br&gt;
Shapes are compared from the trailing (rightmost) dimension&lt;br&gt;
Each pair of dimensions must either match, or&lt;br&gt;
One of them must be 1 (or be missing entirely)&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
(3, 3) + (3,)&lt;br&gt;
The smaller array is virtually expanded, not duplicated.&lt;br&gt;
This is why NumPy avoids Python loops.&lt;/p&gt;
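&lt;p&gt;The (3, 3) + (3,) case above, written out:&lt;/p&gt;

```python
import numpy as np

m = np.zeros((3, 3))
row = np.array([1.0, 2.0, 3.0])   # shape (3,)

# the row is virtually expanded across all 3 rows, never copied
out = m + row
```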

&lt;ol&gt;
&lt;li&gt;Mathematical Operations (Vectorized)
All operations are element-wise by default.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Arithmetic&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.add(a, b)
np.subtract(a, b)
np.multiply(a, b)
np.divide(a, b)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Powers &amp;amp; roots&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.power(a, 2)
np.sqrt(a)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Exponentials &amp;amp; logs&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.exp(a)
np.log(a)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Trigonometry&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.sin(a)
np.cos(a)
np.tan(a)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key idea&lt;br&gt;
No loops. Ever.&lt;/p&gt;
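&lt;p&gt;A compact sketch of element-wise math on an arbitrary array:&lt;/p&gt;

```python
import numpy as np

a = np.array([1.0, 4.0, 9.0])

roots = np.sqrt(a)          # element-wise square roots
back = np.log(np.exp(a))    # exp then log recovers the input
```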

&lt;ol&gt;
&lt;li&gt;Statistical Functions
Central tendency
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.mean(a)
np.median(a)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Spread&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.var(a)
np.std(a)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Variance → average squared deviation from the mean&lt;/p&gt;

&lt;p&gt;Std deviation → square root of the variance, expressed in the original units&lt;/p&gt;
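&lt;p&gt;A worked example with a small hand-checkable dataset (mean 5, variance 4):&lt;/p&gt;

```python
import numpy as np

a = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

var = np.var(a)   # mean squared deviation from the mean
std = np.std(a)   # square root of the variance
```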

&lt;p&gt;Aggregation&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.max(a)
np.min(a)
np.sum(a)
np.cumsum(a)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Index-based results&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.argmax(a)
np.argmin(a)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Returns indices, not values, which is critical in optimization and ML.&lt;/p&gt;
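&lt;p&gt;With a hypothetical loss array, the arg-functions pick out positions rather than values:&lt;/p&gt;

```python
import numpy as np

losses = np.array([0.9, 0.4, 0.7, 0.2])

best = np.argmin(losses)    # position of the smallest loss
worst = np.argmax(losses)   # position of the largest loss
```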

&lt;ol&gt;
&lt;li&gt;Correlation &amp;amp; Relationships
Variance across datasets
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.var(a)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Correlation matrix&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.corrcoef(x, y)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Properties:&lt;br&gt;
Values ∈ [-1, +1]&lt;br&gt;
Diagonal = 1&lt;br&gt;
Measures linear relationship strength&lt;/p&gt;

&lt;p&gt;Used in:&lt;br&gt;
Feature selection&lt;br&gt;
Financial analysis&lt;br&gt;
Signal processing&lt;/p&gt;
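&lt;p&gt;A minimal sketch: a perfectly linear relationship yields an off-diagonal correlation of exactly 1:&lt;/p&gt;

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2 * x + 1               # perfectly linear in x

c = np.corrcoef(x, y)       # 2x2 matrix, ones on the diagonal
```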

&lt;ol&gt;
&lt;li&gt;Random Number Generation
Uniform distribution
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.random.rand()   # uniform on [0, 1)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Normal distribution&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.random.randn()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Mean = 0, Std = 1&lt;/p&gt;

&lt;p&gt;Used in:&lt;br&gt;
Natural processes&lt;br&gt;
ML weight initialization&lt;/p&gt;
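&lt;p&gt;A quick sanity check of the two distributions (seeded so the draws are repeatable):&lt;/p&gt;

```python
import numpy as np

np.random.seed(0)

u = np.random.rand(1000)    # uniform on [0, 1)
n = np.random.randn(1000)   # standard normal draws
```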

&lt;p&gt;Random integers&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.random.randint(1, 10, (3, 3))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sampling&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.random.choice(data, size=4, replace=True, p=None)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Supports:&lt;br&gt;
Replacement&lt;br&gt;
Probability bias&lt;/p&gt;
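&lt;p&gt;A hypothetical biased draw; the data and probabilities are made up for illustration:&lt;/p&gt;

```python
import numpy as np

np.random.seed(0)           # fixed seed for repeatability
data = np.array([10, 20, 30])

# with replacement, heavily biased toward the last element
sample = np.random.choice(data, size=4, replace=True, p=[0.1, 0.1, 0.8])
```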

&lt;p&gt;Shuffling&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.random.shuffle(a)       # in-place
np.random.permutation(a)   # copy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Reproducibility&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.random.seed(42)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Mandatory for:&lt;br&gt;
Experiments&lt;br&gt;
Debugging&lt;br&gt;
Scientific results&lt;/p&gt;
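&lt;p&gt;Seeding makes a random sequence repeatable, which is what the points above depend on:&lt;/p&gt;

```python
import numpy as np

np.random.seed(42)
a = np.random.rand(3)

np.random.seed(42)
b = np.random.rand(3)   # same seed, identical sequence
```

&lt;p&gt;Newer code often prefers np.random.default_rng(seed), which returns an independent generator object instead of mutating global state.&lt;/p&gt;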

&lt;ol&gt;
&lt;li&gt;File Handling&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;np.loadtxt()&lt;br&gt;
Fast&lt;br&gt;
Strict&lt;br&gt;
Numeric only&lt;br&gt;
Fails on missing values&lt;/p&gt;

&lt;p&gt;np.genfromtxt()&lt;br&gt;
Handles missing values&lt;br&gt;
Mixed dtypes&lt;br&gt;
Can fill NaNs&lt;/p&gt;
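&lt;p&gt;A small sketch using an in-memory stand-in for a CSV file with a missing field:&lt;/p&gt;

```python
import io
import numpy as np

csv = io.StringIO("1.0,2.0\n3.0,\n5.0,6.0")

data = np.genfromtxt(csv, delimiter=",")   # empty field becomes nan
mask = np.isnan(data)
```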

&lt;p&gt;NaN detection&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.isnan(a)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Memory-mapped arrays&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.memmap()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Used for datasets larger than RAM.&lt;/p&gt;

&lt;p&gt;Critical in:&lt;br&gt;
Big data&lt;br&gt;
Genomics&lt;br&gt;
Financial tick data&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Linear Algebra — The Core Power
Dot product / multiplication
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.dot(a, b)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Handles:&lt;br&gt;
Vector dot product&lt;br&gt;
Matrix multiplication&lt;br&gt;
Mixed 1D and 2D combinations&lt;/p&gt;
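&lt;p&gt;A quick sketch of both cases with hand-checkable numbers:&lt;/p&gt;

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

s = np.dot(v, w)        # 1D with 1D: scalar 4 + 10 + 18 = 32

A = np.eye(2)
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
C = np.matmul(A, B)     # strict matrix product, same as A @ B
```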

&lt;p&gt;Strict matrix multiplication&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.matmul(a, b)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Solving linear systems&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.linalg.solve(A, B)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Solves:&lt;br&gt;
AX = B&lt;/p&gt;
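&lt;p&gt;A worked 2x2 system (solution x = 2, y = 3, easy to verify by hand):&lt;/p&gt;

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)   # solves A @ x == b without forming inv(A)
```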

&lt;p&gt;Inverse &amp;amp; determinant&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.linalg.inv(A)
np.linalg.det(A)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Eigen decomposition&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.linalg.eig(A)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Returns:&lt;br&gt;
Eigenvalues&lt;br&gt;
Eigenvectors&lt;/p&gt;

&lt;p&gt;Used in:&lt;br&gt;
PCA&lt;br&gt;
Stability analysis&lt;br&gt;
Physics models&lt;/p&gt;
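&lt;p&gt;A diagonal matrix makes the eigenpairs obvious; each column of the eigenvector matrix satisfies the defining equation:&lt;/p&gt;

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

vals, vecs = np.linalg.eig(A)   # eigenvectors are the columns of vecs
```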

&lt;p&gt;Singular Value Decomposition&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;U, S, Vt = np.linalg.svd(A)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Interpretation (for A = U S Vt):&lt;br&gt;
U → output-space rotation (left singular vectors)&lt;br&gt;
S → importance (strength) of each component&lt;br&gt;
Vt → input-space rotation (right singular vectors)&lt;/p&gt;

&lt;p&gt;Foundation of:&lt;br&gt;
PCA&lt;br&gt;
Compression&lt;br&gt;
Noise reduction&lt;/p&gt;
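&lt;p&gt;Multiplying the three factors back together recovers the original matrix, which is a useful correctness check:&lt;/p&gt;

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

U, S, Vt = np.linalg.svd(A, full_matrices=False)

recon = U @ np.diag(S) @ Vt   # reconstruct A from its factors
```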

&lt;ol&gt;
&lt;li&gt;Norms — Magnitude &amp;amp; Distance
Vector norms
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;np.linalg.norm(v)            # L2
np.linalg.norm(v, ord=1)     # L1
np.linalg.norm(v, ord=np.inf)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Matrix norms&lt;br&gt;
Frobenius norm&lt;br&gt;
Max row / column sum&lt;br&gt;
Spectral norm (via SVD)&lt;/p&gt;

&lt;p&gt;Used in:&lt;br&gt;
Optimization&lt;br&gt;
Regularization&lt;br&gt;
Model stability&lt;/p&gt;
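&lt;p&gt;The classic 3-4-5 vector makes the three vector norms easy to verify:&lt;/p&gt;

```python
import numpy as np

v = np.array([3.0, 4.0])

l2 = np.linalg.norm(v)                 # Euclidean length: 5
l1 = np.linalg.norm(v, ord=1)          # sum of absolute values: 7
linf = np.linalg.norm(v, ord=np.inf)   # largest absolute entry: 4
```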

</description>
      <category>programming</category>
      <category>ai</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
