Victoria Johnston
AI Log #4: Vectorization & NumPy in Machine Learning

I am an experienced software engineer diving into AI and machine learning. Are you also learning/interested in learning?
Learn with me! I’m sharing my learning logs along the way.

Vectorization

Vectorization is a pivotal concept in machine learning; it enables us to handle multi-feature data with efficiency and speed. This learning log delves into vectorization, with a focus on NumPy's role in optimizing operations.

NumPy

NumPy is an essential Python library in machine learning known for its powerful handling of arrays and matrices. It's the go-to tool for vectorized operations.

Why use vectorization?

Vectorization optimizes data processing by handling entire arrays or datasets simultaneously, rather than element-by-element, as in traditional loops. This approach is especially powerful in NumPy due to its ability to leverage the parallel hardware capabilities of computers. Operations like the dot product are executed in parallel, markedly enhancing code efficiency and execution speed compared to sequential for loops.

The advantage of vectorization becomes even more pronounced in large datasets and complex operations, such as during the gradient descent process in machine learning, where it allows for the simultaneous computation of new feature values.
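To make this concrete, here is a minimal sketch of a single vectorized gradient-descent step for linear regression. The data, weights, and learning rate are all made-up illustration values; the point is that every weight is updated at once, with no per-feature loop.

```python
import numpy as np

# Hypothetical data: 5 examples, 4 features.
X = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 1.0, 0.0, 3.0],
              [0.0, 1.0, 1.0, 2.0],
              [3.0, 2.0, 1.0, 0.0],
              [1.0, 1.0, 1.0, 1.0]])
y = np.array([10.0, 6.0, 4.0, 6.0, 4.0])
w = np.zeros(4)   # weights, one per feature
b = 0.0           # bias
alpha = 0.01      # learning rate

# One vectorized gradient-descent step:
predictions = X @ w + b          # all 5 predictions at once, shape (5,)
errors = predictions - y         # shape (5,)
grad_w = X.T @ errors / len(y)   # gradient for all 4 weights, shape (4,)
grad_b = errors.mean()
w = w - alpha * grad_w           # every weight updated simultaneously
b = b - alpha * grad_b
```

The same update written with nested for loops would touch each weight and each example one at a time; the vectorized form hands the whole computation to NumPy in a few array expressions.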

Vectorization also helps simplify our code by reducing complexity. The simple NumPy commands below will show what I mean by this.

tl;dr - if we want our data processing to run faster and more efficiently, we need to use vectorization!

How to use vectorization?

In machine learning, models usually have multiple features. These features are often represented in vector format. Consider a model with four features. In such a case, the example data, xᵢ, consists of input features represented as a vector: [xᵢ₁, xᵢ₂, xᵢ₃, xᵢ₄]. Each of these features is associated with a corresponding weight ([w₁, w₂, w₃, w₄]), determined during the model's training phase.

To use vectorization effectively, we want to:

  1. Use NumPy arrays to represent these vectors: Vectorization is implemented using NumPy arrays, which enable efficient storage and manipulation of data.
  2. Avoid for loops: Traditional for loops process data element by element and are generally slower. Vectorization replaces these loops with array operations, significantly enhancing computational speed.
  3. Leverage dot product and other NumPy functions: The dot product is a common vectorized operation in machine learning. NumPy's various functions, such as np.dot, np.sum, and np.mean, are designed to operate on whole arrays, making them ideal for vectorized computations.
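The three points above can be seen side by side in a small sketch (the weights, features, and bias below are made-up values):

```python
import numpy as np

w = np.array([0.5, 1.5, -2.0, 1.0])   # hypothetical trained weights
x = np.array([4.0, 2.0, 1.0, 3.0])    # one example's features
b = 0.25                               # bias term

# Loop version: one multiply-add per feature.
f_loop = b
for j in range(len(w)):
    f_loop += w[j] * x[j]

# Vectorized version: NumPy computes the whole dot product in one call.
f_vec = np.dot(w, x) + b

print(f_loop, f_vec)  # both give the same prediction
```

With only four features the two run equally fast, but as the feature count grows into the thousands, the vectorized version pulls far ahead.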

Key Concepts in NumPy

  • Dimensionality: This refers to the number of indices required to select an element from an array. A one-dimensional array needs one index, a two-dimensional array requires two, and so forth.
  • Shape: This attribute describes an array's dimensions. A 2x3 2D array has a shape of (2,3), while a one-dimensional array (vector) might have a shape of (n, ) for 'n' elements. Individual elements, not being arrays, have a shape of ().
  • Multidimensional Arrays: NumPy arrays can extend beyond two dimensions, adding complexity and flexibility to the data structure and its manipulation.
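These three concepts are easy to see by inspecting the `ndim` and `shape` attributes of a few arrays:

```python
import numpy as np

v = np.array([1, 2, 3])   # 1-D array (vector): one index selects an element
M = np.zeros((2, 3))      # 2-D array (matrix): two indices needed
T = np.zeros((2, 3, 4))   # 3-D array: three indices needed

print(v.ndim, v.shape)    # dimensionality 1, shape (3,)
print(M.ndim, M.shape)    # dimensionality 2, shape (2, 3)
print(T.ndim, T.shape)    # dimensionality 3, shape (2, 3, 4)
print(v[0].shape)         # () — a single element is 0-dimensional
```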

Common Vectorized NumPy Operations

Creating Arrays

  • np.zeros(num): Generates an array of 'num' zeros, defaulting to float64. For example, np.zeros(3) yields an array shaped (3, ).
  • np.random.random_sample(shape): Produces an array of random float64 numbers drawn from a uniform distribution over [0, 1). The shape argument is a tuple that defines the dimensions of the output array. For example, calling random_sample((2, 3)) creates a 2x3 array.
  • np.random.rand(d0, d1, ...): Also generates random float64 numbers from a uniform distribution over [0, 1). Unlike random_sample, rand accepts each dimension as a separate argument rather than a tuple. For instance, rand(2, 3) directly creates a 2x3 array, offering a more intuitive way to specify the shape of multidimensional arrays.
  • np.arange(num): Creates an array filled with numbers from 0 up to num-1. For example, np.arange(4) yields [0, 1, 2, 3].
  • np.array([...]): Creates an array with specified values.
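Here are the creation functions above in action:

```python
import numpy as np

zeros = np.zeros(3)                        # [0. 0. 0.], shape (3,)
uniform = np.random.random_sample((2, 3))  # 2x3 array of floats in [0, 1)
uniform2 = np.random.rand(2, 3)            # same distribution, dims as separate args
counting = np.arange(4)                    # [0 1 2 3]
explicit = np.array([5, 10, 15])           # array built from a Python list

print(zeros, counting, explicit)
print(uniform.shape, uniform2.shape)
```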

Indexing Arrays

  • a[0]: Accesses the first element of the array; NumPy uses standard zero-based indexing. Attempting to access an index out of range will result in an error.
  • a[-1]: Accesses the last element of the array.

Slicing Arrays

  • a[:3]: Retrieves elements at indices up to (but not including) index 3.
  • a[3:]: Selects elements from index 3 to the end of the array.
  • a[:]: Accesses all elements within the array.
  • a[start:stop:step]: Provides a more flexible slicing option, allowing the selection of elements over a specified range and step.
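The indexing and slicing forms above, demonstrated on a small array:

```python
import numpy as np

a = np.arange(10)    # [0 1 2 3 4 5 6 7 8 9]

print(a[0])          # 0 — first element
print(a[-1])         # 9 — last element
print(a[:3])         # [0 1 2] — up to (not including) index 3
print(a[3:])         # [3 4 5 6 7 8 9] — from index 3 to the end
print(a[1:8:2])      # [1 3 5 7] — start 1, stop 8, step 2
```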

Single Array Operations

  • -a: Negates each element in the array.
  • np.sum(a): Calculates the sum of all elements in the array.
  • np.mean(a): Computes the average value of the elements in the array.
  • a**2: Squares each element in the array.
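Each of these single-array operations works on the whole array at once:

```python
import numpy as np

a = np.array([1, 2, 3, 4])

print(-a)          # [-1 -2 -3 -4] — negates every element
print(np.sum(a))   # 10
print(np.mean(a))  # 2.5
print(a**2)        # [ 1  4  9 16] — squares every element
```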

Element-Wise Array Operations

  • np.array([-1, 1]) + np.array([1, -1]): Results in an array [0, 0]. These operations are applied element-wise.
  • np.array([1, 2]) * 2: multiplies each element of the array by 2.
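The two element-wise examples above, as runnable code:

```python
import numpy as np

a1 = np.array([-1, 1])
a2 = np.array([1, -1])

summed = a1 + a2                # [0 0] — addition applied element-wise
doubled = np.array([1, 2]) * 2  # [2 4] — the scalar is broadcast to every element

print(summed, doubled)
```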

Dot Product

  • np.dot(a1, a2): Multiplies vectors element-wise and then sums the results, a fundamental operation in many machine learning algorithms. For example, using np.dot(a1, a2) where a1 = [1, 2] and a2 = [3, 4] would compute (1*3) + (2*4) = 11.
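The worked example above, in code:

```python
import numpy as np

a1 = np.array([1, 2])
a2 = np.array([3, 4])

result = np.dot(a1, a2)  # (1*3) + (2*4) = 11
print(result)
```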

Matrices

Matrices, or two-dimensional arrays, are integral in machine learning, and NumPy provides a comprehensive set of functions for their creation and manipulation.

Matrix Creation

  • np.zeros((m, n)): Generates a matrix of zeros with m rows and n columns.
  • np.random.random_sample((m, n)): Creates a matrix filled with random numbers.
  • np.array([[1], [2], [3]]): Creates a matrix with specified values.
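The same creation functions that build vectors build matrices when given two dimensions:

```python
import numpy as np

Z = np.zeros((2, 3))                 # 2x3 matrix of zeros
R = np.random.random_sample((2, 3))  # 2x3 matrix of random floats in [0, 1)
C = np.array([[1], [2], [3]])        # 3x1 matrix from nested lists

print(Z.shape, R.shape, C.shape)     # (2, 3) (2, 3) (3, 1)
```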

Reshaping Matrices

  • X.reshape(2, 3): Changes the shape of the matrix to 2 rows and 3 columns.
  • X.reshape(-1, 1): Reshapes the array into a single column; the -1 tells NumPy to infer that dimension from the array's size.
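A quick demonstration of both reshape calls:

```python
import numpy as np

X = np.arange(6)        # [0 1 2 3 4 5], shape (6,)

A = X.reshape(2, 3)     # 2 rows, 3 columns
col = X.reshape(-1, 1)  # -1 lets NumPy infer the row count: shape (6, 1)

print(A.shape, col.shape)
```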

Slicing Matrices

  • X[r, start:stop:step]: Accesses elements in row r within a specified range and step.
  • X[:, start:stop:step]: Retrieves elements across all rows within a specified range.
  • X[:, :]: Selects all elements in the matrix.
  • X[1, :]: Accesses all elements in row 1.
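The matrix-slicing forms above, on a 3x4 example:

```python
import numpy as np

X = np.arange(12).reshape(3, 4)
# [[ 0  1  2  3]
#  [ 4  5  6  7]
#  [ 8  9 10 11]]

r1 = X[1, 0:4:2]   # [4 6] — row 1, columns 0 and 2
cols = X[:, 1:3]   # columns 1-2 of every row, shape (3, 2)
row = X[1, :]      # [4 5 6 7] — all of row 1

print(r1, row)
print(cols)
```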

Advanced Matrix Operations

  • np.c_[...]: Concatenates arrays along their column boundaries.
  • np.ptp(arr, axis=0): Calculates the peak-to-peak (maximum - minimum) range of elements column-wise.
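Both operations in one small example, using made-up values:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

stacked = np.c_[a, b]             # stacks a and b as columns: shape (3, 2)
ranges = np.ptp(stacked, axis=0)  # max - min of each column: [2 2]

print(stacked)
print(ranges)
```

The `np.ptp` pattern is handy in feature scaling, where the per-column range is used to normalize each feature.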

These indexing, slicing, and operation techniques in NumPy enable efficient handling and manipulation of data in machine learning, demonstrating the practical benefits of vectorization.

I only showed a few examples. NumPy is an incredibly extensive library, and its mathematical functions go far beyond simple array operations!

Summary

Vectorization, particularly when utilizing NumPy, is an essential concept in machine learning. It dramatically streamlines and accelerates computations, making processing large datasets and executing complex operations much more efficient.

By leveraging NumPy's array-centric design and functions, we can perform bulk operations on data without the need for slow, iterative loops. This approach not only speeds up the execution but also makes the code more readable and concise.

Disclosure

I am taking Andrew Ng’s Machine Learning Specialization, and these learning logs contain some of what I learned from it. It’s a great course. I highly recommend it!
