
Kyle Pena


Investigating the performance of np.einsum

TL;DR: tensordot mode (and therefore BLAS) is only activated for two-argument einsum when optimize=True

As a reader of my last blog post pointed out to me, np.einsum is considerably slower than np.matmul unless you turn on the optimize flag in the parameters list: np.einsum(..., optimize = True).

Being somewhat skeptical, I fired up a Jupyter notebook and did some preliminary tests. It's completely true - even for two-operand cases where, as I'll explain, optimize shouldn't make any difference at all!
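For reference, here's roughly the kind of test I ran. The shapes are illustrative, and the %timeit lines assume you're in a Jupyter/IPython cell:

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.random((100, 1000))   # C-order (row-major) by default
    b = rng.random((1000, 100))

    # In a notebook cell:
    # %timeit np.einsum('ij,jk->ik', a, b)                  # optimize defaults to False
    # %timeit np.einsum('ij,jk->ik', a, b, optimize=True)
    # %timeit np.matmul(a, b)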

For matrix multiplication of two C-order (aka row-major) matrices of varying dimensions, np.matmul is consistently about twenty times faster than np.einsum with optimize=False.

Matrix multiplication with optimize=False:

| M1 | M2 | np.einsum | np.matmul | np.einsum / np.matmul |
| --- | --- | --- | --- | --- |
| (100, 500) | (500, 100) | 0.765 | 0.045 | 17.055 |
| (100, 1000) | (1000, 100) | 1.495 | 0.073 | 20.554 |
| (100, 10000) | (10000, 100) | 15.148 | 0.896 | 16.899 |

With optimize=True, np.einsum is only about 1.5 times slower at worst! So, simply passing optimize=True results in a roughly 13x speedup. Why isn't optimize=True the default behavior?

Matrix multiplication with optimize=True:

| M1 | M2 | np.einsum | np.matmul | np.einsum / np.matmul |
| --- | --- | --- | --- | --- |
| (100, 500) | (500, 100) | 0.063 | 0.043 | 1.474 |
| (100, 1000) | (1000, 100) | 0.086 | 0.067 | 1.284 |
| (100, 10000) | (10000, 100) | 1.000 | 0.936 | 1.068 |

Why?

This is mysterious - optimize=True is about finding an optimal contraction order, and that should only matter when there are more than two operands. Yet our tests show a substantial speedup nonetheless.

By way of contrast, here's an example where optimize=True should make a difference - the chained dot product:

    X = np.einsum('ij,jk,kl,lm->im', a, b, c, d)

We have four operands. Due to associativity, all of these expressions are mathematically equivalent:

  • <a,<b,<c,d>>>
  • <<a,b>,<c,d>>
  • <<<a,b>,c>,d>
  • <a,<<b,c>,d>>
  • <<a,<b,c>>,d>

...and optimize picks the one that is likely to result in a speedup.
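If you're curious which order it picks, np.einsum_path will tell you without actually running the contraction. A quick sketch with made-up shapes:

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.random((10, 500))
    b = rng.random((500, 20))
    c = rng.random((20, 500))
    d = rng.random((500, 10))

    # Returns the contraction order plus a human-readable cost breakdown.
    path, info = np.einsum_path('ij,jk,kl,lm->im', a, b, c, d, optimize='optimal')
    print(path)   # e.g. ['einsum_path', (0, 1), (0, 1), (0, 1)]
    print(info)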

Maybe optimize is doing more than optimizing contraction order? Perhaps it is aware of the row-major vs. column-major layout of the matrices in memory and is proactive about it - say, picking a different matrix multiplication algorithm entirely when the second argument is in row-major order?

But no - np.einsum with optimize=False is at least 1.5 times slower with a column-major second argument than with a row-major one.
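For reference, one way to set up the column-major variant of this test is to convert the second operand with np.asfortranarray; a sketch (again, shapes are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.random((100, 1000))                      # C-order (row-major)
    b = np.asfortranarray(rng.random((1000, 100)))   # F-order (column-major) copy

    assert a.flags['C_CONTIGUOUS'] and b.flags['F_CONTIGUOUS']

    # %timeit np.einsum('ij,jk->ik', a, b)   # optimize=False
    # %timeit np.matmul(a, b)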

Matrix multiplication with optimize=False, column-major second argument:

| M1 | M2 | np.einsum | np.matmul | np.einsum / np.matmul |
| --- | --- | --- | --- | --- |
| (100, 500) | (500, 100) | 1.486 | 0.056 | 26.541 |
| (100, 1000) | (1000, 100) | 3.885 | 0.125 | 31.070 |
| (100, 10000) | (10000, 100) | 49.669 | 1.047 | 47.444 |

Going Deeper

Here's the relevant passage from the NumPy 1.12.0 release notes:

np.einsum now supports the optimize argument which will optimize the order of contraction. For example, np.einsum would complete the chain dot example np.einsum(‘ij,jk,kl->il’, a, b, c) in a single pass which would scale like N^4; however, when optimize=True np.einsum will create an intermediate array to reduce this scaling to N^3 or effectively np.dot(a, b).dot(c). Usage of intermediate tensors to reduce scaling has been applied to the general einsum summation notation. See np.einsum_path for more details.

Later release notes (NumPy 1.14.0) indicate that np.einsum was upgraded to use tensordot (which itself uses BLAS where appropriate). Aha!
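For a plain matrix product, the tensordot equivalent just contracts the last axis of the first operand against the first axis of the second, so a BLAS-backed tensordot can stand in for the two-operand einsum case. A quick sanity check:

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.random((100, 500))
    b = rng.random((500, 100))

    # Contract axis 1 of `a` with axis 0 of `b`: exactly matrix multiplication.
    td = np.tensordot(a, b, axes=(1, 0))
    assert np.allclose(td, a @ b)
    assert np.allclose(td, np.einsum('ij,jk->ik', a, b))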

If we read def einsum(*operands, out=None, optimize=False, **kwargs) in numpy/numpy/_core/einsumfunc.py, we'll come to this early-out logic almost immediately:

    # If no optimization, run pure einsum
    if optimize is False:
        if specified_out:
            kwargs['out'] = out
        return c_einsum(*operands, **kwargs)

Does c_einsum utilize tensordot? I doubt it. Later on in the code, we see the tensordot call that the 1.14 notes seem to be referencing:

    # Start contraction loop
    for num, contraction in enumerate(contraction_list):

        ...

        # Call tensordot if still possible
        if blas:

            ...

            # Contract!
            new_view = tensordot(
                *tmp_operands, axes=(tuple(left_pos), tuple(right_pos))
            )

So, here's what's happening:

  1. The contraction_list loop is executed if optimize is True - even in the trivial two operand case.
  2. tensordot is only invoked in the contraction_list loop.
  3. Therefore, tensordot (and therefore BLAS) is only invoked when optimize is True.

To me, this seems like a bug. IMHO, the "early-out" at the beginning of np.einsum should still detect if the operands are tensordot-compatible and call out to tensordot if possible. Then, we would get the obvious BLAS speedups even when optimize is False. After all, the semantics of optimize pertain to contraction order, not to usage of BLAS, which I think should be a given.
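To make that concrete, here's a rough user-space sketch of the kind of dispatch I mean - a hypothetical wrapper (not NumPy's actual code) that special-cases the plain matrix-multiplication subscripts and falls back to np.einsum for everything else:

    import numpy as np

    def einsum_with_blas_fastpath(subscripts, *operands, **kwargs):
        """Hypothetical wrapper: route the simple two-operand matmul case
        through tensordot (and therefore BLAS) even when optimize=False."""
        # Only the textbook 'ij,jk->ik' pattern is handled here; a real
        # implementation would detect tensordot-compatibility in general.
        if subscripts.replace(' ', '') == 'ij,jk->ik' and len(operands) == 2:
            return np.tensordot(operands[0], operands[1], axes=(1, 0))
        return np.einsum(subscripts, *operands, **kwargs)

    a, b = np.ones((100, 500)), np.ones((500, 100))
    assert np.allclose(einsum_with_blas_fastpath('ij,jk->ik', a, b),
                       np.einsum('ij,jk->ik', a, b))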

The benefit here is that anyone invoking np.einsum for operations that are equivalent to a tensordot invocation would get the appropriate speedups, which makes np.einsum a bit less dangerous from a performance point of view.

How does c_einsum actually work?

I spelunked some C code to check it out. The heart of the implementation is here.

After a great deal of argument parsing and parameter preparation, the axis iteration order is determined and a special-purpose iterator is prepared. Each yield from the iterator represents a different way to stride over all operands simultaneously.

    /* Allocate the iterator */
    iter = NpyIter_AdvancedNew(nop+1, op, iter_flags, order, casting, op_flags,
                               op_dtypes, ndim_iter, op_axes, NULL, 0);

Assuming certain special-case optimizations don't apply, an appropriate sum-of-products (sop) function is determined based on the datatypes involved:

    sop = get_sum_of_products_function(nop,
                        NpyIter_GetDescrArray(iter)[0]->type_num,
                        NpyIter_GetDescrArray(iter)[0]->elsize,
                        fixed_strides);

And then, this sum-of-products (sop) operation is invoked on each multi-operand stride returned from the iterator, as seen here:

        iternext = NpyIter_GetIterNext(iter, NULL);
        if (iternext == NULL) {
            goto fail;
        }
        dataptr = NpyIter_GetDataPtrArray(iter);
        stride = NpyIter_GetInnerStrideArray(iter);
        countptr = NpyIter_GetInnerLoopSizePtr(iter);
        needs_api = NpyIter_IterationNeedsAPI(iter);

        NPY_BEGIN_THREADS_NDITER(iter);
        NPY_EINSUM_DBG_PRINT("Einsum loop\n");
        do {
            sop(nop, dataptr, stride, *countptr);
        } while (!(needs_api && PyErr_Occurred()) && iternext(iter));

That's my understanding of how einsum works, which is admittedly still a little thin - it really deserves more than the hour I've given it.

It does confirm my suspicion, however, that np.einsum acts like a generalized, gigabrain version of the grade-school method of matrix multiplication. Ultimately, it delegates to a series of "sum of products" operations which rely on "striders" moving through the operands - not too different from what you do with your fingers when you learn matrix multiplication in school.
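If it helps to picture that, here's a deliberately naive Python analogue (nothing like the real C code, which is generalized and heavily optimized): for each output cell, stride across the shared index and accumulate a sum of products.

    import numpy as np

    def naive_sum_of_products_matmul(a, b):
        """Grade-school matrix multiplication: one sum of products per output cell."""
        n, k = a.shape
        k2, m = b.shape
        assert k == k2
        out = np.zeros((n, m))
        for i in range(n):           # stride over rows of `a`
            for j in range(m):       # stride over columns of `b`
                for p in range(k):   # the shared (contracted) index
                    out[i, j] += a[i, p] * b[p, j]
        return out

    a = np.arange(6).reshape(2, 3)
    b = np.arange(12).reshape(3, 4)
    assert np.allclose(naive_sum_of_products_matmul(a, b), np.einsum('ij,jk->ik', a, b))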

Summary

So why is np.einsum generally faster when you call it with optimize=True? There are two reasons.

The first (and original) reason is it tries to find an optimal contraction path. However, as I pointed out, that shouldn't matter when we have just two operands, as we do in our performance tests.

The second (and newer) reason is that when optimize=True, even in the two operand case it activates a codepath that calls tensordot where possible, which in turn tries to use BLAS. And BLAS is as optimized as matrix multiplication gets!

So, the two-operands speedup mystery is solved!
