
Mathematics for Machine Learning - Day 14


Second Weekly Summary

Another week, another summary. For this week, I felt most of it followed on from one another, from sets, to groups, to abelian groups, and so on. So keeping track of the progress was easy.

Then came ranks and vector mapping. Though these topics in themselves aren't the problem, I still don't understand the bigger picture of how they relate to the other subjects that have been discussed. So, today it's a review of Rank and Linear Mapping, starting off briefly with the other topics first.

Vector space and the whole shebang.

  1. Set, a collection of unique elements; these can be numbers, vectors, types of cats, or anything else. So long as the elements are unique, they form a set.

  2. Group, a set together with an operation defined on it that keeps some structure in the set, by fulfilling conditions such as closure, associativity, a neutral element, and inverse elements.

  3. Vector space, a group of vectors under addition together with scalar multiplication, where both operations fulfill certain conditions such as closure, a neutral element, distributivity, and so on.

  4. Linear Independence, if each column of a matrix is a pivot column, then the columns are linearly independent.

  5. Basis, a minimal set of vectors whose linear combinations can create the whole original vector space, while no basis vector can be created from the other basis vectors by the same method (see the small sketch after this list).
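Here's a minimal sketch of points 4 and 5 (my own example, using sympy's rref, which isn't from the book): it checks whether every column of a matrix is a pivot column, and therefore whether the columns form a basis.

```python
import sympy as sp

# Columns of this matrix are the candidate basis vectors of R^3.
A = sp.Matrix([
    [1, 0, 1],
    [0, 1, 1],
    [0, 0, 1],
])

# rref() returns the reduced row echelon form and the indices of the pivot columns.
rref_form, pivot_cols = A.rref()

print(pivot_cols)                 # (0, 1, 2) -> every column is a pivot column
print(len(pivot_cols) == A.cols)  # True -> linearly independent, so the columns form a basis of R^3
```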

Rank

The number of linearly independent columns in a matrix; if all of the columns are linearly independent, then that matrix has full rank.

I know what a rank is, I know how to identify one, but...

What's the use of it?

Right now, there are a few correlations, such as:

  1. Invertibility, a matrix is invertible if and only if the matrix has full rank
  2. Linear independence, technically this is the same as the first point, but it also stands on its own: if each column is a pivot column and all of them are unique, then that matrix has full rank.

Since this is one of the things that will be explained later and the impact isn't really known right now, I can't say much about it ヽ(ヅ)ノ
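Still, here's a small numpy check of the invertibility point (my own sketch, not from the book): the full-rank matrix can be inverted, while the rank-deficient one cannot.

```python
import numpy as np

full = np.array([[1.0, 2.0],
                 [3.0, 4.0]])       # columns are linearly independent
deficient = np.array([[1.0, 2.0],
                      [2.0, 4.0]])  # second column = 2 * first column

for name, M in [("full", full), ("deficient", deficient)]:
    rank = np.linalg.matrix_rank(M)
    print(name, "rank =", rank, "| full rank:", rank == min(M.shape))

print(np.linalg.inv(full))   # works fine
# np.linalg.inv(deficient)   # raises numpy.linalg.LinAlgError: Singular matrix
```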

Linear mapping

Linear mapping is a mapping applied to a vector that results in a vector as well, while preserving vector addition and scalar multiplication.

Mapping itself is fine, since it's just applying a function to a vector; this function can be a matrix or a more complex formula.
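As a tiny sketch of that (my own numpy example, not from the book), a matrix plays the role of the function, and applying it to a vector gives back another vector:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])  # the "function": a 2x2 matrix
x = np.array([1.0, 1.0])    # the vector being mapped

y = A @ x                   # the mapping: x -> A x
print(y)                    # [2. 3.] -- the result is again a vector
```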

But the parts I have a problem with, and want to try to explain more, are:

1. Injective

Ensures that each element being mapped results in a unique value in the resulting vector space as well; no two different inputs land on the same output.

2. Surjective

From what I understand, this just means that every element of the target space is reached by the mapping, possibly from more than one element of the source. So, for example, a mapping with the squaring function onto the non-negative numbers.

When a vector is squared element-wise, the resulting value 4 can come from both 2 and -2. Every target value is still reached, it just doesn't come from a unique input.

3. Bijective

When injective meets surjective.

How? So, we take the unique outputs from injective and the "every target element is reached" part from surjective. Put together, every element of the resulting vector space is mapped to by exactly one element of the original space.

So the resulting map returns unique values and misses nothing, which means it pairs the two spaces up one-to-one and can be reversed without losing anything.
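This is also where rank seems to connect back to linear mappings: for the map x -> Ax given by an m x n matrix A, the map is injective exactly when rank(A) = n and surjective exactly when rank(A) = m. Here's a small sketch of that check (my own numpy code, not from the book):

```python
import numpy as np

def describe(A):
    # Classify the linear map x -> A @ x using the rank of A.
    m, n = A.shape                   # A maps R^n into R^m
    rank = np.linalg.matrix_rank(A)
    injective = rank == n            # columns are linearly independent
    surjective = rank == m           # columns span all of R^m
    print(f"{m}x{n}, rank {rank}: injective={injective}, "
          f"surjective={surjective}, bijective={injective and surjective}")

describe(np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0]]))       # tall, full column rank -> injective, not surjective
describe(np.array([[1.0, 0.0, 1.0],
                   [0.0, 1.0, 1.0]]))  # wide -> surjective, not injective
describe(np.array([[1.0, 1.0],
                   [0.0, 1.0]]))       # square, full rank -> bijective (an isomorphism)
```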

Special cases for linear mapping

I'll be honest, aside from isomorphism, I'm not keen on learning more since it will be explained later on and it feels like a spoiler more than anything :D

So, I'll only summarize isomorphism. An isomorphism is a case of linear mapping that is bijective, and for finite dimensions it exists exactly when the dimension of the original vector space is the same as that of the resulting one (dim(V) = dim(W)).

What I always find surprising is that the authors call this "intuitive": it means that two isomorphic vector spaces are essentially the same space, since vectors can be transformed from one to the other and back without losing any information.
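A small sketch of that "nothing is lost" idea (my own numpy example): mapping with an invertible matrix and then mapping back with its inverse recovers the original vector exactly.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])             # invertible, so x -> A x is an isomorphism of R^2
x = np.array([5.0, -1.0])

mapped = A @ x                         # transform into the "other" space
recovered = np.linalg.inv(A) @ mapped  # transform back again

print(np.allclose(recovered, x))       # True -> no information was lost along the way
```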

That's it

There isn't much here that I can say I fully grasp in connection with the other topics yet, but that doesn't mean there isn't any benefit from it. So I can't wait to start moving forward again tomorrow, hope this week is useful!


Acknowledgement

I can't overstate this: I'm truly grateful for this book being open-sourced for everyone. Many people will be able to learn and understand machine learning on a fundamental level. Whether changing careers, demystifying AI, or just learning in general, this book offers immense value even for a fledgling composer such as myself. So, Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong, thank you for this book.

Sources:
Axler, S. (2015). Linear Algebra Done Right. Springer.
Deisenroth, M. P., Faisal, A. A., & Ong, C. S. (2020). Mathematics for Machine Learning. Cambridge University Press.
https://mml-book.com
