DEV Community

Anushlinux

A brief intro on the correlation between AI and Maths.

Artificial intelligence problems fall into two general categories: search problems and representation of intercon...blah blah blah...some big and smart words...regression, vectors, probabilities, calculus, conjectures, etc...

This is a basic layout of what you will see on every website when you google "Maths and AI" ->
A reputed maths blog website

Always the same....


Even without looking closely, you can see that the mathematical topics correlated with AI are probabilities, vectors, calculus, linear algebra, etc. Given that's true, and since there are multiple other resources you can refer to if you want to dig deep into the actual mathematics behind AI development, let's dive into something else.

Let's learn how this correlation formed.

History of machines

We all know that we human beings have been using machines since a very early age of humankind. But we have also long used mathematics to solve problems with machines. The Romans built the abacus to do basic calculations, but that's boring. What about modern-age computers?

Well, those we started using around 300-400 years back. Now hold on: yes, electronic computers were first invented around 1930-40, but computers were not always electronic. They were actually HUMAN!

In the earlier days, a computer was a person who computed any form of data they were trained on, and there used to be hundreds of such people in a single room carrying out all sorts of complex calculations. So you could say that, in a weird sense, model training and data clustering in machine learning were intuitively applied long before the invention of modern machine learning algorithms.

The most basic and earliest use of computers was producing tables. You most probably know about log tables; in order to compute a sine or cosine, you used a computer-generated table to look up the result. Such tables are still very much used in computer science, we just call them databases now.

Many important theorems in maths, like the prime number theorem, were first conjectured with the help of computers (and even though Gauss was a bit of a computer himself, he still relied on other computers for most of his calculations).

Proof assistance using machines

Now let's jump to the year 1976. This was the year in which machines were first used to correctly prove a long-unsolved mathematical conjecture known as the Four Color Theorem. It basically states that -

no more than four colors are required to color the regions of any map so that no two adjacent regions have the same color

To speak freely, the machine mostly assisted in the proof of the theorem, and it was not until 2005 that the theorem was fully formalized in a proof-assistant language called Coq (recently renamed Rocq). Now, after years of improvement and optimization, the real center of formalization has been Lean's mathlib library. It contains both programming infrastructure and mathematics, as well as tactics that use the former to help develop the latter.
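To give a feel for what "formalized" means here, a minimal sketch of machine-checked proofs in Lean 4 (using lemmas and tactics from Lean's standard library, not mathlib's deeper machinery):

```lean
-- A machine-checked proof: the checker accepts this only if the
-- term really is a proof of the stated proposition.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- Tactics build proofs step by step; `simp` automates routine rewriting.
example (n : Nat) : n + 0 = n := by simp
```

Every step is verified by the kernel, which is why formalizations like the 2005 Four Color Theorem proof are trusted far more than unchecked computer output.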

Machine Learning

The most well-known use case of machine learning has been using neural networks to predict the solution to some given question. If I were to take an example of one of the latest uses of neural networks in maths, it would be in knot theory. A knot is basically a loop of string in space that is closed and not tied at any point. A basic question in knot theory is: given two knots, in what sense are they equivalent to one another? By training a neural network on a database of around 2 million knots, researchers were able to uncover a relationship between a knot's geometric properties and the algebraic invariants, such as polynomial values, attached to it.

This breakthrough demonstrates the power of neural networks in tackling abstract mathematical problems. Not only did it provide new insights into knot theory, but it also opens up possibilities for AI to assist in fields like topology, combinatorics, and algebraic geometry (the same topics we mentioned above in the website layout :))
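The learn-from-examples idea can be shown in miniature. This is a toy illustration, not the actual knot-theory pipeline: we invent a hypothetical "invariant" that is secretly a linear function of two numeric features, generate a small dataset, and fit a model by gradient descent until it rediscovers the hidden rule from data alone.

```python
import random

random.seed(0)

def hidden_invariant(x1, x2):
    # The "ground truth" the model must rediscover from examples alone.
    return 3.0 * x1 - 2.0 * x2 + 1.0

# A small synthetic dataset of (features, invariant) pairs.
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
labels = [hidden_invariant(x1, x2) for x1, x2 in data]

# Linear model y = w1*x1 + w2*x2 + b, trained by gradient descent
# on the mean-squared error.
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.1
n = len(data)
for _ in range(500):
    g1 = g2 = gb = 0.0
    for (x1, x2), y in zip(data, labels):
        err = (w1 * x1 + w2 * x2 + b) - y
        g1 += err * x1
        g2 += err * x2
        gb += err
    w1 -= lr * g1 / n
    w2 -= lr * g2 / n
    b -= lr * gb / n

print(round(w1, 2), round(w2, 2), round(b, 2))
```

The fitted weights converge toward the hidden 3, -2, 1. The real knot work used deep networks and genuinely non-obvious invariants, but the training loop is the same idea at scale.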

Large Language Models

Now, with the recent emergence of OpenAI's latest LLM, GPT o1, the model was able to solve around 83% of problems on a maths olympiad qualifying exam and reached around the 89th percentile, with a rating of about 1800 (fine-tuned), on Codeforces competitive programming contests.

The catch is that o1 was able to perform at such high efficiency only when the questions presented to it were rephrased in a simplified, easier-to-understand way, and when the model was given the freedom of making 10,000 submissions per problem. But this is itself a huge achievement, as its predecessor GPT-4o was able to solve only around 1%.

Now if we talk about GPT-4 or 3.5 or any other LLMs, we find that things which we humans find hard are often very simple for an AI to compute, but things which are simpler for humans, AI struggles with a lot (as of now). For example, earlier AI models used to hallucinate if they didn't know the correct answer to a problem.


Also, the mathematical approach an AI takes to provide a solution to any problem is to guess, at each step, what the most natural thing to say is, and then proceed with it. This source of error has been reduced a lot with larger training datasets and more advanced preprocessing of the data.

So where are we now??

To be fair, we are very far from an AI that can act as a full-fledged proof assistant on its own or resolve any major open mathematical problem. Even though we are continuously pushing the threshold of the creative capacity of AI, it's still not fully reliable.

Now, to end: in the future there will still be the old-fashioned way of solving or proving any problem, because without us there won't be anyone to guide these AIs, unless we also know how to do it ourselves.

But we would be able to do a lot of things which we can't do right now
~Terence Tao
