Abstract
Geometric Empirical Modeling (GEM) is a new branch of non-linear mathematics for solving a wide variety of problems better and faster than existing methods in statistics, linear regression, predictive analytics, linear algebra, function approximation, optimization, inversion, process control, engineering design, neural networks, artificial intelligence, and machine learning. This paper is a brief introduction to GEM's capabilities, performance, and applications.
Introduction
Mathematics progresses in steps, with a major advancement occurring every few centuries or so. For example, analytic geometry and then calculus provided solutions to a whole host of problems that were difficult or impossible to solve before those advancements. GEM is at a similar scale, providing simple solutions to a wide variety of problems that are currently difficult or impossible to solve.
Imagine that the Fourier transform did not exist, but someone discovered that it was possible to approximate a signal by combining cosine functions with different frequencies, phases, and amplitudes. Experts could examine a signal, make an educated guess about which frequencies it might contain, then iteratively modify the phase and amplitude of each frequency until arriving at a rough approximation of the desired signal after months of computation. Then, suppose someone like Fourier came along with the ability to compute a perfect solution for any signal in seconds. Would his solution be instantly accepted and implemented, or would he face decades of skepticism and criticism, just as Isaac Newton experienced after developing calculus? This is the problem with all paradigm shifts, especially in science, mathematics, and programming. As Max Planck observed, "A scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die." [1]
Suppose the only way to solve for the x vector in Ax=b were to iteratively modify x and multiply Ax until reaching a rough approximation of b. Then suppose someone developed LU decomposition with back-substitution, which factors A in O(N^3), solves for x by back-substitution in O(N^2) with no iterations, and computes an exact solution in seconds. Then GEM comes along and can solve for x in O(1), instantaneously. If A is singular or ill-conditioned, Singular Value Decomposition (SVD) can solve for x in O(N^3). Again, GEM can find the same solution for x in O(1), instantaneously.
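For readers who want to reproduce the conventional baselines in this comparison, both classical approaches can be sketched in a few lines of NumPy. This is a sketch of the standard algorithms being compared against, not of GEM itself:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
A = rng.standard_normal((N, N))
x_true = rng.standard_normal(N)
b = A @ x_true

# LU-based direct solve: O(N^3) factorization plus O(N^2) back-substitution
x_lu = np.linalg.solve(A, b)

# SVD-based solve: O(N^3), but also handles singular or ill-conditioned A
U, s, Vt = np.linalg.svd(A)
x_svd = Vt.T @ ((U.T @ b) / s)

assert np.allclose(x_lu, x_true) and np.allclose(x_svd, x_true)
```

For a singular A, the SVD route would drop the zero singular values and return the minimum-norm solution, which is the comparison point used later in this paper.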
Incredible? Unbelievable? Consider Neural Networks and Deep Learning.
Imagine building a complex neural network with hundreds of hidden layers, thousands of parameters, and millions of training examples. Imagine using not a hundred-billion-dollar AI factory, but a single laptop with a single GPU. Imagine computing a perfect zero-error or best-fit solution on the first attempt, in a single GPU function call, in milliseconds instead of months on a supercomputer. Imagine developing a neural network that closely mimics the structure, function, and appearance of a biological neural network, using neurons with non-linear combinations of inputs, thousands of neurons per layer, and thousands of hidden layers structured in two hemispheres, that not only "learns" but "thinks", in modern conventional terminology. Enter the world of Geometric Empirical Modeling, a mathematical technique for solving neural networks in closed form, as accurate, efficient, and direct as solving a matrix or a Fourier transform.
Figure 1. AI Neural Network generated using GEM
Relevant History of AI
The term AI was coined in the mid-1950s by John McCarthy, a mathematician and computer scientist. A few years later he created the programming language LISP. [2]
AI took a turn in the mid-1960s with the development of ELIZA at MIT, in just 200 lines of code. People spent hours talking to the program, convinced they were speaking to an actual person, even after being shown the code. [3] This set the stage for defining AI as a way to give computers the appearance of human intelligence.
ELIZA proved that even a simple AI program could be very convincing and effective. Perhaps this was the inspiration for a college prank, again from MIT, where three students took ELIZA to the next level with faster computing and more sophisticated programming. [4] These students wrote a program that stitched together key words and phrases to automatically generate technical papers complete with charts, diagrams, tables, equations, and references. The papers appeared sophisticated but were actually nothing more than word salad. They nevertheless passed peer review and editorial screening and were published in prestigious journals. The students were reprimanded, but not before making a mockery of the entire scientific publication process.
Large Language Models (LLMs) and chatbots such as ChatGPT were the next logical progression in AI. If distinguished journals could be tricked, why not the general public, investors, corporations, and governments? Extravagant claims and hype made matters worse, such as promises to replace the equivalent of 300 million full-time jobs. [5] Glorified autocompletion and text-stitching was touted as sentient, capable of thought and reasoning. Errors were branded as hallucinations, another form of thought. These systems could pass a multiple-choice bar exam while at the same time failing out of pre-school.
Hype, although highly misleading, is entertaining, mysterious, and attracts both attention and funding. The hope is that some miracle will come along that will make those exaggerated claims and promises a reality. When this doesn't happen, disillusionment sets in, and everyone hops on the next bandwagon to come along.
AI Neural Networks have their share of exaggeration and hype. They are claimed to be numerical models of the human brain. This claim raises expectations that AI Neural Networks will one day surpass human intelligence, if they have not done so already. The same claim is used to excuse why AI Neural Networks are so inefficient to train, error-prone, inaccurate, and may never learn a given problem at all: since we do not fully understand how the human brain works, we supposedly cannot fully understand how AI Neural Networks learn and provide answers. The claim thus eliminates all potential to understand AI Neural Networks, and if a mathematician discovers how to instantly build and train them, the discovery can be dismissed out of hand, because claiming to understand AI Neural Networks would imply unlocking the secrets of the human brain, adding more fuel to the fire of high expectations, exaggeration, and hype. That is not the intention of this paper. Although the claims in this paper are reminiscent of a paradigm shift, the claims are real. There are no unvalidated exaggerations or hype.
How Artificial Neural Networks Are Used in Practice
Although rarely discussed, this is how neural networks are used to solve problems in supervised learning, such as function approximation.
- Generate a training set of inputs and outputs x => y, where x is the independent vector, and y is dependent, or caused by, x
Make sure to put all inputs in x and all outputs in y. This is not always trivial, or even possible. For example, does heart disease cause diabetes, or does diabetes cause heart disease? Are they both caused by something else? Do they cause effects in other factors, are they caused by other factors, or are they completely unrelated?
Remove all correlations. For example, if weight and BMI (body mass index) are both inputs, they are correlated. If weight increases, then BMI should also increase. So, either replace BMI with height, or remove weight.
Remove all training examples with unknowns, errors, and outliers. What is an error? If a person’s age is recorded as 21, but the age is actually 21.8, is that an error?
Remove inputs and outputs that are extraneous, random, or constant.
Reduce dimensionality. For example, if there are 3 inputs for low, medium, and high, combine these into a single number as 0, 0.5, and 1.
Reduce non-linearity. Use pre-processing, transforms, or other computations to make non-linear data more linear and monotonic. Neural networks can get stuck and can fail to learn non-linear data.
- Determine the number of hidden layers and the number of nodes in each layer.
Should each layer have the same number of nodes, or different?
If there are too many layers and nodes, interpolation may wildly oscillate, referred to as over-fitting. If there are too few, the output surface may be too linear or flat, referred to as under-fitting.
Since most neural networks are not adaptive, or don’t have the ability to add or remove layers and nodes during training, the number of layers and nodes must be predetermined. This is a trial-and-error process that may require several attempts and thorough testing for each attempt.
- Determine a learning rate.
Most neural networks do not have an adaptive learning rate, so this must be pre-determined and adjusted by experiment.
Too high and convergence will be unstable, and training will have to be restarted. Too low and convergence will take too long.
- Train
This requires numerous iterations, back-propagating errors to adjust weights and offsets.
Training frequently does not converge to an adequate solution, referred to as a local minimum. This means starting over from step 1.
- Test and evaluate the solution
Check the accuracy at the training examples. Training almost never maps the training example inputs to outputs with zero error or with statistical best-fit surfaces.
Check interpolation accuracy. A separate test set is used to evaluate how well the network performs on examples it has not encountered. Care must be taken to ensure there are no errors in the testing data. Testing data must also be extensive to ensure good coverage, especially in critical regions.
Training never converges to a zero-error or best-fit solution. This means that solutions to function approximation will never match the accuracy of equations, or problems that can be solved with statistics and linear regression. The evaluation is usually considered successful if any good performance can be detected at any location.
If evaluation is not successful or there is potential for improvement, start over from step 1.
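The workflow above can be condensed into a minimal sketch: a fixed architecture, a hand-picked learning rate, and thousands of back-propagation iterations. This is a plain NumPy toy with hyper-parameters chosen by the usual trial and error, not any particular framework's API:

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: a toy training set, y = x^2 on [0, 1]
X = np.linspace(0, 1, 20).reshape(-1, 1)
Y = X ** 2

# Steps 2-3: architecture and learning rate fixed in advance by trial and error
hidden, lr = 8, 0.5
W1 = rng.standard_normal((1, hidden)); b1 = np.zeros(hidden)
W2 = rng.standard_normal((hidden, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Step 4: iterative training by back-propagation of errors
for step in range(10000):
    H = sigmoid(X @ W1 + b1)            # forward pass
    P = H @ W2 + b2
    err = P - Y
    dH = (err @ W2.T) * H * (1 - H)     # backward pass through the hidden layer
    W2 -= lr * (H.T @ err) / len(X); b2 -= lr * err.mean(0)
    W1 -= lr * (X.T @ dH) / len(X); b1 -= lr * dH.mean(0)

# Step 5: evaluate -- the fit is close after many iterations, but not zero-error
mse = float(np.mean((sigmoid(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
assert mse < 1e-2
```

Even on this one-input toy problem, the result after ten thousand iterations is an approximation, which is the behavior described in the evaluation step above.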
Developing neural networks requires considerable time and effort to formulate the problem, train, and test. Results are relatively poor for function approximation or statistical analysis. Success is rare and often exaggerated.
However, these challenges and disadvantages are easily dismissed. Neural networks model the human brain. People require considerable time to learn, or may never learn how to solve certain problems or perform certain tasks. People make mistakes. Computers are very good at math and computation, but people are not. No one understands how the brain works, and no one can explain how artificial neural networks solve problems either.
Need for AI
LLMs and chatbots are not the entirety of AI. There are other branches of AI with different goals and capabilities that come with their own sets of advantages and disadvantages.
These types of AI developed out of a need for improved analysis and predictive analytics of real-world data, beyond the current capabilities of statistics. There is a need for multi-variate empirical function approximation with corresponding function optimization. New mathematics is required that goes beyond current capabilities in matrix operations, numerical modeling, process control, signal analysis, and engineering design.
AI Neural Networks were developed not so much as a simulation of biological neural networks, but as a new approach to solving problems and achieving superior solutions. Although AI Neural Networks are extremely inaccurate, inefficient, and impractical, they have gained popularity by obtaining crude solutions to multi-variate non-linear empirical problems. In other words, AI Neural Networks can analyze and model real-world data better than any other existing technology.
What is now needed is an accurate, efficient, and practical method for analyzing real-world data.
Structure of a GEM AI Neural Network
GEM is a mathematical technique that can, among other things, build and solve an AI Neural Network given a set of empirical examples and a desired error tolerance. The structure includes nodes, links, layers, connection weights, activation functions, and neuron offsets, and achieves a zero-error or best-fit solution with no training. Although the same structure could be presented in other mathematical terms, such as a non-linear combination of vectors using basis functions, the terminology of the AI Neural Network has been preserved because the structure is excellent for real-time multi-variate data analysis and modeling. At the same time, defining this structure in purely mathematical terms has advantages, as it avoids the mysterious aura of biological neural networks: unlike biological neural networks, these AI Neural Networks can be fully understood, described, and directly solved with mathematics. This opens up an entire field of mathematics and provides new approaches to problem solving and modeling beyond existing methods.
Artificial neural networks are thought to mimic the structure and function of biological networks. Although biological neural network structure is complex, many aspects of the basic structure of biological neural networks are being explored and new insights are gained with further research. For example, biological neurons are far more complicated than a simple summation of input signals coupled with a threshold. “Recent studies show that individual neurons utilize a wealth of nonlinear mechanisms to transform synaptic input into output firing.” [6] Neurons have mechanisms to not only sum input signals, but also multiply them and combine them using a variety of non-linear operations.
Biological neural networks not only have small-scale structure, such as individual neurons and synaptic connections, but also large-scale structure, such as the partially symmetric two-hemisphere arrangement seen in mammals, reptiles, and birds. GEM AI neural networks employ this left- and right-brain structure, as well as non-linear combinations of neuron input signals, as necessary components for instantaneous construction, training, learning, generalization, interpolation, extrapolation, and thinking.
GEM AI neural networks are deep neural networks, in the sense that they consist of thousands of concurrent hidden layers. Each layer is constructed and trained sequentially, but all layers operate simultaneously. On a GPU, thousands of layers require the same computation time as a single layer. Each layer provides increasing complexity, similar to the structure of a Fourier Transform with frequencies extending from DC-offset to slightly higher and higher frequencies.
Another interesting aspect of GEM AI neural networks is that the network size and structure are largely independent of the training set size and complexity. A training set with one input, one output, and two training examples will result in essentially the same neural network size and structure as a training set with hundreds of inputs and outputs and millions of training examples. This is similar to a Fourier Transform: regardless of the number of frequencies contained in a signal, the size and structure of the resulting transform remain the same.
Biological neural networks can still function even when large portions of the brain are removed [7]. GEM has the interesting property that it still functions even when up to 90% of the structure is removed. The basic trends remain, but accuracy drops with removal until all outputs converge to an average value. Each concurrent layer reduces error, and layers are added until reaching machine precision accuracy. This process is similar to Boosting [8].
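The boosting-like, layer-by-layer error reduction described above can be illustrated with a simple stand-in (plain NumPy, not GEM itself): each "layer" removes the projection of the current residual onto one Fourier-style harmonic, so the error shrinks as layers are added, and a heavily truncated model still follows the overall trend:

```python
import numpy as np

x = np.linspace(0, 1, 64, endpoint=False)
y = np.exp(-x) * np.sin(6 * np.pi * x)       # target samples

def basis(k):
    # k = 0 is the DC offset; then cos/sin pairs of increasing frequency
    if k == 0:
        return np.ones_like(x)
    f = (k + 1) // 2
    return np.cos(2 * np.pi * f * x) if k % 2 else np.sin(2 * np.pi * f * x)

coeffs, residual, errors = [], y.copy(), []
for k in range(63):                           # add one "layer" at a time
    b = basis(k)
    c = residual @ b / (b @ b)                # projection onto this layer
    coeffs.append(c)
    residual = residual - c * b
    errors.append(float(np.linalg.norm(residual)))

# Each added layer reduces the residual; truncating layers degrades gracefully
assert all(e2 <= e1 + 1e-12 for e1, e2 in zip(errors, errors[1:]))
assert errors[-1] < 0.1 * errors[0]

# A model with 90% of the layers removed keeps only the low-frequency trend
trend = sum(c * basis(k) for k, c in enumerate(coeffs[:6]))
```

Dropping the later layers, as in the last line, leaves a coarse approximation that converges toward the average value, which mirrors the robustness-to-removal property described above.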
The neural network size and structure is also largely independent of the neuron activation function. The activation function may be a step, linear, sigmoid, tanh, Gaussian, or even sines and cosines, and the GEM AI neural network will have approximately the same size and structure and generate approximately the same results.
GEM Training
Training refers to the construction of the entire network structure, including number of hidden layers, size of each hidden layer, all connection weights and offsets, and the non-linear combination of neuron inputs with activation functions that result in optimal interpolation and extrapolation properties. Conventional AI Neural Networks often require trial-and-error to find the number of hidden layers and the size of each hidden layer that results in a solution with an acceptable fit to the training set. Connection weights and offsets are determined using a type of back-propagation gradient descent algorithm with an optimal learning rate that usually requires multiple iterations and may suffer from local minima. Conventional AI Neural Networks require careful data representation and organization of the training set to reduce non-linearities, dimensionality, input correlations, and variance clusters to speed up convergence, avoid local minima, and simplify the problem so that the neural network can provide adequate results.
Training GEM AI neural networks requires no trial-and-error, no learning rate, no iterations, no fitting analysis, no local minima avoidance methods, and no data representation tricks. GEM instantaneously computes the AI Neural Network solution from a training set, similar to how a Fourier Transform instantaneously computes the amplitude and phase of each frequency given samples from a signal.
GEM training can result in solutions at machine precision error, or solutions with best-fit surfaces for data with scatter. The fit is controlled by a tolerance factor specifying allowable error.
GEM Thinking
Thinking is analogous to functional inversion: determining the inputs that satisfy weighted constraints on other inputs and outputs. Training is: "Given x, solve for y". Thinking is: "Given y, solve for x". The result of thinking may not be unique, so constraints may be added to locate a unique solution, such as "Given y, solve for the minimum x".
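As a hedged illustration of the idea (not GEM's actual mechanism), inversion of any monotonic forward model can be carried out by searching the input space; `forward` below is a hypothetical stand-in for a trained network evaluated in the forward direction:

```python
# Hypothetical forward model standing in for a trained network: x -> y
def forward(x):
    return x ** 3 + x          # monotonic, so the inverse is unique

# "Thinking": given a desired y, search for the x that produces it
def invert(y_target, lo=-10.0, hi=10.0, iters=100):
    for _ in range(iters):     # bisection over the forward model
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if forward(mid) < y_target else (lo, mid)
    return 0.5 * (lo + hi)

x = invert(10.0)               # "given y = 10, solve for x"
assert abs(forward(x) - 10.0) < 1e-6
```

When the inverse is not unique, an added constraint such as "the minimum x" selects one of the candidate solutions, as described in the text.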
Thinking has a variety of applications in function optimization, process control, and matrix operations. For example, consider the matrix equation Ax=b. GEM can be trained with a set of (x => b) training examples so that, given any x, GEM can generate the correct b without performing the Ax multiplication. Thinking allows GEM to generate x given a desired b, without performing a matrix inversion. If the solution is not unique, thinking allows GEM to generate the minimum-norm x for a desired b, which is the same solution produced by singular value decomposition (SVD). The difference between GEM and matrix operations is computation time. Matrix-vector multiplication is O(N^2), but GEM can generate a solution in O(1). Matrix inversion and singular value decomposition are O(N^3), but GEM is O(1). This means that GEM can perform the multiplication for a 2048x2048 A matrix with a speedup of 32 million times, and the singular value decomposition of the same matrix with a speedup of 140 billion times, including training time.
GEM Data Correction
GEM has support for automatic data correction: filling in missing entries, detecting and correcting outliers (data entry errors or lie detection), fixing jitter caused by round-off or scatter, and finding the minimum number of training examples required to interpolate and extrapolate the entire training set. Cleaning up and repairing both input and output data can significantly improve accuracy, remove skewing effects, improve optimization, and extract more information from the data.
This capability is not feasible without instantaneous training and thinking. Data correction requires rebuilding and retraining the neural network multiple times and using inversion to determine which inputs or outputs require correction. This can be a time-consuming process even with rapid training and thinking.
GEM Predictive Analytics
Prediction can be complex, especially when multiple correlated inputs are involved. For example, as a person ages, his weight, blood sugar, and blood pressure may increase, decrease, or stay the same. However, certain changes are more likely than others. GEM determines the direction and magnitude of the change by fitting a multi-dimensional surface through a large number of data points. Once an accurate surface is generated, prediction consists of moving along the gradient, either up or down, depending on the direction of travel.
Constraints on inputs must also be considered. For example, blood pressure and pulse may be allowed to change with age, but gender can be set as a constant. Height and number of children can be constrained to be monotonically increasing.
GEM can predict both forward and backward in time. Accuracy of unknown future predictions can be compared to the accuracy of estimating the past. For example, if the model can travel backward in time and accurately estimate weight, blood sugar levels, and blood pressure, before and after child-birth, and estimate the age of the woman at each child-birth, then future predictions of weight, blood sugar, blood pressure, and possible future child-births can be trusted.
The following figure shows how predictive analytics works on a 2D example. If X increases and Y is low, then Y and Z will increase, as shown in image (a) on the lower left. If X increases and Y is high, then Y will increase and Z will decrease, as shown in image (a) on the upper left. However, at the exact center, Y may either decrease to the front or increase to the back with equal probability; the direction becomes deterministic when slightly off-center. The prediction behavior is similar when Y increases, as shown in image (b). Predicting backward in time would be analogous to X or Y decreasing.
Figure 2. Possible prediction directions on an XOR surface at different locations and gradients. The left image (a) shows direction of travel when X increases. The right image (b) is when Y increases.
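The gradient-following view of prediction can be checked directly on a surface like the one in Figure 2. A minimal sketch, assuming the bilinear XOR-style surface z = x + y - 2xy on the unit square (an assumed stand-in for the figure's surface):

```python
# Assumed bilinear XOR-style surface: z = x + y - 2xy on [0, 1]^2
def z(x, y):
    return x + y - 2 * x * y

def grad(x, y):
    # analytic partial derivatives (dz/dx, dz/dy)
    return (1 - 2 * y, 1 - 2 * x)

# Prediction = follow the surface gradient from the current state
gx_low = grad(0.5, 0.1)[0]    # moving in +x with y low: z increases
gx_high = grad(0.5, 0.9)[0]   # moving in +x with y high: z decreases
assert gx_low > 0 and gx_high < 0

# At dead center the gradient vanishes: the direction is ambiguous,
# matching the equal-probability behavior described for Figure 2
assert grad(0.5, 0.5) == (0.0, 0.0)
```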
GEM vs EDM
Empirical Dynamic Modeling (EDM) is a popular method used for forecasting and predictive analytics [9]. EDM is a step above a table lookup. Instead of simply using the closest point value in a table, EDM finds a set of nearby points and performs a linear simplex interpolation using only those points. Instead of a stepwise functional fit, EDM generates a piece-wise linear fit, with discontinuities in derivatives when moving from one set of nearby points to another.
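In one dimension the contrast is easy to sketch (illustrative stand-ins, not either method's actual implementation): a table lookup returns the value of the single nearest stored point, while an EDM-style scheme interpolates linearly between the nearest stored points:

```python
import bisect

# Stored samples of an underlying function y = x^2
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [x * x for x in xs]

def table_lookup(x):
    # stepwise: value of the single closest stored point
    i = min(range(len(xs)), key=lambda j: abs(xs[j] - x))
    return ys[i]

def edm_like(x):
    # piece-wise linear: the simplex (here, segment) through nearby points
    i = bisect.bisect_right(xs, x)
    i = max(1, min(i, len(xs) - 1))
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] * (1 - t) + ys[i] * t

x = 0.4
true = x * x
assert abs(edm_like(x) - true) < abs(table_lookup(x) - true)
```

Moving x across a breakpoint switches the segment, which is where the derivative discontinuities mentioned above appear.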
GEM pre-computes the function approximation and can generate an exact fit or a statistical non-linear best-fit through all the data points. GEM includes all the points and performs both interpolation and extrapolation, providing higher speed and superior surface fitting.
EDM seems to have an advantage because only a few nearby points are required for each time-step evaluation, saving considerable memory. However, all the points must still be stored on disk and accessed when searching for nearby points.
GEM has the following advantages:
- Exact fit or smooth best-fit interpolation and extrapolation, depending on data error tolerance.
- Smooth interpolation and extrapolation, with peaks and troughs of correct shape and location for function minimization or maximization.
- Determines the minimum number of points that can interpolate and extrapolate all the other points, to omit unnecessary points.
- Detects and corrects errors in the data, such as outliers, unknowns, scatter, round-off, or jitter.
- Closed-form mathematical solution in O(1) for instantaneous computation on the GPU.
- Correctly handles correlated inputs for predictive analytics.
- Supports empirical inversion, or the ability to determine inputs that give desired outputs. For example, GEM could find the input that results in the maximum value in the following figure:
Figure 3. Interpolation Comparison of Table Lookup, EDM, and GEM. GEM will generate a peak at the correct location, as shown with the red arrow.
GEM Case Studies
Proof by induction is a simple and direct method for evaluating the performance of a neural network: analyze the performance in one, two, and then three dimensions, and the rest will follow. GEM supports data with thousands to millions of dimensions, but those cases are not included here, to reduce complexity and avoid hype. The purpose of these case studies is to demonstrate the capabilities and superior performance of GEM on simple, straightforward problems that are beyond the capabilities of the most advanced algorithms in statistics, linear algebra, linear regression, process control, AI, and machine learning. The same inductive argument indicates that GEM will continue to have superior performance at higher dimensions as well.
1D Line
Fitting a straight line through two points is trivial for linear regression but is a very challenging problem for an AI Neural Network, especially when using non-linear activation functions. The poor performance of AI Neural Networks and Deep Learning is one reason this simple training set is rarely published or discussed in the literature.
Figure 4. GEM AI Neural Network line fit through two points with interpolation and extrapolation
The above image shows how GEM performs when given two points. It has zero error at the training points and near perfect linear interpolation and extrapolation. A large neural network structure with nearly a thousand hidden concurrent layers is required, as it is very challenging to fit a perfectly straight line through two points using non-linear sigmoid activation functions.
Despite the large size of 918 concurrent hidden layers, 1838 nodes, and 3676 links, the neural network requires 4 milliseconds to build and train on the GPU and has an evaluation time of 10 nanoseconds.
1D Curve
GEM supports any number of inputs and outputs within memory constraints. In this case, there is one input and two outputs.
Figure 5. GEM AI Neural Network fitting curves through two sets of points. Unknown values are displayed in the data grid in yellow. Originally these values were blank in the data.
This case study shows the accuracy of filling in unknowns given a single input and two outputs. The accuracy of filling in unknowns and correcting outliers increases with the number of inputs and outputs. As shown above, even when one-third of the input and output entries are unknown, and only a single value in a training example is known, the remaining values can be determined with surprising accuracy.
1D Outliers
Outliers can skew or otherwise corrupt results, so they should either be removed or corrected. Removing outliers may result in loss of valuable data. GEM can automatically detect and correct outliers, thus preserving information in other inputs and outputs that may be correct.
Outliers not only affect training but also testing. Outliers in testing data can result in rejecting a correctly trained AI Neural Network, or accepting an incorrectly trained AI Neural Network that by chance matches an outlier.
Outliers may be the result of data entry errors, uncorrected unknown or missing values, deceptive answers to survey questions, or measurement errors. Correcting outliers can significantly improve AI Neural Network accuracy, predictions, and data analysis, depending on the amount of data and the application.
The following image considers a case in which apparent outliers should be taken into serious consideration. Using a low tolerance, GEM will attempt to fit a surface through all training points, if possible. This may result in small oscillations in regions near the apparent outliers.
Figure 6. GEM AI Neural Network fitting a curve through outliers at zero error.
The following image shows the fit when GEM uses a high tolerance. The outliers are effectively ignored, but small skewing is still apparent.
Figure 7. GEM AI Neural Network fitting a curve through outliers at high tolerance.
The following image shows automatic detection and correction of outliers, shown in red. GEM can be instructed to correct the outlier inputs with a higher priority than the outputs, or vice versa. With more inputs and outputs, GEM will make the least number of corrections required, which means that it can determine which inputs or outputs are incorrect and make the proper corrections.
Figure 8. GEM AI Neural Network correcting outliers.
The following image shows automatic feature point determination in the presence of outliers. Only the end-points are required to interpolate and extrapolate all the points in the set.
Figure 9. GEM AI Neural Network with feature points, after outlier corrections.
1D Scatter
Small errors within the error tolerance are referred to as scatter, or jitter. Jitter can be caused by round-off or small measurement errors, and may also be the result of missing variables or low dimensionality. Jitter, like outliers, can skew results. GEM can correct jitter and significantly reduce its effects.
The following image shows the GEM fit using a low tolerance. GEM will fit a curve through all the points with minimum error.
Figure 10. GEM AI Neural Network fit to scatter with low tolerance.
The following image shows the GEM fit using a high tolerance. This line corresponds to the statistical linear-regression best-fit.
Figure 11. GEM AI Neural Network fit to scatter with high tolerance.
The following image shows corrected jitter and the selected feature points.
Figure 12. GEM AI Neural Network fit to corrected scatter with feature points.
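The low-tolerance versus high-tolerance behavior above has a familiar stand-in: interpolating every jittered sample versus fitting a straight line. A NumPy sketch, offered as an analogy for the tolerance control rather than as GEM's method:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 9)
y = 2 * x + 1 + rng.normal(0, 0.05, size=x.size)   # a line plus jitter

# "Low tolerance": a degree n-1 polynomial passes through every sample
tight = np.polyfit(x, y, x.size - 1)
# "High tolerance": a best-fit line, which ignores the jitter
loose = np.polyfit(x, y, 1)

assert np.allclose(np.polyval(tight, x), y, atol=1e-4)   # zero error at samples
assert abs(loose[0] - 2) < 0.3 and abs(loose[1] - 1) < 0.2  # recovers the trend
```

The tight fit reproduces the jitter exactly, while the loose fit corresponds to the linear-regression line of Figure 11.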
1D Anscombe's Quartet
Anscombe's Quartet [10] is a set of four different point distributions that share the same means, standard deviations, and statistical best-fit lines.
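The quartet's shared statistics can be verified directly from the published values; in each dataset the y mean is 7.50 and the best-fit line is y = 3.00 + 0.500x:

```python
# Anscombe's quartet (values from the original 1973 paper)
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = [
    (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
     [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
]

def fit_line(xs, ys):
    # ordinary least-squares slope and intercept
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

for xs, ys in quartet:
    slope, intercept = fit_line(xs, ys)
    assert abs(sum(ys) / len(ys) - 7.50) < 0.01   # same mean
    assert abs(slope - 0.500) < 0.01              # same best-fit slope
    assert abs(intercept - 3.00) < 0.1            # same intercept
```

Despite these identical summary statistics, the four point distributions are visibly different, which is what the low-tolerance fit below exposes.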
These datasets were combined into a single training set, but several entries were left blank to make the combination possible. GEM was instructed to fill in the unknowns.
The following image shows the GEM fit using a high tolerance. All four lines are essentially equivalent and correspond to the best-fit linear regressions.
Figure 13. GEM AI Neural Network fit to Anscombe's Quartet with high tolerance.
The following image shows the GEM fit using a low tolerance. All four curves show the respective differences of the point distributions.
Figure 14. GEM AI Neural Network fit to Anscombe's Quartet with low tolerance.
1D Parabola
One use of neural networks is for function approximation. The following image shows a GEM fit to evenly spaced points on a parabola.
Figure 15. GEM AI Neural Network fit to a parabola.
The following image shows that only 3 feature points are required to interpolate and extrapolate all the points on a parabola.
Figure 16. GEM AI Neural Network with feature points fit to a parabola.
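That three feature points suffice follows from the algebra: a parabola has three coefficients, so three samples determine it exactly. A small sketch using Lagrange interpolation (illustrative, not GEM's feature-point algorithm):

```python
# Three points determine a parabola exactly; every other sample is then
# recovered by evaluating the interpolating polynomial.
def fit_parabola(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    def L(x):
        # Lagrange form of the unique degree-2 interpolant
        return (y1 * (x - x2) * (x - x3) / ((x1 - x2) * (x1 - x3))
              + y2 * (x - x1) * (x - x3) / ((x2 - x1) * (x2 - x3))
              + y3 * (x - x1) * (x - x2) / ((x3 - x1) * (x3 - x2)))
    return L

f = fit_parabola((-1, 1), (0, 0), (2, 4))   # three samples of y = x^2
# Interpolation and extrapolation of all other samples is exact
assert all(abs(f(x) - x * x) < 1e-9 for x in [-3, -0.5, 1, 1.5, 5])
```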
2D Cone
The following image shows a GEM surface fit to random points on a cone. GEM has similar performance in lower dimensions as well as higher dimensions.
Figure 17. GEM AI Neural Network fit to a cone.
2D Logic Functions
The following image shows interpolation properties of GEM when fit to logic functions, such as AND, OR, and XOR.
Figure 18. GEM AI Neural Network fit to 2D logic functions.
Conclusion
GEM can solve problems in statistics, predictive analytics, process control, optimization, engineering design, matrix operations, signal and image processing, machine learning, and AI. It provides solutions orders of magnitude faster and better than existing systems or methods, at high accuracy, especially with automatic data correction. GEM was implemented in GpuScript, free and open source [11]. GEM is available as a GpuScript library on a case-by-case basis.
GEM is not just the end of AI as we now know it, but the end of numerous other fields as well. It is not, however, the end of mathematics. GEM is a new beginning: a paradigm shift in mathematics that will revolutionize the world.
References
[1] Planck's principle. https://en.wikipedia.org/wiki/Planck%27s_principle
[2] John McCarthy, American mathematician and computer scientist https://www.britannica.com/biography/John-McCarthy
[3] The First Ever AI Chatbot: ELIZA (1966) https://youtu.be/8jGpkdPO-1Y
[4] How three MIT students fooled the world of scientific journals https://news.mit.edu/2015/how-three-mit-students-fooled-scientific-journals-0414
[5] AI could replace equivalent of 300 million jobs - report https://www.bbc.com/news/technology-65102150?no_head=1
[6] Neuronal Arithmetic https://www.nature.com/articles/nrn2864
[7] How the Brain Can Rewire Itself After Half of It Is Removed https://www.nytimes.com/2019/11/19/health/brain-removal-hemispherectomies-scans.html
[8] Boosting https://en.wikipedia.org/wiki/Boosting_(machine_learning)
[9] Empirical Dynamic Modeling: Explaining Empirical Dynamic Modelling using Verbal, Graphical and Mathematical Approaches
[10] Anscombe's Quartet https://en.wikipedia.org/wiki/Anscombe%27s_quartet
[11] GpuScript https://github.com/Alan-Rock-GS/GpuScript