<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alan Rock</title>
    <description>The latest articles on DEV Community by Alan Rock (@alan_rock_gs).</description>
    <link>https://dev.to/alan_rock_gs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2587768%2F8fb0a0c2-c81e-4164-8b9d-2fc88f11feae.jpg</url>
      <title>DEV Community: Alan Rock</title>
      <link>https://dev.to/alan_rock_gs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alan_rock_gs"/>
    <language>en</language>
    <item>
      <title>Geometric Empirical Modeling: The End of AI</title>
      <dc:creator>Alan Rock</dc:creator>
      <pubDate>Mon, 06 Jan 2025 14:46:10 +0000</pubDate>
      <link>https://dev.to/alan_rock_gs/geometric-empirical-modeling-the-end-of-ai-527</link>
      <guid>https://dev.to/alan_rock_gs/geometric-empirical-modeling-the-end-of-ai-527</guid>
      <description>&lt;h2&gt;
  
  
  Abstract
&lt;/h2&gt;

&lt;p&gt;Geometric Empirical Modeling (GEM) is a new branch of non-linear mathematics for solving a wide variety of problems better and faster than any existing methods, including statistics, linear regression, predictive analytics, linear algebra, function approximation, optimization, inversion, process control, engineering design, neural networks, artificial intelligence, and machine learning. This paper is a brief introduction to GEM capabilities, performance, and applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Mathematics progresses in steps, with a major advancement occurring every few centuries. For example, Calculus, built on the foundation of analytic geometry, provided solutions to a whole host of problems that were difficult or impossible to solve before it. GEM is an advancement at a similar scale, providing simple solutions to a wide variety of problems that are currently difficult or impossible to solve.&lt;/p&gt;

&lt;p&gt;Imagine that the Fourier transform did not exist, but someone discovered that it was possible to approximate a signal by combining cosine functions with different frequencies, phases, and amplitudes. Experts could examine a signal, make an educated guess about which frequencies the signal might contain, then iteratively modify the phase and amplitude of each frequency until arriving at a rough approximation of the desired signal after months of computation. Then, suppose someone like Fourier came along with the ability to compute a perfect solution for any signal in seconds. Would his solution be instantly accepted and implemented, or would he face decades of skepticism and criticism, just as Isaac Newton experienced after developing Calculus? This is the problem with all paradigm shifts, especially in science, mathematics, and programming. "A scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die." (Max Planck) [1]&lt;/p&gt;

&lt;p&gt;Suppose the only way to solve for the x vector in Ax=b were to iteratively modify x and multiply A * x until reaching a rough approximation of b. Then suppose someone developed LU Decomposition with back-substitution, factoring A in O(N^3) so that each subsequent solve for x takes only O(N^2), with no iterations and a perfect solution in seconds. Then GEM comes along and can solve for x in O(1), instantaneously. If A is singular, Singular Value Decomposition (SVD) can still solve for x in O(N^3). Again, GEM can find the same solution for x in O(1), instantaneously.&lt;/p&gt;
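
&lt;p&gt;GEM's O(1) method is not given here, but the conventional baselines it is compared against are easy to state. As a point of reference, the following Python sketch solves Ax=b with LU-based elimination (numpy.linalg.solve) and with an SVD-based least-squares routine (numpy.linalg.lstsq):&lt;/p&gt;

```python
import numpy as np

# Conventional baselines for solving Ax = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])

# LU decomposition with back-substitution: O(N^3) to factor, O(N^2) per solve
x_lu = np.linalg.solve(A, b)

# SVD-based least squares: also handles singular A, returning a
# minimum-norm solution
x_svd, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)

print(x_lu)   # [2. 3.]
print(x_svd)  # [2. 3.]
```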

&lt;p&gt;Incredible? Unbelievable? Consider Neural Networks and Deep Learning. &lt;br&gt;
Imagine building a complex neural network with hundreds of hidden layers, thousands of parameters, and millions of training examples. Imagine using not a hundred-billion-dollar AI factory, but a single laptop with a single GPU. Imagine computing a perfect zero-error or best-fit solution on the first attempt, in a single GPU function call, in milliseconds instead of months on a supercomputer. Imagine developing a neural network that closely mimics the structure, function, and appearance of a biological neural network, using neurons with non-linear combinations of inputs, thousands of neurons per layer, and thousands of hidden layers structured in two hemispheres, a network that not only "learns" but "thinks", in modern conventional terminology. Enter the world of Geometric Empirical Modeling, a mathematical technique for solving neural networks in closed form, as accurate, efficient, and direct as solving a matrix equation or a Fourier transform. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1f0g3hel4bjwuckltu7o.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1f0g3hel4bjwuckltu7o.jpg" alt="Image description" width="800" height="477"&gt;&lt;/a&gt;Figure 1. AI Neural Network generated using GEM&lt;/p&gt;

&lt;h2&gt;
  
  
  Relevant History of AI
&lt;/h2&gt;

&lt;p&gt;The term AI was coined in the mid-1950s by John McCarthy, a mathematician and computer scientist. A few years later he created the programming language LISP. [2] &lt;/p&gt;

&lt;p&gt;AI took a turn in the mid-1960s with the development of ELIZA at MIT, in just 200 lines of code. People spent hours talking to the program, convinced they were speaking to an actual person, even after being shown the code. [3] This set the stage for defining AI as a way to give computers the appearance of human intelligence. &lt;/p&gt;

&lt;p&gt;ELIZA proved that even a simple AI program could be very convincing and effective. Perhaps this was the inspiration for a college prank, again from MIT, where three students took ELIZA to the next level with higher speed computing and more sophisticated programming. [4] These students wrote a program that stitched together key words and phrases to automatically generate technical papers complete with charts, diagrams, tables, equations, and references. The papers appeared to be very sophisticated but were actually nothing more than word-salad. The papers passed through peer review and the editors and were published in prestigious journals. The students were reprimanded, but not before making a mockery of the entire scientific publication process.&lt;/p&gt;

&lt;p&gt;Large Language Models (LLMs) and chatbots such as ChatGPT were the next logical progression in AI. If distinguished journals could be tricked, why not the general public, investors, corporations, and governments? Extravagant claims and hype made matters worse, such as promising to replace 300 million full-time jobs. [5] Glorified autocompletion and text-stitching was touted as sentient, capable of thought and reasoning. Errors were branded as hallucinations, another form of thought. These systems were capable of passing a multiple-choice bar exam while at the same time failing out of pre-school. &lt;/p&gt;

&lt;p&gt;Hype, although highly misleading, is entertaining, mysterious, and attracts both attention and funding. The hope is that some miracle will come along that will make those exaggerated claims and promises a reality. When this doesn't happen, disillusionment sets in, and everyone hops on the next bandwagon to come along. &lt;/p&gt;

&lt;p&gt;AI Neural Networks have their share of exaggeration and hype. They claim to be numerical models of the human brain. This claim raises expectations that AI Neural Networks will one day surpass human intelligence, if they have not done so already. It also excuses why they are so inefficient to train, error-prone, inaccurate, and may never learn a given problem at all: since we do not fully understand how the human brain works, we supposedly cannot fully understand how AI Neural Networks learn and provide answers. The claim thus eliminates any potential to understand AI Neural Networks, so if a mathematician discovers how to instantly build and train them, the discovery can be dismissed out of hand. Claiming to understand AI Neural Networks implies unlocking the secrets of the human brain, adding more fuel to the fire of high expectations, exaggeration, and hype. That is not the intention of this paper. Although the claims in this paper are reminiscent of a paradigm shift, the claims are real. There are no unvalidated exaggerations, and there is no hype.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Artificial Neural Networks are used in Practice
&lt;/h3&gt;

&lt;p&gt;Although rarely discussed, this is how neural networks are used to solve problems in supervised learning, such as function approximation.  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Generate a training set of inputs and outputs x =&amp;gt; y, where x is the independent vector, and y is dependent, or caused by, x &lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Make sure to put all inputs in x and all outputs in y. This is not always trivial or even possible. For example, does heart disease cause diabetes, does diabetes cause heart disease, or are both caused by something else? Do they affect other factors, are they affected by other factors, or are they completely unrelated? &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Remove all correlations. For example, if weight and BMI (body mass index) are both inputs, they are correlated. If weight increases, then BMI should also increase. So, either replace BMI with height, or remove weight. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Remove all training examples with unknowns, errors, and outliers. What is an error? If a person’s age is recorded as 21, but the age is actually 21.8, is that an error? &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Remove inputs and outputs that are extraneous, random, or constant. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reduce dimensionality. For example, if there are 3 binary inputs representing low, medium, and high, combine them into a single input taking the values 0, 0.5, and 1. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reduce non-linearity. Use pre-processing, transforms, or other computations to make non-linear data more linear and monotonic. Neural networks can get stuck and can fail to learn non-linear data. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
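
&lt;p&gt;The correlation check described above can be sketched numerically. In this hypothetical example, BMI is derived from weight, so the two columns are strongly correlated and one of the pair should be dropped; the 0.8 threshold is illustrative:&lt;/p&gt;

```python
import numpy as np

# Hypothetical data: BMI is computed from weight, so the two are correlated.
rng = np.random.default_rng(0)
weight = rng.normal(70.0, 10.0, 500)
height = rng.normal(1.70, 0.05, 500)
bmi = weight / height**2

r = np.corrcoef(weight, bmi)[0, 1]
print(round(float(r), 2))              # strongly positive correlation

# Keep BMI only if |r| stays below a chosen threshold (0.8 is illustrative)
keep_bmi = bool(np.less(abs(r), 0.8))
print(keep_bmi)                        # False: drop BMI (or drop weight)
```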

&lt;ol start="2"&gt;
&lt;li&gt;Determine the number of hidden layers and the number of nodes in each layer.
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Should each layer have the same number of nodes, or different? &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If there are too many layers and nodes, interpolation may wildly oscillate, referred to as over-fitting. If there are too few, the output surface may be too linear or flat, referred to as under-fitting. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Since most neural networks are not adaptive, meaning they cannot add or remove layers and nodes during training, the number of layers and nodes must be predetermined. This is a trial-and-error process that may require several attempts, with thorough testing for each attempt. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Determine a learning rate. &lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Most neural networks do not have an adaptive learning rate, so this must be pre-determined and adjusted by experiment. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Too high and convergence will be unstable, and training will have to be restarted. Too low and convergence will take too long. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Train &lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;This requires numerous iterations, back-propagating errors to adjust weights and offsets.  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Training frequently converges to an inadequate solution, referred to as a local minimum. This means starting over from step 1. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Test and evaluate the solution &lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Check the accuracy at the training examples. Training almost never maps the training example inputs to outputs with zero error or with statistical best-fit surfaces. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check interpolation accuracy. A separate test set is used to evaluate how well the network performs on examples it has not encountered. Care must be taken to ensure there are no errors in the testing data. Testing data must also be extensive to ensure good coverage, especially in critical regions. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Training rarely converges to a zero-error or best-fit solution. This means that solutions to function approximation will rarely match the accuracy of closed-form equations, or of problems that can be solved with statistics and linear regression. The evaluation is usually considered successful if any good performance can be detected at any location. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If evaluation is not successful or there is potential for improvement, start over from step 1. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
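
&lt;p&gt;The workflow above can be condensed into a minimal sketch. The hidden-layer size, learning rate, and iteration count below are exactly the arbitrary, trial-and-error choices the steps describe, not recommended values:&lt;/p&gt;

```python
import numpy as np

# Step 1: a toy training set (inputs X, targets y)
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (64, 1))
y = X**2

hidden = 8          # Step 2: hidden layer size, chosen by trial and error
lr = 0.5            # Step 3: learning rate, also chosen by trial and error
W1 = rng.normal(0.0, 1.0, (1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 1.0, (hidden, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                    # Step 4: iterative back-propagation
    h = sigmoid(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * h * (1.0 - h)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

h = sigmoid(X @ W1 + b1)                 # Step 5: evaluate the fit
mse = float(((h @ W2 + b2 - y)**2).mean())
print(mse)                               # small, but not exactly zero
```

The final error is small yet nonzero, matching the observation in step 5 that iterative training almost never reaches a zero-error fit.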

&lt;p&gt;Developing neural networks requires considerable time and effort to formulate the problem, train, and test. Results are relatively poor for function approximation or statistical analysis. Success is rare and often exaggerated.  &lt;/p&gt;

&lt;p&gt;However, these challenges and disadvantages are easily dismissed. Neural networks model the human brain. People require considerable time to learn, or may never learn how to solve certain problems or perform certain tasks. People make mistakes. Computers are very good at math and computation, but people are not. No one understands how the brain works, and no one can explain how artificial neural networks solve problems either.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Need for AI
&lt;/h3&gt;

&lt;p&gt;LLMs and chatbots are not the entirety of AI. There are other branches of AI with different goals and capabilities, that come with their own sets of advantages and disadvantages.&lt;/p&gt;

&lt;p&gt;These types of AI have developed due to a need for improved analysis and predictive analytics of real-world data, beyond the current capabilities of statistics. There is a need for multi-variate empirical function approximation with corresponding function optimization. New mathematics are required that go beyond current capabilities in matrix operations, numerical modeling, process control, signal analysis, and engineering design.&lt;/p&gt;

&lt;p&gt;AI Neural Networks were developed, not so much as a simulation of biological neural networks, but as a new approach to solving problems and achieving superior solutions. Although AI Neural Networks are extremely inaccurate, inefficient and impractical, they have gained popularity by obtaining crude solutions to multi-variate non-linear empirical problems. In other words, AI Neural Networks can analyze and model real-world data better than any other existing technology.&lt;/p&gt;

&lt;p&gt;What is now needed is an accurate, efficient, and practical method for analyzing real-world data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Structure of a GEM AI Neural Network
&lt;/h2&gt;

&lt;p&gt;GEM is a mathematical technique that can, among other things, build and solve an AI Neural Network given a set of empirical examples and a desired error tolerance. The structure includes nodes, links, layers, connection weights, activation functions, and neuron offsets, achieving a zero-error or best-fit solution with no training. Although the same structure could be presented in other mathematical terms, such as a non-linear combination of vectors using basis functions, the terminology of the AI Neural Network has been preserved because the structure is excellent for real-time multi-variate data analysis and modeling. Defining this structure in purely mathematical terms nevertheless has several advantages, as it avoids the mysterious nature of biological neural networks. Unlike biological neural networks, AI Neural Networks can be fully understood, described, and directly solved with mathematics. This opens up an entire field of mathematics and provides new approaches to problem solving and modeling beyond existing methods.&lt;/p&gt;

&lt;p&gt;Artificial neural networks are thought to mimic the structure and function of biological networks. Although biological neural network structure is complex, many aspects of the basic structure of biological neural networks are being explored and new insights are gained with further research. For example, biological neurons are far more complicated than a simple summation of input signals coupled with a threshold. “Recent studies show that individual neurons utilize a wealth of nonlinear mechanisms to transform synaptic input into output firing.” [6] Neurons have mechanisms to not only sum input signals, but also multiply them and combine them using a variety of non-linear operations.&lt;/p&gt;

&lt;p&gt;Biological neural networks not only have small-scale structure, such as individual neurons and synaptic connections, but also large-scale structure, such as the partially symmetric two-hemisphere organization seen in mammals, reptiles, and birds. GEM AI neural networks employ this left-and-right-brain structure, as well as non-linear combinations of neuron input signals, as necessary components for instantaneous construction, training, learning, generalization, interpolation, extrapolation, and thinking. &lt;/p&gt;

&lt;p&gt;GEM AI neural networks are deep neural networks, in the sense that they consist of thousands of concurrent hidden layers. Each layer is constructed and trained sequentially, but all layers operate simultaneously. On a GPU, thousands of layers require the same computation time as a single layer. Each layer adds complexity, similar to the structure of a Fourier Transform, with frequencies extending from the DC offset to progressively higher frequencies.&lt;/p&gt;

&lt;p&gt;Another interesting aspect of GEM AI neural networks is that the network size and structure are largely independent of the training set size and complexity. A training set with one input, one output, and two training examples will result in essentially the same neural network size and structure as a training set with hundreds of inputs and outputs and millions of training examples. This is similar to a Fourier Transform: regardless of the number of frequencies contained in a signal, the size and structure of the resulting transform remain essentially the same.&lt;/p&gt;

&lt;p&gt;Biological neural networks can still function even when large portions of the brain are removed [7]. GEM has the interesting property that it still functions even when up to 90% of the structure is removed. The basic trends remain, but accuracy drops with removal until all outputs converge to an average value. Each concurrent layer reduces error, and layers are added until reaching machine precision accuracy. This process is similar to Boosting [8].&lt;/p&gt;
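
&lt;p&gt;GEM's layer-by-layer construction is not given here, but the Boosting analogy [8] can be illustrated generically: each added stage fits the residual error left by the stages before it, so error falls as stages accumulate and the basic trends survive even if later stages are removed. The decision-stump learner and shrinkage factor below are illustrative stand-ins, not GEM's actual components:&lt;/p&gt;

```python
import numpy as np

# Generic residual boosting with decision stumps (illustrative only).
x = np.linspace(0.0, 1.0, 100)
y = np.sin(2.0 * np.pi * x)

model = np.zeros_like(y)
for stage in range(200):
    residual = y - model
    candidates = []
    for t in np.linspace(0.05, 0.95, 19):
        mask = np.greater(x, t)
        pred = np.where(mask, residual[mask].mean(), residual[~mask].mean())
        sse = float(((residual - pred)**2).sum())
        candidates.append((sse, pred))
    sse, pred = min(candidates, key=lambda c: c[0])
    model = model + 0.5 * pred        # shrinkage of 0.5 per added stage

mse = float(((y - model)**2).mean())
print(mse)                            # error shrinks as stages accumulate
```

Removing the later stages of such a model degrades accuracy gradually while the coarse trend remains, which parallels the robustness-to-removal property described above.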

&lt;p&gt;The neural network size and structure is also largely independent of the neuron activation function. The activation function may be a step, linear, sigmoid, tanh, Gaussian, or even sines and cosines, and the GEM AI neural network will have approximately the same size and structure and generate approximately the same results.&lt;/p&gt;

&lt;h2&gt;
  
  
  GEM Training
&lt;/h2&gt;

&lt;p&gt;Training refers to the construction of the entire network structure, including number of hidden layers, size of each hidden layer, all connection weights and offsets, and the non-linear combination of neuron inputs with activation functions that result in optimal interpolation and extrapolation properties. Conventional AI Neural Networks often require trial-and-error to find the number of hidden layers and the size of each hidden layer that results in a solution with an acceptable fit to the training set. Connection weights and offsets are determined using a type of back-propagation gradient descent algorithm with an optimal learning rate that usually requires multiple iterations and may suffer from local minima. Conventional AI Neural Networks require careful data representation and organization of the training set to reduce non-linearities, dimensionality, input correlations, and variance clusters to speed up convergence, avoid local minima, and simplify the problem so that the neural network can provide adequate results.&lt;/p&gt;

&lt;p&gt;Training GEM AI neural networks requires no trial-and-error, no learning rate, no iterations, no fitting analysis, no local minima avoidance methods, and no data representation tricks. GEM instantaneously computes the AI Neural Network solution from a training set, similar to how a Fourier Transform instantaneously computes the amplitude and phase of each frequency given samples from a signal.&lt;/p&gt;

&lt;p&gt;GEM training can result in solutions at machine precision error, or solutions with best-fit surfaces for data with scatter. The fit is controlled by a tolerance factor specifying allowable error.&lt;/p&gt;
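
&lt;p&gt;A tolerance-controlled trade-off between exact interpolation and a best-fit surface is familiar from smoothing splines, which serve here as a conventional stand-in for GEM's tolerance factor; the scipy smoothing parameter s plays the role of the allowable error:&lt;/p&gt;

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.1, 40)   # data with scatter

exact = UnivariateSpline(x, y, s=0.0)    # zero tolerance: passes through every point
smooth = UnivariateSpline(x, y, s=0.4)   # loose tolerance: best-fit surface

print(float(np.abs(exact(x) - y).max()))     # essentially zero at the data
print(float(np.abs(smooth(x) - y).max()))    # nonzero: scatter smoothed away
```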

&lt;h2&gt;
  
  
  GEM Thinking
&lt;/h2&gt;

&lt;p&gt;Thinking is analogous to functional inversion: the determination of inputs that satisfy weighted constraints on other inputs and outputs. Training is: "Given x, solve for y". Thinking is: "Given y, solve for x". The results of thinking may or may not be unique, so constraints may be added to locate a unique solution, such as "Given y, solve for the minimum x".&lt;/p&gt;

&lt;p&gt;Thinking has a variety of applications in function optimization, process control, and matrix operations. For example, consider the matrix equation Ax=b. GEM can be trained with a set of (x=&amp;gt;b) training examples, so that given any x, GEM can generate the correct b without performing an Ax multiplication. Thinking allows GEM to generate x given a desired b, without performing a matrix inversion. If the solution is not unique, thinking allows GEM to generate the minimum x given a desired b, which results in the same solution as singular value decomposition (SVD). The difference between GEM and matrix operations is in the computation time. Matrix-vector multiplication is O(N^2), but GEM can generate a solution in O(1). Matrix inversion and singular value decomposition are O(N^3), but GEM is O(1). This means that GEM can perform the multiplication for a 2048x2048 A matrix with a speedup of 32 million times, and the singular value decomposition of the same matrix with a speedup of 140 billion times, including training time.&lt;/p&gt;
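
&lt;p&gt;For the linear case, the "given y, solve for the minimum x" behavior corresponds to the Moore-Penrose pseudoinverse, which is computed via SVD. This conventional baseline is what the O(1) claim above is being compared against:&lt;/p&gt;

```python
import numpy as np

# Underdetermined system: infinitely many x satisfy Ax = b.
A = np.array([[1.0, 2.0, 3.0]])
b = np.array([6.0])

x_min = np.linalg.pinv(A) @ b     # SVD-based minimum-norm solution
print(x_min)                      # [6, 12, 18] / 14
print(float(A @ x_min))           # reproduces b
```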

&lt;h2&gt;
  
  
  GEM Data Correction
&lt;/h2&gt;

&lt;p&gt;GEM has support for automatic data correction: filling in missing entries, detecting and correcting outliers (data entry errors or lie detection), fixing jitter caused by round-off or scatter, and finding the minimum number of training examples required to interpolate and extrapolate the entire training set. Cleaning up and repairing both input and output data can significantly improve accuracy, remove skewing effects, improve optimization, and extract more information from the data. &lt;/p&gt;

&lt;p&gt;This capability is not feasible without instantaneous training and thinking. Data correction requires rebuilding and retraining the neural network multiple times and using inversion to determine which inputs or outputs require correction. This can be a time-consuming process even with rapid training and thinking.&lt;/p&gt;

&lt;h2&gt;
  
  
  GEM Predictive Analytics
&lt;/h2&gt;

&lt;p&gt;Prediction can be complex, especially when multiple correlated inputs are involved. For example, as a person ages, their weight, blood sugar, and blood pressure may increase, decrease, or stay the same. However, certain changes are more likely than others. GEM determines the direction and magnitude of the change by fitting a multi-dimensional surface through a large number of data points. Once an accurate surface is generated, prediction consists of moving along the gradient, either up or down, depending on the direction of travel. &lt;/p&gt;

&lt;p&gt;Constraints on inputs must also be considered. For example, blood pressure and pulse may be allowed to change with age, but gender can be set as a constant. Height and number of children can be constrained to be non-decreasing.&lt;/p&gt;

&lt;p&gt;GEM can predict both forward and backward in time. The accuracy of unknown future predictions can be compared to the accuracy of estimating the past. For example, if the model can travel backward in time and accurately estimate a woman's weight, blood sugar, and blood pressure before and after each childbirth, and estimate her age at each childbirth, then future predictions of weight, blood sugar, blood pressure, and possible future childbirths can be trusted.&lt;/p&gt;
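
&lt;p&gt;"Moving along the gradient" of a fitted surface can be sketched with finite differences. The XOR-like saddle below is an illustrative stand-in for a GEM-fitted surface, and the step size is arbitrary:&lt;/p&gt;

```python
import numpy as np

def f(x, y):
    # Illustrative fitted surface: an XOR-like saddle, as in Figure 2
    return x + y - 2.0 * x * y

def grad(x, y, h=1.0e-5):
    # central finite differences approximate the surface gradient
    gx = (f(x + h, y) - f(x - h, y)) / (2.0 * h)
    gy = (f(x, y + h) - f(x, y - h)) / (2.0 * h)
    return gx, gy

# Predict "forward": increase x by a small step and let the dependent
# variable y follow the local gradient of the surface.
x, y, step = 0.2, 0.1, 0.05
gx, gy = grad(x, y)
x_next = x + step
y_next = y + step * gy
print(round(f(x_next, y_next), 3))
```

Stepping backward (decreasing x) with the same gradient corresponds to predicting backward in time, as described above.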

&lt;p&gt;The following figure shows how predictive analytics works on a 2D example. If X increases and Y is low, then Y and Z will increase, as shown in image (a) on the lower left. If X increases and Y is high, then Y will increase and Z will decrease, as shown in image (a) on the upper left. However, at the exact center, Y may either decrease to the front or increase to the back with equal probability. The direction is deterministic when slightly off-center. The prediction behavior is similar when Y increases, as shown in image (b). Predicting backward in time would be analogous to X or Y decreasing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnw5rsi52fv3m16fiaa7q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnw5rsi52fv3m16fiaa7q.png" alt="Image description" width="800" height="244"&gt;&lt;/a&gt;Figure 2. Possible prediction directions on an XOR surface at different locations and gradients. The left image (a) shows direction of travel when X increases. The right image (b) is when Y increases.&lt;/p&gt;

&lt;h2&gt;
  
  
  GEM vs EDM
&lt;/h2&gt;

&lt;p&gt;Empirical Dynamic Modeling (EDM) is a popular method used for forecasting and predictive analytics [9]. EDM is a step above a table lookup. Instead of simply using the closest point value in a table, EDM finds a set of nearby points and performs a linear simplex interpolation using only those points. Instead of a stepwise functional fit, EDM generates a piece-wise linear fit, with discontinuities in derivatives when moving from one set of nearby points to another. &lt;/p&gt;

&lt;p&gt;GEM pre-computes the function approximation and can generate an exact fit or a statistical non-linear best-fit through all the data points. GEM includes all the points and performs both interpolation and extrapolation, providing higher speed and superior surface fitting.&lt;/p&gt;
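
&lt;p&gt;The difference between piece-wise linear simplex interpolation and a smooth fit is easy to see in one dimension. Here np.interp stands in for EDM-style piece-wise linear interpolation and a cubic spline stands in for a smooth interpolant; neither is GEM itself:&lt;/p&gt;

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y = np.sin(np.pi * x)                 # true curve peaks at 1.0 when x = 0.5

xq = 0.375                            # query point between two samples
linear = float(np.interp(xq, x, y))   # piece-wise linear value (EDM-style)
smooth = float(CubicSpline(x, y)(xq)) # smooth interpolant
true = float(np.sin(np.pi * xq))

print(round(linear, 4), round(smooth, 4), round(true, 4))
```

The smooth interpolant lands closer to the true curve between samples, which is the behavior Figure 3 illustrates.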

&lt;p&gt;EDM seems to have an advantage because only a few nearby points are required for each time-step evaluation, saving considerable memory. However, all the points must still be stored on disk and accessed when searching for nearby points. &lt;/p&gt;

&lt;p&gt;GEM has the following advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Exact fit or smooth best-fit interpolation and extrapolation, depending on data error tolerance.&lt;/li&gt;
&lt;li&gt;Smooth interpolation and extrapolation, with peaks and troughs of correct shape and location for function minimization or maximization.&lt;/li&gt;
&lt;li&gt;Determines the minimum number of points that can interpolate and extrapolate all the other points, to omit unnecessary points.&lt;/li&gt;
&lt;li&gt;Detects and corrects errors in the data, such as outliers, unknowns, scatter, round-off, or jitter.&lt;/li&gt;
&lt;li&gt;Closed-form mathematical solution in O(1) for instantaneous computation on the GPU.&lt;/li&gt;
&lt;li&gt;Correctly handles correlated inputs for predictive analytics.&lt;/li&gt;
&lt;li&gt;Supports empirical inversion, or the ability to determine inputs that give desired outputs. For example, GEM could find the input that results in the maximum value in the following figure:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F86sk1vfg9q0fenwnp58r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F86sk1vfg9q0fenwnp58r.png" alt="Image description" width="800" height="478"&gt;&lt;/a&gt;Figure 3. Interpolation Comparison of Table Lookup, EDM, and GEM. GEM will generate a peak at the correct location, as shown with the red arrow.&lt;/p&gt;

&lt;h2&gt;
  
  
  GEM Case Studies
&lt;/h2&gt;

&lt;p&gt;Proof by induction is a simple and direct method for evaluating the performance of a neural network: analyze the performance in one, two, and then three dimensions, and the rest will follow. GEM supports data with thousands to millions of dimensions, but those cases are not included here, to reduce complexity and avoid hype. The purpose of these case studies is to demonstrate the capabilities and superior performance of GEM on simple, straightforward problems that are beyond the capabilities of the most advanced algorithms in statistics, linear algebra, linear regression, process control, AI, and machine learning. The inductive argument suggests that GEM will continue to have superior performance at higher dimensions as well.&lt;/p&gt;

&lt;h3&gt;
  
  
  1D Line
&lt;/h3&gt;

&lt;p&gt;Fitting a straight line through two points is trivial for linear regression but is a very challenging problem for an AI Neural Network, especially when using non-linear activation functions. The poor performance of AI Neural Networks and Deep Learning is one reason this simple training set is rarely published or discussed in the literature.&lt;/p&gt;
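
&lt;p&gt;For reference, the linear-regression version of this problem is a one-liner with zero error by construction:&lt;/p&gt;

```python
import numpy as np

# Two points determine the line exactly; a degree-1 least-squares fit
# recovers it and extrapolates with zero error.
x = np.array([0.0, 1.0])
y = np.array([1.0, 3.0])         # the line y = 2x + 1

slope, intercept = np.polyfit(x, y, 1)
print(slope, intercept)                    # 2.0 and 1.0, up to round-off
print(slope * 5.0 + intercept)             # extrapolation at x = 5
```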

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5dzgbxvyjisdr3vtfft1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5dzgbxvyjisdr3vtfft1.jpg" alt="Image description" width="800" height="449"&gt;&lt;/a&gt;Figure 4. GEM AI Neural Network line fit through two points with interpolation and extrapolation&lt;/p&gt;

&lt;p&gt;The above image shows how GEM performs when given two points. It has zero error at the training points and near perfect linear interpolation and extrapolation. A large neural network structure with nearly a thousand hidden concurrent layers is required, as it is very challenging to fit a perfectly straight line through two points using non-linear sigmoid activation functions.&lt;/p&gt;

&lt;p&gt;Despite the large size of 918 concurrent hidden layers, 1838 nodes, and 3676 links, the neural network requires 4 milliseconds to build and train on the GPU and has an evaluation time of 10 nanoseconds.&lt;/p&gt;

&lt;h3&gt;
  
  
  1D Curve
&lt;/h3&gt;

&lt;p&gt;GEM supports any number of inputs and outputs within memory constraints. In this case, there is one input and two outputs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ld7h270yqua8ac182nb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ld7h270yqua8ac182nb.jpg" alt="Image description" width="800" height="454"&gt;&lt;/a&gt;Figure 5. GEM AI Neural Network fitting curves through two sets of points. Unknown values are displayed in the data grid in yellow. Originally these values were blank in the data.&lt;/p&gt;

&lt;p&gt;This case study shows the accuracy of filling in unknowns given a single input and two outputs. The accuracy of filling in unknowns and correcting outliers increases with the number of inputs and outputs. Even when only a single value in a training example is known, the remaining values can be determined with surprising accuracy; in the case shown above, one-third of the input and output entries are unknown.&lt;/p&gt;

&lt;h3&gt;
  
  
  1D Outliers
&lt;/h3&gt;

&lt;p&gt;Outliers can skew or otherwise corrupt results, so they should either be removed or corrected. Removing outliers may result in loss of valuable data. GEM can automatically detect and correct outliers, thus preserving information in other inputs and outputs that may be correct.&lt;/p&gt;

&lt;p&gt;Outliers affect not only training but also testing. Outliers in testing data can result in rejecting a correctly trained AI Neural Network, or in accepting an incorrectly trained AI Neural Network that by chance matches an outlier.&lt;/p&gt;

&lt;p&gt;Outliers may be the result of data entry errors, uncorrected unknown or missing values, deceptive answers to survey questions, or measurement errors. Correcting outliers can significantly improve AI Neural Network accuracy, predictions, and data analysis, depending on the amount of data and the application.&lt;/p&gt;
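&lt;p&gt;GEM's own correction mechanism is not shown here. For context, a standard robust-statistics baseline flags outliers by their modified z-score, based on the median absolute deviation (MAD). The following is a minimal sketch of that baseline, not GEM code:&lt;/p&gt;

```python
import numpy as np

def flag_outliers(y, k=3.5):
    """Flag values whose modified z-score, based on the median
    absolute deviation (MAD), exceeds k. This is a standard
    robust-statistics baseline, not GEM's method."""
    y = np.asarray(y, dtype=float)
    med = np.median(y)
    mad = np.median(np.abs(y - med))
    if mad == 0:
        return np.zeros(len(y), dtype=bool)
    z = 0.6745 * (y - med) / mad  # 0.6745 scales MAD to sigma
    return np.abs(z) > k

data = [1.0, 1.1, 0.9, 1.05, 8.0, 0.95]
print(flag_outliers(data))  # only the 8.0 entry is flagged
```

A flagged entry could then be corrected by replacing it with a value interpolated from its neighbors, preserving the rest of the record as the text describes.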

&lt;p&gt;The following image considers the case in which apparent outliers should be taken seriously. Using a low tolerance, GEM will attempt to fit a surface through all training points, if possible. This may result in some small oscillations in regions near the apparent outliers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7azeufnqg7pl2lglsih4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7azeufnqg7pl2lglsih4.jpg" alt="Image description" width="800" height="454"&gt;&lt;/a&gt;Figure 6. GEM AI Neural Network fitting a curve through outliers at zero error.&lt;/p&gt;

&lt;p&gt;The following image shows the fit when GEM uses a high tolerance. The outliers are effectively ignored, but small skewing is still apparent.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fubhjbqmc4lvudopii9uw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fubhjbqmc4lvudopii9uw.jpg" alt="Image description" width="800" height="454"&gt;&lt;/a&gt;Figure 7. GEM AI Neural Network fitting a curve through outliers at high tolerance.&lt;/p&gt;

&lt;p&gt;The following image shows automatic detection and correction of outliers, shown in red. GEM can be instructed to correct the outlier inputs with a higher priority than the outputs, or vice versa. With more inputs and outputs, GEM will make the least number of corrections required, which means that it can determine which inputs or outputs are incorrect and make the proper corrections.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9l4h7av6vlb8jkqlm3r5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9l4h7av6vlb8jkqlm3r5.jpg" alt="Image description" width="800" height="454"&gt;&lt;/a&gt;Figure 8. GEM AI Neural Network correcting outliers.&lt;/p&gt;

&lt;p&gt;The following image shows automatic feature point determination in the presence of outliers. Only the end-points are required to interpolate and extrapolate all the points in the set.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxs9o6sewjm5he1v037lq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxs9o6sewjm5he1v037lq.jpg" alt="Image description" width="800" height="453"&gt;&lt;/a&gt;Figure 9. GEM AI Neural Network with feature points, after outlier corrections.&lt;/p&gt;

&lt;h3&gt;
  
  
  1D Scatter
&lt;/h3&gt;

&lt;p&gt;Small errors within the error tolerance are referred to as scatter, or jitter. Jitter can be caused by round-off or small measurement errors, and may also be the result of missing variables or low dimensionality. Jitter, like outliers, can skew results. GEM can correct jitter and significantly reduce its effects.&lt;/p&gt;

&lt;p&gt;The following image shows the GEM fit using a low tolerance. GEM will fit a curve through all the points with minimum error.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojx9i5cg074oq6wdl674.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojx9i5cg074oq6wdl674.jpg" alt="Image description" width="800" height="453"&gt;&lt;/a&gt;Figure 10. GEM AI Neural Network fit to scatter with low tolerance.&lt;/p&gt;

&lt;p&gt;The following image shows the GEM fit using a high tolerance. This line corresponds to the statistical linear-regression best-fit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5s0gfy9jrd8isnct454.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5s0gfy9jrd8isnct454.jpg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;Figure 11. GEM AI Neural Network fit to scatter with high tolerance.&lt;/p&gt;
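&lt;p&gt;The reference line that the high-tolerance fit converges to is ordinary least squares. The following NumPy sketch (illustration only, with synthetic jittered data, not GEM code) computes that best-fit line:&lt;/p&gt;

```python
import numpy as np

# Ordinary least squares: the line minimizing the sum of squared
# vertical errors, which the high-tolerance fit is said to match.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.3, size=x.size)  # jittered line

slope, intercept = np.polyfit(x, y, 1)
print(round(slope, 2), round(intercept, 2))  # close to 2.0 and 1.0
```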

&lt;p&gt;The following image shows corrected jitter and the selected feature points.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbaqov7o1f1e2wlo9zirk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbaqov7o1f1e2wlo9zirk.jpg" alt="Image description" width="800" height="453"&gt;&lt;/a&gt;Figure 12. GEM AI Neural Network fit to corrected scatter with feature points.&lt;/p&gt;

&lt;h3&gt;
  
  
  1D Anscombes
&lt;/h3&gt;

&lt;p&gt;Anscombe's Quartet [10] is a set of four different point distributions that share the same statistical best-fit line and standard deviations.&lt;/p&gt;

&lt;p&gt;These datasets were combined into a single training set, but several entries were left blank to make the combination possible. GEM was instructed to fill in the unknowns.&lt;/p&gt;

&lt;p&gt;The following image shows the GEM fit using a high tolerance. All four lines are essentially equivalent and correspond to the best-fit linear regressions. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vnzr3wz7sza8qbtl6r3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vnzr3wz7sza8qbtl6r3.jpg" alt="Image description" width="800" height="437"&gt;&lt;/a&gt;Figure 13. GEM AI Neural Network fit to Anscombe's Quartet with high tolerance.&lt;/p&gt;
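&lt;p&gt;The quartet's defining property is easy to verify directly. The following check uses the published Anscombe data (not GEM) to show that all four regressions agree:&lt;/p&gt;

```python
import numpy as np

# Anscombe's Quartet: four datasets with (near) identical regression
# lines, roughly y = 3.00 + 0.500 x, despite very different shapes.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = [
    (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
     [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
]

for x, y in quartet:
    slope, intercept = np.polyfit(x, y, 1)
    print(round(slope, 3), round(intercept, 2))  # roughly 0.5 and 3.0 each time
```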

&lt;p&gt;The following image shows the GEM fit using a low tolerance. All four curves show the respective differences of the point distributions. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkvabtmxu8rkznz1zng4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkvabtmxu8rkznz1zng4.jpg" alt="Image description" width="800" height="437"&gt;&lt;/a&gt;Figure 14. GEM AI Neural Network fit to Anscombe's Quartet with low tolerance.&lt;/p&gt;

&lt;h3&gt;
  
  
  1D Parabola
&lt;/h3&gt;

&lt;p&gt;One use of neural networks is for function approximation. The following image shows a GEM fit to evenly spaced points on a parabola.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftntomhlq4u9v1316ltpy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftntomhlq4u9v1316ltpy.jpg" alt="Image description" width="800" height="453"&gt;&lt;/a&gt;Figure 15. GEM AI Neural Network fit to a parabola.&lt;/p&gt;

&lt;p&gt;The following image shows that only 3 feature points are required to interpolate and extrapolate all the points on a parabola.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82aw8cwy43tfv6ekpcjy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82aw8cwy43tfv6ekpcjy.jpg" alt="Image description" width="800" height="453"&gt;&lt;/a&gt;Figure 16. GEM AI Neural Network with feature points fit to a parabola.&lt;/p&gt;
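&lt;p&gt;That three points suffice follows from a parabola having exactly three coefficients. The following NumPy sketch (an illustration with an assumed example parabola, not GEM code) shows a degree-2 fit through three feature points reproducing every other sample exactly:&lt;/p&gt;

```python
import numpy as np

# Three points determine a parabola uniquely, so a degree-2 fit
# through any three of its samples reproduces all other samples.
def f(t):
    return 0.5 * t * t - 2.0 * t + 1.0  # example parabola

feature_x = np.array([-2.0, 0.0, 3.0])           # three feature points
coeffs = np.polyfit(feature_x, f(feature_x), 2)  # recovers 0.5, -2, 1

test_x = np.linspace(-5.0, 5.0, 11)
print(np.allclose(np.polyval(coeffs, test_x), f(test_x)))  # True
```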

&lt;h3&gt;
  
  
  2D Cone
&lt;/h3&gt;

&lt;p&gt;The following image shows a GEM surface fit to random points on a cone. GEM has similar performance in lower dimensions as well as higher dimensions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fti8d4pkt1af202gzyxhu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fti8d4pkt1af202gzyxhu.jpg" alt="Image description" width="800" height="453"&gt;&lt;/a&gt;Figure 17. GEM AI Neural Network fit to a cone.&lt;/p&gt;

&lt;h3&gt;
  
  
  2D Logic Functions
&lt;/h3&gt;

&lt;p&gt;The following image shows interpolation properties of GEM when fit to logic functions, such as AND, OR, and XOR.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gohh5dni0vrz46v1jfn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gohh5dni0vrz46v1jfn.jpg" alt="Image description" width="800" height="452"&gt;&lt;/a&gt;Figure 18. GEM AI Neural Network fit to 2D logic functions.&lt;/p&gt;
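&lt;p&gt;XOR is the classic case among these because, unlike AND and OR, no single line separates its outputs. The following brute-force check (an illustration only, unrelated to GEM's method) makes that distinction concrete:&lt;/p&gt;

```python
import itertools

def separable(table):
    """Brute-force search over a coarse weight grid for a line
    w1*a + w2*b + c whose sign separates the 0s from the 1s of a
    binary truth table (single-layer perceptron decision rule)."""
    vals = [v / 2 for v in range(-8, 9)]  # weights -4.0 ... 4.0
    for w1, w2, c in itertools.product(vals, repeat=3):
        if all((w1 * a + w2 * b + c > 0) == bool(out)
               for (a, b), out in table.items()):
            return True
    return False

pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = {p: p[0] and p[1] for p in pts}
XOR = {p: p[0] ^ p[1] for p in pts}
print(separable(AND), separable(XOR))  # True False
```

AND succeeds with, e.g., a + b - 1.5; XOR provably has no separating line, which is why it is the standard stress test for single-layer networks.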

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;GEM can solve problems in statistics, predictive analytics, process control, optimization, engineering design, matrix operations, signal and image processing, machine learning, and AI. It provides solutions orders of magnitude faster and better than existing systems or methods, at high accuracy, especially with automatic data correction. GEM is implemented in GpuScript, which is free and open source [11]. GEM itself is available as a GpuScript library on a case-by-case basis.&lt;/p&gt;

&lt;p&gt;GEM is not just the end of AI, but the end of numerous other fields as well. GEM is the end of AI as we now know it, but it is not the end of mathematics. GEM is a new beginning. GEM is a paradigm shift in mathematics and will revolutionize the world. &lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;[1] Planck's principle. &lt;a href="https://en.wikipedia.org/wiki/Planck%27s_principle" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Planck%27s_principle&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[2] John McCarthy, American mathematician and computer scientist &lt;a href="https://www.britannica.com/biography/John-McCarthy" rel="noopener noreferrer"&gt;https://www.britannica.com/biography/John-McCarthy&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[3] The First Ever AI Chatbot: ELIZA (1966) &lt;a href="https://youtu.be/8jGpkdPO-1Y" rel="noopener noreferrer"&gt;https://youtu.be/8jGpkdPO-1Y&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[4] How three MIT students fooled the world of scientific journals &lt;a href="https://news.mit.edu/2015/how-three-mit-students-fooled-scientific-journals-0414" rel="noopener noreferrer"&gt;https://news.mit.edu/2015/how-three-mit-students-fooled-scientific-journals-0414&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[5] AI could replace equivalent of 300 million jobs - report &lt;a href="https://www.bbc.com/news/technology-65102150?no_head=1" rel="noopener noreferrer"&gt;https://www.bbc.com/news/technology-65102150?no_head=1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[6] Neuronal Arithmetic &lt;a href="https://www.nature.com/articles/nrn2864" rel="noopener noreferrer"&gt;https://www.nature.com/articles/nrn2864&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[7] How the Brain Can Rewire Itself After Half of It Is Removed &lt;a href="https://www.nytimes.com/2019/11/19/health/brain-removal-hemispherectomies-scans.html" rel="noopener noreferrer"&gt;https://www.nytimes.com/2019/11/19/health/brain-removal-hemispherectomies-scans.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[8] Boosting &lt;a href="https://en.wikipedia.org/wiki/Boosting_(machine_learning)" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Boosting_(machine_learning)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[9] Empirical Dynamic Modeling: &lt;a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11094587/" rel="noopener noreferrer"&gt;Explaining Empirical Dynamic Modelling using Verbal, Graphical and Mathematical Approaches&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[10] Anscombe's Quartet &lt;a href="https://en.wikipedia.org/wiki/Anscombe%27s_quartet" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Anscombe%27s_quartet&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[11] GpuScript &lt;a href="https://github.com/Alan-Rock-GS/GpuScript" rel="noopener noreferrer"&gt;https://github.com/Alan-Rock-GS/GpuScript&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>analytics</category>
    </item>
    <item>
      <title>GpuScript: C# is no longer just for the CPU.</title>
      <dc:creator>Alan Rock</dc:creator>
      <pubDate>Wed, 18 Dec 2024 22:45:54 +0000</pubDate>
      <link>https://dev.to/alan_rock_gs/gpuscript-c-is-no-longer-just-for-the-cpu-2g52</link>
      <guid>https://dev.to/alan_rock_gs/gpuscript-c-is-no-longer-just-for-the-cpu-2g52</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;GpuScript allows a software developer to program and debug the GPU, turning a single laptop with any GPU into a supercomputer. GpuScript can increase software development productivity by 50 times and make programs run a million times faster, depending on the application. GpuScript is open source, free, and requires 30 minutes to learn for a typical C# programmer.&lt;/p&gt;

&lt;h2&gt;
  
  
  GpuScript Performance
&lt;/h2&gt;

&lt;p&gt;A popular benchmark of various computer languages, made for entertainment purposes, is available &lt;a href="https://x.com/BenjDicken/status/1861072804239847914" rel="noopener noreferrer"&gt;here&lt;/a&gt;. The following figure shows a comparison with GpuScript, which was 100,000 times faster than C and over 10 million times faster than Python. Python/PyPy can incorporate GPU acceleration using vectorization and run 1000 times faster than plain Python, obtaining a computation time 10 times faster than C, but still 10,000 times slower than GpuScript. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6uuiudcdlq6zfmqdao9z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6uuiudcdlq6zfmqdao9z.png" alt="One Billion Loop Language Comparison" width="472" height="284"&gt;&lt;/a&gt;&lt;br&gt;
One Billion Nested Loop Language Comparison in μs.&lt;/p&gt;

&lt;p&gt;How can GpuScript achieve so much higher performance than other languages that use GPU acceleration? Just because a language or a program reports using GPU acceleration does not mean that it is utilizing the GPU to its full extent. GpuScript usually utilizes only 20% of the GPU's potential, and multiple processes using the GS Cloud running on the same computer are required to take full advantage of both CPU and GPU cores. However, even a single process is still several orders of magnitude faster than other languages using GPU acceleration.&lt;/p&gt;

&lt;p&gt;GpuScript accomplishes high speeds in several ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minimizing CPU/GPU memory transfers and transferring memory only when necessary&lt;/li&gt;
&lt;li&gt;Allowing more code to be ported from the CPU to the GPU&lt;/li&gt;
&lt;li&gt;Reducing GPU calls by utilizing Group Shared Memory and Intrinsic Functions when possible&lt;/li&gt;
&lt;li&gt;Allowing programs to be designed specifically for the GPU rather than relying on GPU acceleration to speed up code designed to run on the CPU&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;GpuScript could be considered a type of GPU acceleration technique, because the GpuScript framework itself runs entirely in C# on the CPU, using the standard .NET CLR with no libraries or extensions other than Unity. GpuScript generates GPU code and simulates, controls, and communicates with the GPU, but the framework itself does not actually run on the GPU.&lt;/p&gt;
&lt;h2&gt;
  
  
  GpuScript is a Language
&lt;/h2&gt;

&lt;p&gt;Early Haskell implementations were written in LISP. The first C++ compiler translated C++ into C. GpuScript is written in C#. Although GpuScript is similar to C#, it is actually an amalgamation of several languages, including HLSL, ShaderLab, OpenCL, OpenGL, and CUDA. However, there is no need to learn all these languages: C# is all that is required. Java, JavaScript, C++, Python, and other Object-Oriented Programming (OOP) languages share the same core concepts, so a working knowledge of any of them is sufficient.&lt;br&gt;
GpuScript supports programming and debugging the GPU in OOP, which makes it easier to write large and complex programs that run almost entirely on the GPU. GPU programs are usually difficult to write and debug, so GPU routines are typically small and simple. This requires significant memory transfer between the CPU and GPU, which can be reduced by moving more of the workload to the GPU. &lt;/p&gt;
&lt;h3&gt;
  
  
  Enum
&lt;/h3&gt;

&lt;p&gt;Enumerations are not supported in typical GPU shading languages but are fully supported in GpuScript.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;  &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;enum&lt;/span&gt; &lt;span class="n"&gt;Rate&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;Low&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Medium&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;High&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="n"&gt;Rate&lt;/span&gt; &lt;span class="n"&gt;rate&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Rate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Low&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Swizzling
&lt;/h3&gt;

&lt;p&gt;Swizzling is a language construct from GPU programming languages that allows reordering or copying vector components. The following code declares three float2 variables: a = float2(0, 1), b = float2(1, 0), and c = float2(1, 1).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;  &lt;span class="n"&gt;float2&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;f01&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;yx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;xx&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;GpuScript also supports a type of swizzling for initializing vector components with -1, 0, and 1. The following code declares three float2 variables: a = float2(0, 1), b = float2(0, 0), and c = float2(-1, 1).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;  &lt;span class="n"&gt;float2&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;f01&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;f00&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;f_1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Vector Comparisons
&lt;/h3&gt;

&lt;p&gt;All() and Any() are GPU language constructs for comparing vector components. Operator overloading is used in GpuScript to return a vector when comparing vectors. This allows All() and Any() to be supported the same on both the CPU and GPU.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;  &lt;span class="n"&gt;float2&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;f01&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;f10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kt"&gt;bool&lt;/span&gt; &lt;span class="n"&gt;all_less&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;All&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;//false, a.y &amp;gt; b.y&lt;/span&gt;
  &lt;span class="n"&gt;Bool&lt;/span&gt; &lt;span class="n"&gt;any_less&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;//true , a.x &amp;lt; b.x&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  New
&lt;/h3&gt;

&lt;p&gt;GPU languages do not require a "new" keyword when declaring vectors or structs, but C# does. To resolve this difference, GpuScript allows both.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;  &lt;span class="n"&gt;float2&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;float2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;float2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Inheritance
&lt;/h3&gt;

&lt;p&gt;Methods from the base class and libraries may be overridden by derived classes. This allows library customization to directly access derived-class data without costly reorganization of function input or output data structures. &lt;/p&gt;

&lt;h3&gt;
  
  
  Memory Transfer
&lt;/h3&gt;

&lt;p&gt;Transferring memory between the CPU and GPU can be costly. GpuScript keeps track of all CPU and GPU memory access to minimize memory transfer and synchronize CPU and GPU memory.&lt;/p&gt;

&lt;h3&gt;
  
  
  Intrinsic Functions
&lt;/h3&gt;

&lt;p&gt;GpuScript handles intrinsic interlocked functions similarly to GPU languages. Intrinsic functions only work with integer and unsigned integer buffers, but can be very powerful. For example, InterlockedMin can locate the few smallest elements in an array without sorting. InterlockedAdd can determine the sum of an array, perform matrix multiplication, compute an FFT, sum forces in a Distinct Element model, or sum signals in a neural network node in O(1). Depending on the application, intrinsic functions can achieve several orders of magnitude faster performance.&lt;/p&gt;
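&lt;p&gt;The pattern behind these intrinsics can be sketched outside GpuScript. In this Python analogy (not GpuScript code), a lock stands in for the hardware atomicity the GPU provides, and each worker folds its element into shared accumulators:&lt;/p&gt;

```python
import threading

# InterlockedAdd / InterlockedMin pattern: many workers atomically
# fold their element into shared accumulators, with no sorting and
# no serial pass. On a GPU the hardware guarantees atomicity; in
# this CPU analogy a lock stands in for it.
data = [7, 3, 9, 1, 5]
state = {"sum": 0, "min": float("inf")}
lock = threading.Lock()

def worker(value):
    with lock:
        state["sum"] += value                    # like InterlockedAdd
        state["min"] = min(state["min"], value)  # like InterlockedMin

threads = [threading.Thread(target=worker, args=(v,)) for v in data]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(state)  # {'sum': 25, 'min': 1}
```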

&lt;h3&gt;
  
  
  Group Shared Memory
&lt;/h3&gt;

&lt;p&gt;GroupSharedMemory / AllMemoryBarrierWithGroupSync are GPU language constructs to reduce global GPU update calls. GpuScript supports full debugging of these constructs, significantly speeding up algorithms such as FFT, AppendBuff, prefix sums, sorting, random number generation, and reduction techniques.&lt;/p&gt;
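&lt;p&gt;The kind of algorithm that benefits can be illustrated independently of GpuScript. The following Python sketch is a Hillis-Steele inclusive prefix sum; on the GPU, each pass would end with a group-sync barrier over group shared memory:&lt;/p&gt;

```python
def inclusive_scan(values):
    """Hillis-Steele inclusive prefix sum: ceil(log2 n) passes; the
    pass with stride `offset` adds the element `offset` slots earlier.
    On the GPU every pass ends with AllMemoryBarrierWithGroupSync;
    here the passes simply run one after another."""
    a = list(values)
    n = len(a)
    for d in range((n - 1).bit_length()):
        offset = 2 ** d
        prev = list(a)  # snapshot: all "threads" read, barrier, write
        for i in range(offset, n):
            a[i] = prev[i] + prev[i - offset]
    return a

print(inclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))
# [3, 4, 11, 11, 15, 16, 22, 25]
```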

&lt;h2&gt;
  
  
  GPU Compute Kernels
&lt;/h2&gt;

&lt;p&gt;GPU languages require each kernel to have a thread block declaration, to attach every buffer and method the kernel uses, and to declare all methods before they are called. This becomes a daunting task as program complexity and size increase. GpuScript handles all these requirements, so the programmer simply writes and organizes code the same as when working with any OOP language. &lt;/p&gt;

&lt;h2&gt;
  
  
  Graphics Shaders
&lt;/h2&gt;

&lt;p&gt;GpuScript redesigns how graphics shaders are written and used, so that using one is more similar to calling a normal function. This makes it easier to design large and complex graphics systems that display 3D volumetric raymarching, axes, legends, signals, and millions of spheres, arrows, lines, and 3D text, all using a single graphics shader.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code Duplication
&lt;/h3&gt;

&lt;p&gt;The CPU code, compute shaders, and graphics shaders often use the same data, variables, and methods, resulting in code duplication. This complicates implementation and debugging. GpuScript hides these details from the programmer, who works with only a single version of the code and data.&lt;/p&gt;

&lt;h2&gt;
  
  
  GpuScript is a Code Generator
&lt;/h2&gt;

&lt;p&gt;GpuScript presents an alternative approach to programming that is highly efficient and productive. GpuScript generates all the boilerplate code for the UI, GPU, and CPU, leaving only program-critical code to be filled in by the programmer. GpuScript analyzes the code and creates a checkbox when a boolean is declared, a button when a method is defined, a textbox with a scrollbar when a float is declared, and a grid of UI elements when a class or struct array is declared. GpuScript does the same for GPU compute kernels, buffers, variables, and graphics shaders. The programmer simply writes code with some additional attribute properties, and GpuScript does the rest. The programmer still has complete control to override the generated UI and code, but the time savings in coding and debugging increase productivity by orders of magnitude. This approach also hides coding details from the programmer, making GPU programming easy to learn and efficient to develop, with programs that run at supercomputer speeds. &lt;/p&gt;

&lt;h3&gt;
  
  
  Opposite of Visual Programming
&lt;/h3&gt;

&lt;p&gt;Visual Programming Languages (VPLs) have been a popular style for decades: Visual Basic, Visual C++, Visual C#, Delphi Pascal, LabVIEW, etc. VPLs are highly manual and difficult to automate, increasing UI development time on average. VPLs require considerable screen space, are difficult to debug, and require several levels of text-based menus and trees for UI settings. VPLs also have a steep learning curve, especially for experienced programmers. GpuScript is entirely text-based, the exact opposite of VPLs. GpuScript generates UI code by examining program-critical code, thus eliminating the need to manually develop UI code. Due to this high level of automation, the developer spends almost no time building the UI, and the UI is highly consistent across all applications. This approach is applied not only to the UI, but also to the GPU. The paradox is that GpuScript allows the developer to program the GPU without actually writing GPU code. GpuScript generates both UI and GPU code by examining program-critical code. This is the key to how GpuScript achieves high productivity with a shallow learning curve.&lt;/p&gt;

&lt;p&gt;A note about UI. GpuScript generates a generic UI using UIBuilder and UIElements in Unity. The appearance of the UI can be customized using UXML files. GpuScript can be instructed to generate applications with no UI, allowing the programmer to implement a custom UI if desired. Since GpuScript is open source, the programmer can modify the code that generates the UI, so that GpuScript can automatically generate a completely different custom UI.&lt;/p&gt;

&lt;h2&gt;
  
  
  GpuScript is a GPU Simulator
&lt;/h2&gt;

&lt;p&gt;GpuScript simulates all GPU operations on the CPU: scalars, vectors, matrices, methods, buffers, group-shared memory, sync operations, and threads. This allows part of a program to run on the CPU while the rest runs on the GPU. Because the GPU is much faster than the CPU for many tasks, this capability is essential for debugging at full scale.&lt;/p&gt;

&lt;p&gt;How is it possible for a CPU with a limited thread pool to simulate the huge number of concurrent threads on a GPU? GpuScript accomplishes this with coroutines (iterator blocks), which allow a comprehensive GPU simulation to run entirely on the main thread, without creating additional CPU threads.&lt;/p&gt;
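&lt;p&gt;The coroutine trick can be illustrated in Python, where generators play the role of C# iterator blocks (this is a sketch of the concept, not GpuScript code): each yield marks a barrier sync point, and a single-threaded scheduler advances every simulated GPU thread to the next barrier before continuing.&lt;/p&gt;

```python
def kernel(tid, shared):
    """One simulated GPU thread; yield acts like a group sync barrier."""
    shared[tid] = tid + 1                   # phase 1: fill own slot
    yield                                   # barrier: all writes now visible
    left = shared[(tid - 1) % len(shared)]  # phase 2: read a neighbour
    yield                                   # barrier: all reads done
    shared[tid] += left                     # phase 3: write without racing

def dispatch(kernel, n_threads):
    """Single-threaded scheduler: advance every thread one phase at a time."""
    shared = [0] * n_threads
    threads = [kernel(t, shared) for t in range(n_threads)]
    alive = True
    while alive:
        alive = False
        for t in threads:
            try:
                next(t)        # run this thread up to its next barrier
                alive = True
            except StopIteration:
                pass
    return shared
```

&lt;p&gt;Because every thread reaches a barrier before any thread passes it, the sequential simulation matches the parallel semantics of the real GPU kernel.&lt;/p&gt;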

&lt;h2&gt;
  
  
  GpuScript Implementation
&lt;/h2&gt;

&lt;p&gt;GpuScript is integrated into Unity and consists of several components: an editor window for building projects and libraries, a series of classes and structs for simulating the GPU, an automated and persistent UI, and a set of hierarchical precompiled libraries, both internal and external. GpuScript is implemented in a mostly Functional Programming style using C#, with HLSL macros and functions to make HLSL and ShaderLab conform to C#. GpuScript automates many Unity tasks related to GPU programming, such as creating compute shaders, graphics shaders, Unity materials, and numerous settings and links. &lt;/p&gt;

&lt;h3&gt;
  
  
  CPU and GPU Programming Differences
&lt;/h3&gt;

&lt;p&gt;It is no surprise that serial and massively parallel programming styles differ. Simply parallelizing the loops in CPU code yields minimal speedups; redesigning the code is the only way to fully utilize the GPU. Parallel programming is a different style of programming, and large speedups come from running many models at the same time, each with different parameters. A single FFT can be tuned to run slightly faster, but significant speedups come from running millions of FFTs at once.&lt;/p&gt;

&lt;h3&gt;
  
  
  GpuScript is Open Source
&lt;/h3&gt;

&lt;p&gt;GpuScript is open source and free to use. Unity is also free for individual programmers and small companies. &lt;/p&gt;

&lt;h2&gt;
  
  
  GpuScript Libraries
&lt;/h2&gt;

&lt;p&gt;GpuScript can be expanded and upgraded with available libraries. &lt;/p&gt;

&lt;p&gt;Since the GpuScript libraries are written in GpuScript, the libraries naturally have exceptional performance. &lt;/p&gt;

&lt;h3&gt;
  
  
  AppendBuff
&lt;/h3&gt;

&lt;p&gt;Append buffers are included in most GPU programming languages and are often built directly into GPU firmware. GpuScript includes a library that rivals the speed and memory requirements of native GPU append buffers. AppendBuff supports prefix sums as well as append buffers, uses 32 times less memory, and does not require knowing the append buffer size in advance to avoid crashing the computer. On some GPUs, native append buffers are notorious for returning incorrect results, a problem AppendBuff avoids.&lt;/p&gt;
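&lt;p&gt;The prefix-sum idea behind append buffers can be sketched in Python (illustrative only, not the AppendBuff implementation): an exclusive prefix sum over per-thread flags gives every appended element a unique output slot, with no atomic counter to contend over.&lt;/p&gt;

```python
def compact(values, keep):
    """Stream compaction via exclusive prefix sum, emulating an append buffer."""
    flags = [1 if keep(v) else 0 for v in values]
    # exclusive prefix sum: slots[i] = number of kept elements before i
    slots, total = [], 0
    for f in flags:
        slots.append(total)
        total += f
    out = [None] * total        # exact size known before writing
    for v, f, s in zip(values, flags, slots):
        if f:
            out[s] = v          # each "thread" writes its own unique slot
    return out

# e.g. keep the even numbers out of 0..9
compact(range(10), lambda v: v % 2 == 0)
```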

&lt;h3&gt;
  
  
  BDraw
&lt;/h3&gt;

&lt;p&gt;GpuScript includes a library for drawing pixels and billboard graphical objects, including spheres, lines, arrows, signals, and 3D text. Billboards are rectangles that rotate to face the camera. A sphere billboard always faces the camera directly, while lines, arrows, signals, and text rotate about their local x-axis so that the y- and z-axes face the camera. Billboards are rendered with a pixel shader for high-resolution output and interact well with other Unity graphical objects and models. They give the appearance of full 3D graphics at very high speed, with the capability to draw tens of millions of billboards at high frame rates.&lt;/p&gt;
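&lt;p&gt;The underlying math is simple. As a sketch (in Python, not the BDraw shader code), a billboard basis can be built so that the local z-axis always points at the camera:&lt;/p&gt;

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def billboard_basis(center, camera, up=(0.0, 1.0, 0.0)):
    """Orthonormal basis whose z-axis points from the billboard to the camera."""
    z = normalize(tuple(c - p for c, p in zip(camera, center)))  # face camera
    x = normalize(cross(up, z))   # billboard right vector
    y = cross(z, x)               # billboard up vector
    return x, y, z
```

&lt;p&gt;Each frame, the quad's corners are expanded along x and y, which keeps the face presented to the viewer.&lt;/p&gt;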

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbomu9ag7ig0a6ew2aep0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbomu9ag7ig0a6ew2aep0.png" alt="BDraw Spheres, Arrows, and Text" width="800" height="403"&gt;&lt;/a&gt;&lt;br&gt;
BDraw Spheres, Arrows, and Text&lt;/p&gt;

&lt;h3&gt;
  
  
  OCam_Lib
&lt;/h3&gt;

&lt;p&gt;OCam is a library that includes an orbit camera, multiple camera views, and a legend.&lt;/p&gt;

&lt;h3&gt;
  
  
  View_Lib
&lt;/h3&gt;

&lt;p&gt;View_Lib is a library that can save and load selected settings, such as camera viewing parameters, in a grid for quick access with a short-cut keystroke.&lt;/p&gt;

&lt;h3&gt;
  
  
  Report_Lib
&lt;/h3&gt;

&lt;p&gt;Report_Lib is an important library and a prerequisite for most projects. Report_Lib is to an application what a batch file is to an operating system, except that it is more powerful than a batch file or command-line arguments.&lt;/p&gt;

&lt;p&gt;A report is a text file of instructions that can control and automate every aspect of the program. It can generate reports and documentation, or run data analyses with tables, equations, figures, and animations. Report_Lib was used to generate all the documentation for GpuScript and for the libraries documented on the GpuScript.com website.&lt;/p&gt;

&lt;p&gt;Report_Lib can also perform thorough testing for debugging purposes, on both the CPU and GPU. It replaces unit-testing and profiling tools, and works well for both computation and graphics.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project_Lib
&lt;/h3&gt;

&lt;p&gt;Project_Lib adds support for multiple projects in an application. Projects may be selected, created, copied, renamed, or archived.&lt;/p&gt;

&lt;h3&gt;
  
  
  Backup_Lib
&lt;/h3&gt;

&lt;p&gt;Backup_Lib makes it quick and easy to back up code or data locally or to an external hard drive.&lt;/p&gt;

&lt;h3&gt;
  
  
  Puppeteer_Lib
&lt;/h3&gt;

&lt;p&gt;Puppeteer_Lib automates the Google Chrome browser. Almost anything that can be done manually in a browser can be done with Puppeteer_Lib, including searching and downloading data, language translation, maps, etc. It reduces or eliminates the need for web APIs to perform the same tasks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloud_Lib
&lt;/h3&gt;

&lt;p&gt;Cloud_Lib adds multi-user support and multi-process distributed computing to an application. For example, a desktop computer or laptop may have 8 CPU cores and a GPU. Running a single application may only utilize 20% of the GPU, and the main thread may only run on a single CPU core. Cloud_Lib allows the application to run multiple times on a single computer, fully utilizing the GPU and in this case obtaining a 5 times speedup. A local area network (LAN) with 10 computers could achieve a 50 times speedup. Ten LANs connected across the internet could achieve a 500 times speedup, depending on the application.&lt;/p&gt;
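&lt;p&gt;The scaling argument above is straightforward arithmetic (the 20% utilization figure is just the example used here):&lt;/p&gt;

```python
def cloud_speedup(gpu_utilization, machines):
    """Ideal speedup: processes that fit on one GPU, times machine count."""
    per_machine = 1.0 / gpu_utilization   # e.g. 20% utilization -> 5 processes
    return per_machine * machines
```

&lt;p&gt;Real-world speedup depends on how well the workload partitions and on network overhead, as noted above.&lt;/p&gt;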

&lt;p&gt;Data may be cached for instant access. Methods and coroutines are easily tasked and optimally scheduled for distributed processing. Results are combined when processing is complete. &lt;/p&gt;

&lt;p&gt;Cloud_Lib keeps track of all connections, supports multiple licensing for multiple applications with password protection, and supports triple encryption. &lt;/p&gt;

&lt;h3&gt;
  
  
  Rand
&lt;/h3&gt;

&lt;p&gt;Random numbers are very useful for statistics, simulation, search, integration, scheduling, simulated annealing (e.g., the traveling salesman problem), and Monte Carlo methods. With the Rand library, random numbers are quick to initialize and generate on the GPU. The NVIDIA GPU Gems publication devotes 20 pages to &lt;a href="https://developer.nvidia.com/gpugems/gpugems3/part-vi-gpu-computing/chapter-37-efficient-random-number-generation-and-application" rel="noopener noreferrer"&gt;a random number generator in CUDA&lt;/a&gt;, but states that initializing the random numbers on the GPU is beyond its scope; it can take considerable time to initialize random numbers on the CPU and transfer them to GPU memory. The Rand library also provides a variety of random number functions for generating different distributions and geometries.&lt;/p&gt;

&lt;p&gt;Rand was benchmarked at an equivalent 0.8 nanoseconds per floating-point random number (24 operations) on a GPU rated at 20 TFLOPS.&lt;/p&gt;
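&lt;p&gt;A common GPU-friendly way to get per-thread random numbers with no CPU-side initialization or transfer is a stateless integer hash. The sketch below uses the well-known Wang hash, in Python for illustration (Rand's actual algorithm may differ): every thread derives its own value purely from its thread id.&lt;/p&gt;

```python
def wang_hash(seed):
    """Wang's 32-bit integer hash; invertible, so distinct seeds never collide."""
    seed = (seed ^ 61) ^ (seed >> 16)
    seed = (seed * 9) & 0xFFFFFFFF
    seed = seed ^ (seed >> 4)
    seed = (seed * 0x27D4EB2D) & 0xFFFFFFFF
    seed = seed ^ (seed >> 15)
    return seed

def thread_random(thread_id, frame):
    """Uniform float in [0, 1) derived purely from per-thread integers."""
    return wang_hash((thread_id * 0x9E3779B9 + frame) & 0xFFFFFFFF) / 2**32
```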

&lt;h3&gt;
  
  
  VGrid_Lib
&lt;/h3&gt;

&lt;p&gt;GpuScript contains a library for 3D volumetric rendering that is unrivaled in speed and resolution. Marching Cubes is commonly used to generate a set of triangles for each voxel, but the number of triangles is initially unknown and often requires extra computation to determine. Ill-formed triangles either cause inconsistencies or require additional computation and storage to remove or avoid, and the resulting mesh may require considerable memory and vary considerably from one contour to the next. VGrid stores or computes only one value per voxel and renders smooth ray-traced contours directly on the GPU, with no triangles at all. The result is full-resolution CT scans rendered at hundreds of frames per second.&lt;/p&gt;
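&lt;p&gt;The contrast with triangle extraction can be shown with a minimal ray-marching sketch (Python, for illustration only; VGrid's GPU renderer is far more sophisticated): step along each view ray and sample the scalar field directly until it crosses the contour threshold, with no mesh ever built.&lt;/p&gt;

```python
def march(field, origin, direction, threshold, step=0.01, max_t=10.0):
    """Return the distance at which field first reaches threshold, else None."""
    t = 0.0
    while t < max_t:
        p = tuple(o + d * t for o, d in zip(origin, direction))
        if field(p) >= threshold:
            return t
        t += step
    return None

# toy scalar field: "density" 1 inside the unit sphere, 0 outside
density = lambda p: 1.0 if sum(c * c for c in p) <= 1.0 else 0.0
```

&lt;p&gt;A ray fired from (0, 0, -3) toward +z hits the unit sphere at a distance of about 2 units.&lt;/p&gt;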

&lt;h3&gt;
  
  
  GEM_Lib
&lt;/h3&gt;

&lt;p&gt;Geometric Empirical Modeling (GEM) is a GpuScript library that revolutionizes AI neural networks. The initial motivation for writing GpuScript was to implement GEM on the GPU. GEM solves neural networks directly and instantaneously, including the neural network structure and all weights and offsets. Neural networks are the core of much of AI and machine learning, including the recently popular Large Language Models (LLMs). GEM eliminates the need for trial and error to determine the number of hidden layers and the number and type of nodes in each layer. GEM reduces or eliminates the need to simplify data representation, to reduce non-linearity, dimensionality, clustering, and input correlations. GEM eliminates high computation requirements for training, problems with over-fitting or under-fitting, problems with getting stuck in a local minimum, and problems selecting an optimal learning rate. In other words, GEM eliminates the need for AI experts, AI factories, and GPU super-clusters. GEM is a paradigm shift in AI. A separate paper dedicated to GEM will be published on Dev.to.&lt;/p&gt;

&lt;h3&gt;
  
  
  Matrix_Lib
&lt;/h3&gt;

&lt;p&gt;GpuScript contains a high-speed library for matrix operations. Matrix multiplication has long been a standard benchmark for comparing supercomputer performance. A matrix-vector multiplication is typically O(N^2): a 4096 x 4096 matrix times a vector requires roughly 16.8 million floating-point multiplications and as many additions. Assigning one GPU thread per output element changes the order to O(N), with each thread performing only about 8192 FLOPs.&lt;/p&gt;

&lt;p&gt;The GpuScript Matrix library scales the matrix and combines multiplications using intrinsic addition. This reduces the per-thread work from O(N) to O(1), meaning each thread requires only a single floating-point operation. The result is an incredible 23 PFLOPS on a GPU rated at only 20 TFLOPS: the library can perform a matrix multiplication in the equivalent of 1.44 nanoseconds.&lt;/p&gt;

&lt;p&gt;GEM, moreover, can be trained on matrix-vector pairs and perform matrix inversion and singular value decomposition, typically O(N^3), in O(1). In these cases, the speedup is practically immeasurable.&lt;/p&gt;
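&lt;p&gt;The one-thread-per-row mapping described above looks like this in plain Python (a sketch of the parallel decomposition, not the Matrix_Lib kernels): an O(N^2) matrix-vector product becomes N independent O(N) dot products.&lt;/p&gt;

```python
def matvec_row(A, x, row):
    """Work done by a single GPU thread: one dot product, about 2N FLOPs."""
    return sum(a * b for a, b in zip(A[row], x))

def matvec(A, x):
    # on a GPU, every row below would run as its own thread, simultaneously
    return [matvec_row(A, x, r) for r in range(len(A))]
```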

&lt;h3&gt;
  
  
  FFT_Lib
&lt;/h3&gt;

&lt;p&gt;The GpuScript Fast Fourier Transform (FFT) library can compute a 4096-sample FFT in the equivalent of 3 nanoseconds. Applying the same rescaling-with-intrinsic-addition technique used in the Matrix library could further improve performance and allow transforming signals of arbitrary size.&lt;/p&gt;
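&lt;p&gt;For reference, this is the textbook radix-2 Cooley-Tukey recursion from which GPU FFTs are ultimately derived (illustrative Python; FFT_Lib's GPU implementation is organized very differently): O(N log N) for power-of-two N.&lt;/p&gt;

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])   # split into half-size problems
    out = [0] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out
```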

&lt;h3&gt;
  
  
  Sort_Lib
&lt;/h3&gt;

&lt;p&gt;Sorting is typically O(N log N) on the CPU and O(log N) on the GPU; the GpuScript Sort library is O(1). It can sort a 2048-element floating-point array in an equivalent 0.2 nanoseconds, and will be expanded to sort 4M-element arrays in O(1) using an entirely new sorting algorithm.&lt;/p&gt;
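&lt;p&gt;For comparison, the classic GPU sorting approach is a bitonic network, in which every compare-exchange within a stage is independent and can run as its own GPU thread (this sketch shows the standard algorithm, not GpuScript's new one):&lt;/p&gt;

```python
def bitonic_sort(a):
    """In-place bitonic sort; len(a) must be a power of two."""
    n = len(a)
    k = 2
    while k <= n:                       # stage size
        j = k // 2
        while j > 0:                    # compare-exchange distance
            for i in range(n):          # each i is an independent GPU thread
                p = i ^ j
                if p > i:
                    ascending = (i & k) == 0
                    if (a[i] > a[p]) == ascending:
                        a[i], a[p] = a[p], a[i]
            j //= 2
        k *= 2
    return a
```

&lt;p&gt;The network has O(log^2 N) stages, each fully parallel, which is why sorting maps so well to the GPU.&lt;/p&gt;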

&lt;h3&gt;
  
  
  Library Dependencies
&lt;/h3&gt;

&lt;p&gt;Each library's prerequisites, if any, follow the arrow:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;AppendBuff&lt;/li&gt;
  &lt;li&gt;Backup&lt;/li&gt;
  &lt;li&gt;BDraw =&amp;gt; AppendBuff&lt;/li&gt;
  &lt;li&gt;Cloud =&amp;gt; Puppeteer&lt;/li&gt;
  &lt;li&gt;FFT&lt;/li&gt;
  &lt;li&gt;GEM =&amp;gt; AppendBuff, Rand&lt;/li&gt;
  &lt;li&gt;Matrix&lt;/li&gt;
  &lt;li&gt;OCam =&amp;gt; BDraw&lt;/li&gt;
  &lt;li&gt;Project&lt;/li&gt;
  &lt;li&gt;Puppeteer&lt;/li&gt;
  &lt;li&gt;Rand&lt;/li&gt;
  &lt;li&gt;Report =&amp;gt; Puppeteer&lt;/li&gt;
  &lt;li&gt;VGrid =&amp;gt; BDraw&lt;/li&gt;
  &lt;li&gt;Views&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;GpuScript presents a new approach to GPU program development. Debugging is made possible by a full GPU simulator running on a single thread. CPU/GPU memory transfer is significantly reduced through memory I/O management and by moving the majority of the workload from the CPU to the GPU. UI development is simplified by embedding the UI directly in the code, and GPU development is embedded in the code the same way. Code translation allows both GPU computation and graphics to be implemented entirely in C#, without the need to learn CUDA, HLSL, ShaderLab, or other GPU languages.&lt;/p&gt;

&lt;p&gt;GpuScript is a paradigm shift in programming. It is not like changing from Java to C#, which have essentially the same productivity and performance. Programmers are used to adapting to small shifts in technology, but GpuScript is an extinction event: if computer languages were selected on the basis of productivity and performance, GpuScript would result in the near extinction of all other programming languages. People are naturally resistant to extreme changes that revolutionize entire industries, and paradigm shifts usually take considerable time to become mainstream.&lt;/p&gt;

&lt;p&gt;The bottom line: with GpuScript, the average programmer can complete projects more efficiently, those projects run orders of magnitude faster, and the learning curve is smaller. No matter the dedication, motivation, perseverance, intelligence, or hard work, it is the tools that make all the difference: more grass can be cut with a lawnmower than with scissors, more snow can be moved with a snowplow than with a teaspoon, and more computation can be achieved on a laptop with GpuScript than with any other language.&lt;/p&gt;

&lt;h2&gt;
  
  
  Link
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/Alan-Rock-GS/GpuScript" rel="noopener noreferrer"&gt;GpuScript on Github, free and open source&lt;/a&gt;&lt;/p&gt;

</description>
      <category>csharp</category>
      <category>gpu</category>
      <category>gpgpu</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
