shangkyu shin

Posted on • Originally published at zeromathai.com

AI Scientific Methodology (1990–2010): How AI Shifted from Rules to Probabilistic Learning and Neural Networks

Cross-posted from Zeromath. Original article: https://zeromathai.com/en/ai-scientific-methodology-en/

AI did not become modern just because models got bigger. The real turning point came when the field moved away from hand-written rules and started treating intelligence as something that could be modeled mathematically, learned from data, and evaluated under uncertainty. From roughly 1990 to 2010, AI shifted from symbolic systems toward probabilistic reasoning, optimization, neural networks, and generalization theory. This period matters because it laid the foundation for modern machine learning long before deep learning became dominant.

Why This Period Was a Real Paradigm Shift

After the limitations of expert systems and the AI Winter, the field faced a hard question:

How can AI become a real science instead of a collection of brittle hand-built tricks?

That question changed the direction of AI.

Earlier systems often depended on:

  • explicit rules
  • manual knowledge encoding
  • narrow problem settings
  • fragile symbolic logic

The emerging view was different:

  • real-world environments contain uncertainty
  • useful systems must adapt to data
  • performance should be measured empirically
  • learning should be formalized mathematically

This was the moment AI became much more statistical, predictive, and optimization-driven.


1. From Rule-Based AI to Learning-Based AI

Before this shift, many AI systems were built by writing knowledge directly into the machine.

That worked in small, controlled domains, but it created a serious problem:

humans had to keep encoding the intelligence manually

This became known as the knowledge bottleneck.

The new insight was simple but powerful:

real-world intelligence cannot be fully captured as a fixed rule set

Instead, AI systems needed to:

  • learn from examples
  • handle noisy data
  • model uncertainty
  • adapt when distributions change

That shift is what separates classical symbolic AI from the more scientific AI methodology that followed.


2. What “Scientific AI” Really Means

Calling this phase “scientific” does not just mean it sounded more rigorous. It means the field changed how it asked questions and how it validated results.

The new workflow looked more like this:

  1. define a mathematical model
  2. choose a training objective
  3. optimize parameters on data
  4. evaluate on held-out examples
  5. compare performance quantitatively

That mindset introduced a more disciplined foundation for AI.

Three major pillars became especially important:

  • Neural networks for learning from data
  • Probabilistic models for reasoning under uncertainty
  • Learning theory for understanding generalization

Together, these changed AI from a rule-writing discipline into a model-building discipline.


3. Neural Networks: Learning Patterns from Data

Related topic:

https://zeromathai.com/en/neural-network-en/

Neural networks became important because they replaced one of the biggest limitations of earlier AI:

instead of writing the rules directly, let the model learn a function from data

Basic structure

A neural network usually includes:

  • an input layer
  • one or more hidden layers
  • an output layer

Its behavior is controlled by:

  • weights
  • biases
  • nonlinear transformations

Core idea

The model learns a mapping:

\[
\hat{y} = f(x; \theta)
\]

Where:

  • \(x\) = input
  • \(\theta\) = parameters
  • \(\hat{y}\) = prediction

Training loop

A standard training cycle looks like this:

  1. forward pass
  2. compute loss
  3. backpropagation
  4. parameter update

A common update rule is:

\[
\theta \leftarrow \theta - \eta \nabla_{\theta} L
\]

Where:

  • \(\eta\) = learning rate
  • \(L\) = loss function
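The four-step cycle and update rule above can be sketched in a few lines. This is a toy illustration, not the era's actual systems: a single linear parameter \(\theta\), squared-error loss, plain gradient descent, and made-up noise-free data.

```python
# Toy sketch of the training cycle: a one-parameter linear model
# y_hat = theta * x with squared-error loss and gradient descent.
# The data, learning rate, and epoch count are illustrative assumptions.

def train(data, lr=0.01, epochs=200):
    theta = 0.0
    for _ in range(epochs):
        for x, y in data:
            y_hat = theta * x            # 1. forward pass
            grad = 2 * (y_hat - y) * x   # 2-3. loss and its gradient dL/dtheta
            theta -= lr * grad           # 4. update: theta <- theta - eta * grad
    return theta

data = [(x, 2.0 * x) for x in range(1, 6)]  # noise-free target y = 2x
print(train(data))  # converges toward 2.0
```

On this clean data the loop recovers the true slope; real training differs mainly in the size of \(\theta\) and the shape of \(f\), not in the structure of the loop.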

Why this mattered

Neural networks were useful in domains where rules were too hard to write by hand, such as:

  • image recognition
  • speech recognition
  • pattern classification
  • early machine translation

Developer intuition

A rule-based system says:

  • “If condition A and B hold, output C.”

A neural model says:

  • “Give me examples, define a loss, and I’ll learn parameters that reduce prediction error.”

That is a fundamentally different engineering mindset.

Main limitation

Even in this era, neural networks had obvious weaknesses:

  • hard to interpret
  • sensitive to data quality
  • dependent on optimization behavior
  • vulnerable to distribution shift

So they were powerful, but not magic.


4. Bayesian Networks: Reasoning Under Uncertainty

Related topic:

https://zeromathai.com/en/bayesiannet-en/

While neural networks focused on learning patterns, Bayesian networks focused on modeling uncertainty explicitly.

This mattered because real-world AI rarely operates with perfect information.

Core idea

A Bayesian network is a probabilistic graphical model:

  • nodes represent variables
  • edges represent dependencies
  • the graph is directed and acyclic

Its factorization looks like this:

\[
P(X_1, \dots, X_n) = \prod_{i=1}^{n} P\bigl(X_i \mid \mathrm{Parents}(X_i)\bigr)
\]

This is useful because a complex joint distribution can be decomposed into local conditional dependencies.

Example

Suppose we model:

  • Rain
  • Sprinkler
  • Wet ground

If we observe wet ground, the system can infer:

  • the probability that it rained
  • the probability that the sprinkler was on

This is much more realistic than assuming certainty everywhere.
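For a network this small, that inference can be sketched by brute-force enumeration over the hidden variables. Every probability below is an invented number for illustration; a real network would use learned or expert-elicited tables.

```python
from itertools import product

# Brute-force inference sketch for the Rain / Sprinkler / WetGround
# network. All probability values are invented for illustration.

P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.3, False: 0.7}
P_wet = {  # P(wet ground | rain, sprinkler)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.80, (False, False): 0.05,
}

def posterior_rain_given_wet():
    # Enumerate the joint P(rain, sprinkler, wet=True), then normalize.
    num = den = 0.0
    for rain, sprinkler in product([True, False], repeat=2):
        joint = P_rain[rain] * P_sprinkler[sprinkler] * P_wet[(rain, sprinkler)]
        den += joint
        if rain:
            num += joint
    return num / den

print(round(posterior_rain_given_wet(), 3))
```

With these toy numbers, the prior \(P(\text{rain}) = 0.2\) rises to roughly 0.46 once wet ground is observed: evidence flows backward through the graph.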

Why Bayesian networks mattered

They brought several strengths:

  • uncertainty is explicit
  • dependencies are interpretable
  • inference can be structured
  • causal-style reasoning becomes easier to express

Compared with rule systems, this was a more natural fit for messy real-world data.


5. The Bigger Shift: From Deterministic Logic to Probabilistic Reasoning

One of the deepest changes in this phase was conceptual:

AI moved from deterministic logic toward probabilistic reasoning

That sounds simple, but it changed the field completely.

Earlier style

  • IF condition → THEN result
  • exact symbolic logic
  • assumes clean inputs and fixed rules

New style

  • estimate \(P(\text{outcome} \mid \text{data})\)
  • update beliefs with evidence
  • make the best decision under uncertainty
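The "update beliefs with evidence" step is just Bayes' rule. A minimal sketch, with all numbers invented for illustration (a 1% prior, a signal with 90% sensitivity and a 5% false-positive rate):

```python
# Sketch of updating a belief with evidence via Bayes' rule.
# All numbers are invented illustrations, not real statistics.

prior = 0.01              # P(outcome) before seeing anything
sensitivity = 0.90        # P(evidence | outcome)
false_positive = 0.05     # P(evidence | no outcome)

# Total probability of seeing the evidence at all.
evidence = prior * sensitivity + (1 - prior) * false_positive

# Bayes' rule: P(outcome | evidence).
posterior = prior * sensitivity / evidence
print(round(posterior, 3))  # a 1% prior jumps to roughly 15%
```

The point is not the specific numbers but the workflow: a belief is a probability, and evidence revises it rather than flipping a rule on or off.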

Why this mattered

Real-world data is often:

  • incomplete
  • noisy
  • ambiguous
  • uncertain

Probability gave AI a framework for dealing with that reality instead of pretending it did not exist.

This is one of the main reasons the field became more scalable and more useful.


6. Intelligent Agents as a Unifying Framework

Related topic:

https://zeromathai.com/en/agent-vs-intelligent-agent--en/

During this period, AI systems were also increasingly described as intelligent agents.

That framing helped unify several subfields.

An intelligent agent is a system that:

  • perceives an environment
  • chooses actions
  • pursues goals
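That perceive-decide-act loop can be sketched with a toy environment. The number-line world and the greedy policy here are assumptions for illustration, not a standard formulation.

```python
# Toy sketch of the perceive / decide / act loop of an intelligent agent.
# The environment (a number line) and the greedy policy are illustrative.

def agent_policy(percept, goal):
    # Decide: step toward the goal based on the current percept.
    if percept < goal:
        return +1
    if percept > goal:
        return -1
    return 0

def run_agent(start, goal, max_steps=100):
    state = start
    for step in range(max_steps):
        action = agent_policy(state, goal)  # perceive -> decide
        if action == 0:                     # goal reached
            return state, step
        state += action                     # act: change the environment
    return state, max_steps

print(run_agent(0, 5))  # → (5, 5)
```

Even at this scale, the code separates environment state, perception, policy, and action, which is exactly the decomposition the agent framing encourages.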

Why this framework mattered

It connected:

  • perception
  • reasoning
  • planning
  • learning
  • action

into one structure.

Instead of treating AI as separate topics, the agent view made it easier to describe systems as end-to-end decision-makers operating in environments.

For developers, this is useful because it turns AI into a systems question, not just a model question.


7. Learning Theory: Why Generalization Matters

This period also made the field ask more formal questions about learning itself:

  • When does a model generalize?
  • How much data is enough?
  • What causes overfitting?
  • What is the trade-off between bias and variance?

This was important because AI stopped being only about fitting known examples.

The deeper goal became:

perform well on unseen data

That distinction is one of the most important ideas in all of machine learning.

Key concepts

  • training set vs test set
  • overfitting
  • bias–variance tradeoff
  • generalization error

Without these ideas, model evaluation would remain shallow and unreliable.
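One way to see why the split matters is a model that memorizes: a 1-nearest-neighbour "memorizer" scores perfectly on its own training set but not on held-out points. The synthetic data (y = x plus Gaussian noise) is an assumption chosen purely for illustration.

```python
import random

# Sketch of the train/test distinction: a memorizing model gets zero
# training error but nonzero error on held-out points.
# The synthetic data (y = x + noise) is an illustrative assumption.

random.seed(0)
xs = [random.uniform(0, 10) for _ in range(40)]
points = [(x, x + random.gauss(0, 1.0)) for x in xs]
train_set, test_set = points[:30], points[30:]

def predict_1nn(x, memory):
    # "Memorizer": return the label of the nearest stored example.
    return min(memory, key=lambda p: abs(p[0] - x))[1]

def mse(dataset, memory):
    return sum((predict_1nn(x, memory) - y) ** 2 for x, y in dataset) / len(dataset)

train_error = mse(train_set, train_set)  # each point retrieves itself: 0.0
test_error = mse(test_set, train_set)    # unseen points expose the noise
print(train_error, test_error)
```

The gap between the two numbers is generalization error in miniature: fitting the training set is easy, and the held-out set is what makes the evaluation honest.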


8. Optimization Became the Engine of AI

Related topic:

https://zeromathai.com/en/optimization-concept-en/

As this new methodology matured, a powerful common pattern became clearer:

many AI problems can be written as optimization problems

That applies across:

  • neural networks
  • probabilistic models
  • machine learning algorithms

General pattern

  1. define a model
  2. define a loss or objective
  3. optimize parameters
  4. evaluate results
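The four steps above, sketched on the simplest possible case: the "model" is a single number \(\theta\), the objective is \((\theta - 3)^2\), and the optimizer is plain gradient descent. All values are illustrative.

```python
# The general pattern in miniature:
# 1. model: a single parameter theta
# 2. objective: loss(theta) = (theta - 3)^2, minimized at theta = 3
# 3. optimizer: plain gradient descent
# 4. evaluation: check the final loss

def loss(theta):
    return (theta - 3.0) ** 2

def grad(theta):
    return 2.0 * (theta - 3.0)

theta, lr = 0.0, 0.1
for _ in range(100):
    theta -= lr * grad(theta)   # 3. optimize parameters

print(round(theta, 4), loss(theta))  # 4. evaluate: theta is ~3.0
```

Swap in a neural network for the model and a data-dependent loss for the objective, and the skeleton is unchanged; that is precisely the unification the section describes.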

This perspective unified many previously separate methods.

Why this mattered

Optimization became the practical engine that connected theory to implementation.

In many systems, intelligence increasingly looked like this:

prediction + objective function + optimization loop

That is a very different picture from classical symbolic AI.


9. Old AI vs Scientific AI

A direct comparison helps make the transition clearer.

| Aspect | Old AI (Symbolic) | Scientific AI |
|---|---|---|
| Knowledge | Hand-coded | Learned from data |
| Reasoning | Logical and explicit | Statistical and probabilistic |
| Adaptability | Low | Higher |
| Scalability | Weak | Stronger |
| Data usage | Minimal | Essential |

This table is simplified, but it captures the broad movement.

Earlier AI tried to encode intelligence directly.

Scientific AI tried to model and learn intelligence from data.

That change made modern machine learning possible.


10. Why This Phase Changed Everything

This phase laid the groundwork for later breakthroughs in:

  • machine learning
  • deep learning
  • large-scale predictive systems
  • modern data-driven AI

Without this transition:

  • many computer vision systems would have remained brittle
  • modern recommendation systems would be far weaker
  • neural sequence models would have struggled to emerge
  • LLMs would not have a usable training paradigm behind them

The biggest change was not just technical. It was conceptual.

Hidden insight

Earlier AI often tried to be intelligent through explicit logic.

This era increasingly focused on predicting well under uncertainty.

That was one of the most important redefinitions in the history of AI.


11. What This Phase Still Could Not Fully Solve

Even with all this progress, the new methodology introduced its own limitations.

Neural networks

  • difficult to interpret
  • sensitive to data shifts
  • performance can depend heavily on tuning

Probabilistic models

  • can become computationally complex
  • require modeling assumptions that may be unrealistic
  • inference may become expensive at scale

Learning in general

  • good performance still depends on data quality
  • evaluation can be misleading if benchmarks are weak
  • generalization is never guaranteed for free

So this period solved many problems, but it also introduced the modern trade-offs we still live with.


A Simple Mental Model for 1990–2010

If you want a compressed summary of this era, think of it like this:

rules → probability → learning → optimization

That captures the broad direction of the field.

  • symbolic AI emphasized explicit rules
  • probabilistic AI modeled uncertainty
  • machine learning emphasized learning from examples
  • optimization became the engine connecting model design and performance

This is why the period feels like a real methodological reboot.


Key Takeaways

  • this phase moved AI away from brittle hand-written rule systems
  • neural networks made learning from data central
  • Bayesian networks made uncertainty a first-class part of reasoning
  • probabilistic thinking replaced purely deterministic logic in many settings
  • learning theory made generalization a formal concern
  • optimization became the common engine behind many AI methods
  • modern machine learning and deep learning are built on foundations set during this period

Conclusion

The scientific methodology phase of AI, roughly 1990 to 2010, was the period when the field became much more mathematical, empirical, and data-driven. Instead of treating intelligence as something that had to be encoded manually, researchers increasingly treated it as something that could be learned, optimized, and evaluated under uncertainty.

That shift changed the field permanently.

Neural networks, probabilistic models, intelligent agents, learning theory, and optimization did not just improve AI techniques. Together, they changed what AI was understood to be.

I’m curious how others think about this transition. Was this the moment AI became a real engineering discipline, or do you see it as a gradual extension of the older symbolic tradition?
