What is and is not "artificial intelligence"?

Ben Halpern on May 14, 2018

This term gets thrown around a lot, but what do people mean when they say this?

Hafidz Jazuli Luthfi • Edited

I prefer the popular movie The Imitation Game as a framework for defining artificial intelligence, since most computer science students know that Alan Turing is the father of AI and that movie is so well known.

Let:

H is a human, an intelligent body.

H.I is quantified human intelligence.

M is a machine, an entity that has no intelligence of its own.

M.I is the machine's artificial attribute, such that it has its own intelligence.

Then:

F(.) is an implemented function that imitates H.I, such that the function for walking is F(H.I.walk).

L(..) is an aggregation function that accepts functions as arguments.

Assume:

An intelligence is an aggregation of decisions, such that being able to walk is a set of conscious decisions about walking: H.I.walk = {start, velocity, turn, avoid, ...}

H.I always improves over time, so a new H.I can be produced using L(..) to improve intelligence, regardless of which specific skill needs improving: H.I = L(H.I)

Then:

A mechanical intelligence is any function F(.) that produces a discrete imitation of H.I, such that a machine is made able to walk: M.I.walk = F(H.I.walk)

An artificial intelligence is any aggregation function L(F(.)) that produces a continuous imitation of H.I, such that a machine is made able to learn to walk: M.I.walk = L(F(walk)), where walk = F(H.I.walk).
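
Read charitably, F and L are just higher-order functions. Here is a toy Python sketch of that reading (the names human_walk, machine_walk, and the experiences dictionary are my own hypothetical additions, not part of the comment):

```python
# Toy illustration of the notation above (all names are hypothetical):
# F(.)  takes a human skill and returns a fixed, "discrete" machine imitation.
# L(..) takes such an imitation and returns one that keeps refining itself from
#       experience, i.e. the "continuous" imitation called artificial intelligence.

def F(human_skill):
    """Mechanical intelligence: a fixed copy of one human skill."""
    def machine_skill(situation):
        return human_skill(situation)  # imitates, never improves
    return machine_skill

def L(machine_skill, experiences):
    """Artificial intelligence: refine the imitation with the machine's own experience."""
    def improved_skill(situation):
        # A real system would generalize; this toy just prefers its own recorded experience.
        return experiences.get(situation, machine_skill(situation))
    return improved_skill

def human_walk(situation):
    return {"start": True, "velocity": 1.0, "turn": situation == "corner"}

# M.I.walk = L(F(H.I.walk))
machine_walk = L(F(human_walk),
                 experiences={"stairs": {"start": True, "velocity": 0.3, "turn": False}})
print(machine_walk("corner"))  # imitated from the human
print(machine_walk("stairs"))  # refined from the machine's own experience
```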

Nested Software • Edited

AI is kind of a "loaded" term. It's clear that none of the AI or machine learning systems we have today demonstrates the kind of self-aware intelligence that human beings are capable of. However, AI is definitely getting better at solving problems that involve recognizing patterns and finding algorithms that are not hand-coded into the system by a human programmer.

In many cases an AI algorithm becomes better at solving problems in a specific domain than the top human experts. AlphaZero is a great example of this. It learned to play Chess, Go, and Shogi better than any human, on its own, without using any heuristics or human games. The thing to note here is that the AI in this case develops an exquisitely precise positional analysis; in other words, it is able to make the kinds of judgements that are characteristic of human intuition.
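
To make "on its own, without human games" concrete, here is a deliberately tiny self-play-style loop in Python. It is only a caricature under made-up assumptions (a toy game and a hand-rolled update rule, nothing like AlphaZero's actual search-plus-network method); it just shows a program generating all of its own training data and nudging its policy toward whatever won.

```python
import random

# Toy "learn only from self-generated games" loop (hypothetical game and
# update rule, for illustration only).

def play_game(policy):
    """Play one 10-move toy game and return its moves and outcome."""
    moves = [policy() for _ in range(10)]
    outcome = 1 if sum(moves) > 5 else -1  # arbitrary win condition
    return moves, outcome

def train(iterations=2000):
    weight = 0.5  # probability of playing move "1"
    for _ in range(iterations):
        policy = lambda: 1 if random.random() < weight else 0
        moves, outcome = play_game(policy)
        # Move the policy toward behavior that won, away from behavior that lost.
        weight += 0.01 * outcome * (sum(moves) / len(moves) - weight)
        weight = min(max(weight, 0.0), 1.0)
    return weight

print(train())  # drifts toward 1.0 using nothing but its own games
```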

For reference, the paper The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities is really interesting. Also, see DeepMind's paper about AlphaZero, Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm.

I also wrote an article here called AlphaGo: Observations about Machine Intelligence.

Dustin King • Edited

One of my instructors (early '00s) defined it as either:

  • Making machines act/think rationally
  • Making machines act/think like a human

I would add:

  • Making machines do what humans want without humans having to babysit/spoonfeed the machines

The Google Assistant demo appears to be a perfect demonstration of this: the customer just had to say "make me a hair appointment at x time", and the receptionist didn't need to know any special information about how to interact with the bot. She just talked to it like a human. I didn't hear either party have to specially modulate their voice to be understood by the bot, or work around its bugs. Obviously it was a short and specially crafted demo, so it's hard to say how well it will work in practice.

Lately it pretty much seems to mean machine learning. First-generation AI involved more intentionally programmed intelligence, e.g. a database of facts and rules for reasoning from them.
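
For contrast with today's learned models, here is a rough sketch of that older "facts and rules" style in Python (a hypothetical toy forward-chaining rule engine, not any particular expert system): it keeps applying rules until no new facts can be derived.

```python
# Toy forward-chaining "facts and rules" engine (hypothetical example).

facts = {"socrates_is_human"}

rules = [
    # (premises, conclusion)
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        # Fire a rule when all of its premises are already known facts.
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now also contains the two derived facts
```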

Robert Miles (whose videos mostly focus on AI Safety) defined an Artificial General Intelligence as a machine that can do anything at least as well as a human.

edA‑qa mort‑ora‑y • Edited

I don't classify anything our computers do now as AI. It's all rather sophisticated statistics and some really good algorithms, but it isn't really intelligence.

I think I'm holding out for something that:

  • learns on its own without domain-specific programming
  • finds solutions in novel scenarios
  • arrives at those solutions instantly

Most of our current "AI" fails all of these criteria. Things like AlphaGo, or classic video game "AIs", are highly specific. They often can't even adapt to similar styles of game without reprogramming. Most of them require training and simulation and thus don't adapt well to novel scenarios -- this is a point of concern for automated cars, which face a potentially infinite variety of weird road activity. The final point, arriving at solutions instantly, also fails, since most of our "AI" requires training and/or long calculation times, whereas a rational, thinking, intelligent being can leap to conclusions.

Nested Software

I think for your first point, you meant to say "without domain-specific programming." For your last two points, I don't quite agree. Human beings don't arrive at solutions instantly, and finding solutions in "novel scenarios" is a matter of degree: after all, any human invention you care to look at is born in the context of existing experience and knowledge. Both in science and in art there is always a kind of evolution, with creative people responding in some way to the state of the art up to that point in time.

edA‑qa mort‑ora‑y

(Fixed first bit)

I was unsure of the wording on the second two points. By "instant" I mean without needing to reprogram the system or run new training. For example, when playing a video game, a human can apply previous knowledge to new levels of the game, allowing them to get past a level the first time they encounter it. The AIs so far don't really achieve this -- they can't reapply previous knowledge well, and they don't make abstractions and logical judgements.

This applies to novel situations as well. A human can walk into a room full of completely new objects and, based on affordances and constraints, determine what they might do. A statistically programmed machine, as seen so far, would not be able to do this; it would not be able to figure out these novel items.

Nested Software • Edited

I don't know if it's the same thing you're getting at, but one thing that I think is missing from well-known machine learning approaches is "meta-cognition." As human beings, we have the awareness that we don't know something, and we can take steps ourselves to learn more about any given subject. I don't know how much progress there has been in this area for AI, but I suspect there is a pretty wide divide between current approaches and this kind of learning.

Evan Oman • Edited

It is a slippery thing to define, as captured in the popular quote:

A problem that proponents of AI regularly face is this: When we know how a machine does something 'intelligent,' it ceases to be regarded as intelligent. If I beat the world's chess champion, I'd be regarded as highly bright.

-- Fred Reed

Generally this is called the AI Effect.

goyder

I think Douglas Hofstadter puts it best: "AI is whatever hasn't been done yet."

Jarod Smith

I think non-technical people use A.I. to describe systems which make decisions or calculations that people would ordinarily make. Technical types are usually referring to deep-learning systems nowadays. If you are looking for an accurate definition, it's hard, since the term is nearly redundant: artificial means human-made, and intelligence is the ability to adapt solutions to new problems.

The problem with this is that it's also the definition of normal human problem-solving. As such, I think machine intelligence is more accurate: it describes the ability of a computer to adapt solutions to new problem sets. Then there are degrees of generality, which describe how abstractly the machine can "think":

  • ANI, or artificial narrow intelligence: the ability to solve a narrow set of problems with some set of experience
  • AGI, or artificial general intelligence: the ability to solve a wide set of problems with a set of experience
  • ASI, or artificial superintelligence: a term used to describe a system that can solve a set of problems outside the domain of humans (i.e. smarter than people)
Andrew Lucker

"the theory and development of computer systems able to perform tasks that normally require human intelligence"

So: whatever humans thought made them special, but machines turn out to be able to do. Pop tech with a high creepiness factor, maybe?

atlasobscura.com/articles/the-vode...

Kasey Speakman • Edited

What science fiction means by it is Skynet or The Matrix -- a singularity where machines suddenly switch over to being sentient life. What engineers (read: marketing) mean when talking about their AI product is: fuzzy pattern matching.

Bonus: Machine Learning is fuzzy pattern matching based on prior data.
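
In that spirit, a minimal Python example of "fuzzy pattern matching based on prior data" (a hypothetical nearest-neighbour toy with made-up data, not how any particular product works): new inputs are labelled by whichever prior example they most resemble.

```python
# Classify a new point by the closest example in the prior data.

prior_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((5.0, 5.2), "dog"),
    ((4.8, 5.1), "dog"),
]

def classify(point):
    """Return the label of the nearest previously seen example."""
    def squared_distance(example):
        (x, y), _label = example
        return (x - point[0]) ** 2 + (y - point[1]) ** 2
    return min(prior_data, key=squared_distance)[1]

print(classify((1.1, 1.1)))  # "cat" -- the nearest prior pattern
print(classify((5.5, 5.0)))  # "dog"
```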

Usama Ashraf

I think we have to be careful when defining terms, especially when the only frame of reference we have is so ill-defined. In this case that's human intelligence. And I could be wrong, but in my experience terms and topics which get thrown around the most or talked about incessantly are the ones we know the least about.

It's also important to mention Turing here, who wrote in his famous 1950 "Imitation Game" paper that the question of whether machines think is "too meaningless" to answer. If someone asks you whether submarines swim, well, that's sort of like that. You want to call that swimming? Fine. We generally just happen to define swimming as an innate animal trait. Noam Chomsky, among others, has talked about this as well.

Aswath KNM

Google Duplex and self-driving cars are AI.

Terminator is not.

Peter Benjamin (they/them)

Coincidentally, I just listened to this podcast interview.

youtu.be/3W7jibj1X_8

Graham Lyons

I saw this article recently, which helped clarify my definitions of the terms involved: datasciencecentral.com/profiles/bl...