What is and is not "artificial intelligence"?

This term gets thrown around a lot, but what do people mean when they say this?


I prefer the popular movie The Imitation Game as a framework for defining artificial intelligence, since most computer science students know Alan Turing as the father of AI, and the movie is so well known.


H is a human, an intelligent entity.

H.I is the human's quantified intelligence.

M is a machine, an entity that has no intelligence of its own.

M.I is the machine's artificial attribute: its own intelligence.


F(.) is an implemented function that imitates H.I; for example, the function for walking is F(H.I.walk).

L(..) is an aggregation function that accepts functions as arguments.


An intelligence is an aggregation of decisions; for example, being able to walk is a set of conscious decisions involved in walking: H.I.walk = {start, velocity, turn, avoid, ...}

H.I always improves over time, so a new H.I can be produced using L(..), improving intelligence regardless of which specific skill needs to improve: H.I = L(H.I)


A mechanical intelligence is any function F(.) that produces a discrete imitation of H.I; for example, making a machine able to walk: M.I.walk = F(H.I.walk)

An artificial intelligence is any aggregation of functions L(F(.)) that produces a continuous imitation of H.I; for example, making a machine able to learn to walk: M.I.walk = L(F(walk)), where walk = F(H.I.walk).
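The F(.) versus L(F(.)) distinction above can be sketched in code. This is only an illustration of the framework, not a real implementation; every name here (the skill table, the feedback parameter) is a hypothetical placeholder:

```python
# Sketch of the framework above. F(.) is a fixed, discrete imitation of one
# human skill; L(..) aggregates such functions. All names are hypothetical.

def F(human_skill):
    """Mechanical intelligence: replay recorded human decisions verbatim."""
    def imitation(situation):
        # Look up the decision the human made in this situation, e.g. one of
        # {start, velocity, turn, avoid, ...}; no adaptation happens here.
        return human_skill.get(situation, "no response")
    return imitation

def L(*functions):
    """Artificial intelligence: aggregate functions into a new capability."""
    def aggregate(situation):
        # A real L(..) would also improve its component functions over time;
        # this sketch only collects their outputs.
        return [f(situation) for f in functions]
    return aggregate

walk = F({"obstacle": "avoid", "open path": "start"})
machine_walk = L(walk)
print(machine_walk("obstacle"))  # ['avoid']
```

The point of the sketch is only the shape of the definitions: F(.) maps one situation to one canned decision, while L(..) is a higher-order function that takes functions as arguments, matching the notation L(F(walk)).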

I don't classify anything our computers do now as AI. It's all rather sophisticated statistics and some really good algorithms, but it isn't really intelligence.

I think I'm holding out for something that:

  • learns on its own, without domain-specific programming
  • finds solutions in novel scenarios
  • arrives at those solutions instantly

Most of our current "AI" fails all of these criteria. Things like AlphaGo, or classic video game "AIs", are highly specific; they often can't adapt even to similar games without reprogramming. Most of them require training and simulation and thus don't adapt well to novel scenarios -- this is a point of concern for automated cars, which face a potentially infinite variety of weird road activity. The final point, about instant solutions, also fails, since most of our "AI" requires training and/or long calculation times, whereas a rational/thinking/intelligent being can leap to conclusions.

I think for your first point, you meant to say "without domain-specific programming." For your last two points, I don't quite agree. Human beings don't arrive at solutions instantly and finding solutions in "novel scenarios" is a matter of degree: After all, any human invention you care to look at is always born in the context of existing experience and knowledge. Both in science and in art, there is always a kind of evolution that happens, with creative people responding in some way to the state of the art up to that point in time.

(Fixed first bit)

I was unsure of the wording on the second two points. By "instant" I mean without needing to reprogram the system or invoke new training. For example, playing a video game, a human can apply previous knowledge to new levels of the game, allowing them to get by a level the first time they encounter it. The AIs so far don't really achieve this -- they can't reapply previous knowledge well, and they don't make abstractions or logical judgements.

This applies to novel situations as well. A human can encounter a room full of completely new objects and, based on affordances and constraints, determine what they might do. A statistically programmed machine, as seen so far, would not be able to do this and would not be able to figure out these novel items.

I don't know if it's the same thing you're getting at, but one thing I think is missing from well-known machine learning approaches is "meta-cognition." As human beings, we have the awareness that we don't know something, and we can take steps ourselves to learn more about any given subject. I don't know how much progress there's been in this area for AI. I suspect there is a pretty wide divide between current approaches and this kind of learning, though.

One of my instructors (early '00s) defined it as either:

  • Making machines act/think rationally
  • Making machines act/think like a human

I would add:

  • Making machines do what humans want without humans having to babysit/spoonfeed the machines

The Google Assistant demo appears to be a perfect demonstration of this: the customer just had to say "make me a hair appointment at x time," and the receptionist didn't need to know any special information about how to interact with the bot. She just talked to it like a human. I didn't hear either party have to specially modulate their voice to be understood by the bot, or work around its bugs. Obviously it was a short and specially crafted demo, so it's hard to say how well it will work in practice.

Lately it pretty much seems to mean Machine Learning. First generation AI involved more intentionally programmed intelligence, e.g. a database of facts and rules for reasoning from them.
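That first-generation style can be illustrated with a tiny forward-chaining inference loop over a fact database. The facts and rules below are invented examples, and this is a deliberately minimal sketch of the idea, not any particular system:

```python
# Minimal sketch of first-generation, rule-based AI: a database of facts
# plus rules for reasoning from them (forward chaining). Facts and rules
# here are made-up illustrations.

facts = {"has_feathers", "lays_eggs"}
rules = [
    ({"has_feathers"}, "is_bird"),            # if all premises hold,
    ({"is_bird", "lays_eggs"}, "builds_nest"), # add the conclusion
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        # Fire a rule when its premises are all known and the
        # conclusion is new; repeat until nothing more can be derived.
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```

Nothing here is learned from data; all the "intelligence" was intentionally programmed in as rules, which is exactly the contrast with machine learning drawn above.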

Robert Miles (whose videos mostly focus on AI Safety) defined an Artificial General Intelligence as a machine that can do anything at least as well as a human.



It is a slippery thing to define, as captured in the popular quote:

"A problem that proponents of AI regularly face is this: When we know how a machine does something 'intelligent,' it ceases to be regarded as intelligent. If I beat the world's chess champion, I'd be regarded as highly bright." (Fred Reed)

Generally this is called the AI Effect.

I think Douglas Hofstadter puts it best: "AI is whatever hasn't been done yet."

AI is kind of a "loaded" term. It's clear that none of the AI or machine learning systems we have today demonstrates the kind of self-aware intelligence that human beings are capable of. However, AI is definitely getting better at solving problems that involve recognizing patterns and finding algorithms that are not hand-coded into the system by a human programmer.

In many cases an AI algorithm becomes better at solving problems in a specific domain than the top human experts. AlphaZero is a great example of this. It learned to play chess, Go, and shogi better than any human, on its own, without using any heuristics or human games. The thing to note here is that the AI in this case develops an exquisitely precise positional analysis. In other words, it is able to make the kinds of judgements that are characteristic of human intuition.

For reference, the paper The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities is really interesting. Also, see DeepMind's paper about AlphaZero, Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm.

I also wrote an article here called AlphaGo: Observations about Machine Intelligence.

"the theory and development of computer systems able to perform tasks that normally require human intelligence"

So: whatever humans thought made them special, but then discover that machines can do. Pop tech with a high creepiness factor, maybe?


What science fiction means by it is Skynet or The Matrix -- a singularity where machines suddenly switch over to being sentient life. What engineers' marketing means when talking about their AI product is: fuzzy pattern matching.

Bonus: Machine Learning is fuzzy pattern matching based on prior data.
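"Fuzzy pattern matching based on prior data" can be made concrete with the simplest possible example: 1-nearest-neighbor classification. The data points and labels below are invented for illustration:

```python
# Toy illustration of "fuzzy pattern matching based on prior data":
# classify a new point by the label of its closest prior example.
# All data and labels here are invented.

prior_data = [
    ((1.0, 1.0), "cat"),
    ((5.0, 5.0), "dog"),
    ((1.2, 0.8), "cat"),
]

def classify(point):
    def sq_dist(a, b):
        # squared Euclidean distance; no sqrt needed for comparison
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    nearest = min(prior_data, key=lambda item: sq_dist(item[0], point))
    return nearest[1]

print(classify((0.9, 1.1)))  # cat
```

The match is "fuzzy" in that a point never seen before still gets an answer, but the answer is entirely determined by the prior data; there is no reasoning beyond proximity.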

I think we have to be careful when defining terms, especially when the only frame of reference we have is so ill-defined. In this case that's human intelligence. And I could be wrong, but in my experience terms and topics which get thrown around the most or talked about incessantly are the ones we know the least about.

It's also important to mention Turing here, who wrote in his famous 1950 "Imitation Game" paper that the question of whether machines think is "too meaningless" to answer. If someone asks you whether submarines swim, well, it's that sort of question. You want to call it swimming? Fine. We generally just happen to define swimming as an innate animal trait. Noam Chomsky, among others, has made this point as well.

What people currently call artificial intelligence is really another way of saying statistical modelling. We still don't have a proper definition of intelligence let alone artificial intelligence.

I saw this article recently, which helped my definitions of the terms involved: datasciencecentral.com/profiles/bl...

Coincidentally, I just listened to this podcast interview.


Google Duplex and self-driving cars are AI

Terminator is not
