This term gets thrown around a lot, but what do people mean when they say this?
Top comments (16)
I prefer the popular movie The Imitation Game as a framework for defining artificial intelligence, since most computer science students know that Alan Turing is the father of AI and the movie is so well known.
AI is kind of a "loaded" term. It's clear that none of the AI or machine learning systems we have today demonstrates the kind of self-aware intelligence that human beings are capable of. However, AI is definitely getting better at solving problems that involve recognizing patterns and finding algorithms that are not hand-coded into the system by a human programmer.
In many cases an AI system becomes better than the top human experts at solving problems in specific domains. AlphaZero is a great example of this. It learned to play Chess, Go, and Shogi better than any human, on its own, without using any heuristics or human games. The thing to note here is that the AI in this case develops an exquisitely precise positional analysis. In other words, it is able to make the kinds of judgements that are characteristic of human intuition.
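To make the self-play idea concrete, here is a minimal toy sketch (my own illustrative game, values, and update rule, not DeepMind's actual method, which combines self-play with deep networks and Monte Carlo tree search). It learns to play Nim purely from the outcomes of its own games, with no hand-coded heuristics:

```python
import random
from collections import defaultdict

# Toy self-play learner for Nim: players alternately take 1-3 stones from a
# pile; whoever takes the last stone wins. Only the "learn from your own
# games, no human heuristics" idea is kept here.

values = defaultdict(float)   # estimated value of a pile size for the player to move
ALPHA, EPSILON = 0.1, 0.2     # learning rate and exploration rate (arbitrary choices)

def best_move(pile):
    # Choose the move that leaves the opponent in the worst position.
    legal = [m for m in (1, 2, 3) if m <= pile]
    return min(legal, key=lambda m: values[pile - m])

def play_one_game():
    pile, history = random.randint(5, 20), []
    while pile > 0:
        legal = [m for m in (1, 2, 3) if m <= pile]
        move = random.choice(legal) if random.random() < EPSILON else best_move(pile)
        history.append(pile)
        pile -= move
    # The player who moved last won; walk backwards through the game,
    # flipping the result's sign for the player to move at each state.
    result = 1.0
    for state in reversed(history):
        values[state] += ALPHA * (result - values[state])
        result = -result

for _ in range(20000):
    play_one_game()

print(best_move(7))   # after enough games this tends to print 3 (leave a multiple of 4)
```

Stripped of the neural network and tree search, this is the same loop: play against yourself, score the finished game, nudge your position evaluations toward the outcome, and let the improved evaluations produce better games.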
For reference, the paper The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities is really interesting. Also, see DeepMind's paper about AlphaZero, Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm.
I also wrote an article here called AlphaGo: Observations about Machine Intelligence.
One of my instructors (early '00s) defined it as either:
I would add:
The Google Assistant demo appears to be a perfect demonstration of this: the customer just had to say "make me a hair appointment at x time", and the receptionist didn't need to know any special information about how to interact with the bot. She just talked to it like a human. I didn't hear either party have to specially modulate their voice to be understood by the bot, or work around its bugs. Obviously it was a short and specially crafted demo, so it's hard to say how well it will work in practice.
Lately it pretty much seems to mean Machine Learning. First generation AI involved more intentionally programmed intelligence, e.g. a database of facts and rules for reasoning from them.
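As a rough sketch of what that older, rule-based style looks like (the facts and the single rule below are invented purely for illustration):

```python
# A toy version of "first generation" AI: an explicit fact base plus
# hand-written rules, applied by forward chaining until nothing new follows.

facts = {("socrates", "is", "human")}
rules = [
    # if ?x is human then ?x is mortal
    (("?x", "is", "human"), ("?x", "is", "mortal")),
]

def forward_chain(facts, rules):
    changed = True
    while changed:
        changed = False
        for (cs, cp, co), (hs, hp, ho) in rules:   # condition, conclusion
            for (fs, fp, fo) in list(facts):
                if (cp, co) == (fp, fo):           # condition matches a known fact
                    subject = fs if hs == "?x" else hs
                    new_fact = (subject, hp, ho)
                    if new_fact not in facts:
                        facts.add(new_fact)
                        changed = True
    return facts

print(forward_chain(facts, rules))
# {('socrates', 'is', 'human'), ('socrates', 'is', 'mortal')}
```

Every piece of "intelligence" here was typed in by a programmer; nothing is learned from data, which is the main contrast with the machine-learning systems the term usually refers to now.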
Robert Miles (whose videos mostly focus on AI Safety) defined an Artificial General Intelligence as a machine that can do anything at least as well as a human.
I don't classify anything our computers do now as AI. It's all rather sophisticated statistics and some really good algorithms, but it isn't really intelligent.
I think I'm holding out for something that:
Most of our current "AI" seems to fail all of these criteria. Things like AlphaGo, or classic video game "AIs", are highly specific. They often can't even adapt to similar styles of game without reprogramming. Most of them require training and simulation and thus don't adapt well to novel scenarios -- this is a point of concern for automated cars, which face a potentially infinite variety of weird road activity. The final point, about reaching conclusions instantly, also fails, since most of our "AI" requires training and/or long calculation times, whereas a rational/thinking/intelligent being can leap to conclusions.
I think for your first point, you meant to say "without domain-specific programming." For your last two points, I don't quite agree. Human beings don't arrive at solutions instantly and finding solutions in "novel scenarios" is a matter of degree: After all, any human invention you care to look at is always born in the context of existing experience and knowledge. Both in science and in art, there is always a kind of evolution that happens, with creative people responding in some way to the state of the art up to that point in time.
(Fixed first bit)
I was unsure of the wording on the second two points. By "instant" I mean without needing to reprogram the system or invoke new training. For example, when playing a video game, a human can apply previous knowledge to new levels of the game, allowing them to get past a level the first time they encounter it. The AIs so far don't really achieve this -- they can't reapply previous knowledge well, and they don't make abstractions and logical judgements.
This applies to novel situations as well. A human can encounter a room full of completely new objects and, based on affordances and constraints, determine what they might do. A statistically programmed machine, as seen so far, would not be able to do this and would not be able to figure out these novel items.
I don't know if it's the same thing you're getting at, but one thing that I think is missing from well-known machine learning approaches is "meta-cognition." As human beings, we have the awareness that we don't know something, and we can take steps ourselves to learn more about any given subject. I don't know how much progress there's been in this area for AI. I suspect there is a pretty wide divide between current approaches and this kind of learning though.
It is a slippery thing to define, as captured in the popular quote:
Generally this is called the AI Effect
I think Douglas Hofstadter puts it best: "AI is whatever hasn't been done yet."
I think non-technical people use A.I. to describe systems which make decisions or calculations that people would ordinarily make. Technical types are usually referring to deep-learning systems nowadays. I think an accurate definition is hard to pin down, since the term is redundant: artificial means human-made, and intelligence is the ability to adapt solutions to new problems.
The problem with this is that it's also the definition of normal human problem-solving. As such, I think machine intelligence is more accurate: it describes the ability of a computer to adapt solutions to new problem sets. Then there are degrees of generality, which describe how abstractly the machine can "think".
"the theory and development of computer systems able to perform tasks that normally require human intelligence"
So: whatever humans thought made them special, but now find that machines can do. Pop tech with a high creepiness factor, maybe?
atlasobscura.com/articles/the-vode...
What science fiction means by it is Skynet or The Matrix -- a singularity where machines suddenly switch over to being sentient life. What engineers and marketing mean when talking about their AI product is: fuzzy pattern matching.
Bonus: Machine Learning is fuzzy pattern matching based on prior data.
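In its most stripped-down form, that "fuzzy pattern matching based on prior data" can be as simple as a nearest-neighbour lookup (the data points below are made up for illustration):

```python
# Prior data: a handful of labelled examples, each a (feature vector, label) pair.
prior_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((5.0, 5.2), "dog"),
    ((4.8, 5.1), "dog"),
]

def classify(point):
    # Find the stored example most similar to the new one and reuse its label.
    def distance(example):
        (x, y), _ = example
        return (x - point[0]) ** 2 + (y - point[1]) ** 2
    return min(prior_data, key=distance)[1]

print(classify((1.1, 1.0)))  # -> "cat"
print(classify((5.5, 4.9)))  # -> "dog"
```

More elaborate models blur and compress the matching, but "find what this most resembles in the data I was given" is still the heart of it.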
I think we have to be careful when defining terms, especially when the only frame of reference we have is so ill-defined. In this case that's human intelligence. And I could be wrong, but in my experience terms and topics which get thrown around the most or talked about incessantly are the ones we know the least about.
It's also important to mention Turing here, who wrote in his famous 1950 paper "Computing Machinery and Intelligence" (the one that introduced the imitation game) that the question of whether machines think is "too meaningless" to answer. If someone asks you whether submarines swim, well, that's sort of like that. You want to call that swimming? Fine. We generally just happen to define swimming as an innate animal trait. Noam Chomsky, among others, has talked about this as well.
Google Duplex and self-driving cars are AI
Terminator is not