I don't classify anything of what our computers do now as AI. It's all rather sophisticated statistics, and some really good algorithms, but isn't really intelligent.
I think I'm holding out for something that:

- works across domains without domain-specific programming,
- adapts to novel scenarios, and
- does so instantly, without retraining.
Most of our current "AI" seems to fail at all of these criteria. Things like AlphaGo, or classic video game "AIs", are highly specific; they often can't adapt even to similar styles of game without reprogramming. Most of them require training and simulation, so they don't adapt well to novel scenarios -- this is a point of concern for automated cars, which face a potentially infinite variety of weird road activity. The final point, about instant adaptation, fails outright, since most of our "AI" requires training and/or long calculation times, whereas a rational, thinking, intelligent being can leap to conclusions.
I think for your first point, you meant to say "without domain-specific programming." For your last two points, I don't quite agree. Human beings don't arrive at solutions instantly and finding solutions in "novel scenarios" is a matter of degree: After all, any human invention you care to look at is always born in the context of existing experience and knowledge. Both in science and in art, there is always a kind of evolution that happens, with creative people responding in some way to the state of the art up to that point in time.
(Fixed first bit)
I was unsure of the wording on the last two points. By "instant" I mean without needing to reprogram the system or invoke new training. For example, when playing a video game, a human can apply previous knowledge to new levels and get past a level the first time they encounter it. The AIs so far don't really achieve this -- they can't reapply previous knowledge well, and they don't make abstractions and logical judgements.
This applies to novel situations as well. A human can encounter a room full of completely new objects and, based on affordances and constraints, work out what they might do. A statistically trained machine, as seen so far, would not be able to do this, and would not be able to figure out these novel items.
I don't know if it's the same thing you're getting at, but one thing that I think is missing from well-known machine learning approaches is "meta-cognition." As human beings, we have the awareness that we don't know something, and we can take steps ourselves to learn more about any given subject. I don't know how much progress there's been in this area for AI, but I suspect there is a pretty wide divide between current approaches and this kind of learning.
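One crude stand-in for "knowing that you don't know" that does exist in practice is uncertainty-based abstention: a classifier can refuse to answer when its predicted distribution is too flat. This is only a toy sketch of that idea (the function names and the entropy threshold are made up for illustration), not a claim that it amounts to real meta-cognition:

```python
import math

def softmax(scores):
    # Convert raw class scores into probabilities.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    # Shannon entropy in bits; a flat distribution has high entropy.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def predict_or_abstain(scores, max_entropy=0.5):
    # Hypothetical helper: answer only when the prediction is
    # confident, otherwise abstain ("I don't know").
    probs = softmax(scores)
    if entropy(probs) > max_entropy:
        return None  # abstain on uncertain / novel-looking input
    return probs.index(max(probs))

# One score dominates: confident prediction of class 0.
print(predict_or_abstain([8.0, 0.1, 0.2]))  # → 0
# Flat scores, as a novel input might produce: the system abstains.
print(predict_or_abstain([1.0, 1.1, 0.9]))  # → None
```

Of course, this only flags uncertainty; it doesn't let the system go off and learn what it is missing, which is the harder part of the divide described above.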