Saturday, March 19, 2011

Why AI Needs to be an Artist

There have been some recent comments suggesting that if we could just build an AI that sees every possible scenario, trying every path through the maze of possible futures, we'd have a machine that could solve any problem without needing the creativity we humans rely on.

“We used to think ‘Chess’ required ‘creativity’ but we’ve since discovered that a fast computer evaluating ALL POSSIBLE PATHS completely blows human creativity away”

What is being suggested is a brute-force solution to AGI: essentially a “Librarian of Babel,” per Jorge Luis Borges’ short story. An uber-Google capable of searching through every possible “book” that myriad monkeys banging on typewriters could compose, a near-unlimited possibility space. With its near-infinite computing power it could suss out, amid the vast sea of unparsable garble, the slightly less vast sea of grammatically correct nonsense, and the smaller but still impossibly vast sea of dreck, tomorrow’s best-seller “How to Upload a Brain for Dummies (Humans)”, or whatever Hard Problem we might like a super-AI to handwave away for us. While this is not impossible in theory, there seem to be at least as many limiting factors for such an AI (if we can even call it intelligence rather than a souped-up lookup table) as for an AI that operates with intelligence closer in mind-space to our own.

The first limiting factor is testability. The example of Deep Blue vs. Kasparov, in which a computer “beat” a human at chess, is misleading because the problem (win at chess) and the success criteria (play through scenarios and see which lead to checkmate) are clearly defined, so the computer can cheaply check whether any given trial succeeds. For a problem we might want a Singularity to solve for us, such as “discover a cure for cancer,” the rate at which the computer could verify its trials is limited: any candidate cure can only be tested at the pace of clinical trials for cancer treatments, a bottleneck that makes the approach vastly inferior to human medical science. (Not to mention unethical; trying randomly generated compounds on patients?) Or take the problem “figure out how to upload my brain into a digital substrate.” Since this is a simple brute-force search AGI, not a human-like one, humans would need to interact with every test-upload experiment the AGI produces to confirm whether the uploaded entity is in fact a replica of the meat-human and not some different individual, or just a hashed pile of silicon goo. That again slows the traversal of the “data space” to an utterly useless crawl, leaving our resources better invested in producing human-like, creative AI.

Then you have the problem that even with potential gains in computing power (graphene, quantum, etc., and there is no guarantee quantum computing will pan out soon or at all), even if we boosted our machines by multiple orders of magnitude, the possibility spaces of non-trivial real-world problems, which span every field and dimension of reality (as opposed to a contained mathematical abstraction such as a chess or Go game), quickly dwarf any uncertain gains we might make. For example, the possibility space of the Library of Babel, that is, all possible books of 410 pages, with 40 lines per page and 80 characters per line drawn from a 25-symbol alphabet, or about 1.3 million characters per book, is 25^1,312,000, roughly 1.95 × 10^1,834,097. Even if you managed exaflop speed on current hardware, that’s just 10^18 operations per second. Bump that up by the trillion-fold speedup sometimes supposed for quantum computing and your machine is still an insignificant drop in the bucket at around 10^30. Forget a whole book; a single page of text (3,200 characters) already has around 10^4,473 possibilities, so you’d be waiting around for myriad heat deaths of the universe to even scratch the surface. The Doom I engine’s source runs to around 2,000,000 characters, so finding it by blind search would mean a space on the order of a million magnitudes beyond the Library of Babel. Kurzweil estimates a human brain could be simulated in 25,000,000 lines of code, and he’s pretty universally considered a rose-tinted idealist. Good luck searching for e-brains with your typewriter monkeys.
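The back-of-envelope numbers here are easy to check. A minimal sketch in Python (note: the famous 1.95 × 10^1,834,097 figure corresponds to Borges’s actual specification of 410 pages × 40 lines × 80 characters, drawn from a 25-symbol alphabet; the age-of-universe constant is the usual rough 4.3 × 10^17 seconds):

```python
from math import log10

# Borges's Library of Babel: 410 pages x 40 lines x 80 characters,
# each character drawn from a 25-symbol alphabet.
chars_per_book = 410 * 40 * 80           # 1,312,000 characters
alphabet = 25

# Number of distinct books is 25^1,312,000; compare exponents in log10.
book_exponent = chars_per_book * log10(alphabet)
print(f"all books: ~10^{book_exponent:,.0f}")    # ~10^1,834,097

# Even a single page is hopeless:
page_exponent = 40 * 80 * log10(alphabet)
print(f"one page:  ~10^{page_exponent:,.0f}")    # ~10^4,473

# An exaflop machine (10^18 checks/sec) running for the age of the
# universe (~4.3 x 10^17 s) examines only ~4 x 10^35 candidates.
checked_exponent = 18 + log10(4.3e17)
print(f"checked:   ~10^{checked_exponent:.0f}")
```

Working in log10 exponents sidesteps constructing the 1.8-million-digit integer itself; the gap between ~10^36 candidates checked and ~10^4,473 possibilities for a single page is the whole argument in two numbers.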

“(Human creativity) is horribly inefficient, and results far too often in less than optimal solutions. In fact, it fails 99% of the time, partially succeeds .09% of the time, and only truly succeeds .01% of the time, if that.”

Depends on what you mean by inefficient. Most ideas that pop into your head during lunch break don’t pan out. 95% of all startups fail, but it’s the 5%, the ones that find the treatment for cancer or invent the automobile or make the next breakthrough in AI or crack the problem of relativity, that make up for it. Thomas Edison successfully discovered 1,000 ways not to make a lightbulb before he Let There Be Light with the flick of a switch. A computer brute-forcing the invention of the lightbulb, without the guidance of human creativity, would go through magnitudes more failures before hitting a solution, since there are nearly infinite possible ways to assemble matter. Similarly, an AI with human-level creativity for limiting domains, applying high-level insight, and taking shortcuts, combined with a machine’s speed advantage, would be far more efficient than an AI lacking it. (How many brute-search AIs does it take to screw in a lightbulb? Possibly a googolplex.)

So yeah, computing power can help you to an extent, but you’re not really going to get where we want to go, namely an intelligence explosion, without some higher-level intelligent behavior that can direct the search with a computational efficiency at minimum on par with what we humans are capable of. If we want super-AI, we may need machines even *more* creative than we are.
