OTHER VIEWS: Is road to AI a roundabout?

Andrew Moore, dean of computer science at Carnegie Mellon University, said recently that researchers are giving up on the prospect of human-like artificial intelligence (AI).

What? In the middle of all this progress we’ve been making? Well, it turns out the field’s progress has come mostly from refining techniques we’ve had for years, not from discovering anything new.

This applies even to our most ambitious AI technologies.

Self-driving cars, for example, have been the buzz in the auto industry, and MIT researchers continue to make improvements, while Japan promises a self-driving car system by 2020. As for knowledge systems, they’re becoming more robust in every industry from medicine to human resources, but there are limits to what they can do.

There’s a huge gap between the popular public understanding of AI and what’s actually going on.

Pop culture, after all, is riddled with fantasies about human-like machines, be it HAL 9000 from 2001: A Space Odyssey, Ava from the more recent Ex Machina, or GLaDOS, the rogue AI from the video game Portal.

Not only are we led to believe that AI means rebellious, super-intelligent machines with wills and desires of their own, but there’s a whole movement out there, including world-class entrepreneurs, insisting such machines are right around the corner.

But they aren’t. To date, not only do we not know how to make a computer reason like a human, but we also have no idea where to start. Instead, we’ve gotten better at simulating pseudo-thinking behavior thanks to Moore’s Law.

Gordon Moore, co-founder of Intel Corp., famously observed that the number of transistors on a chip, and with it computing power, roughly doubles every two years. And this is how we have gotten “Black Box AI,” the hottest new trend in “smart” machines.

This is the practical, ad hoc approach to AI, in which we forget about creating an electronic brain that will laugh at jokes and cry at soap operas, and instead focus on throwing all of our processing power at real problems.

Human-like AI is the “neat” approach, which has evaded us so far; the “scruffy” approach just cares about results no matter how the computer gets them.

In Black Box AI, we exploit sheer processing speed so that the computer can find its own solution by trial and error.

It is like dropping the world’s fastest mouse into a maze and letting it crash all over until it finds a way out.

Or we use pure discovery, the way Google’s DeepMind lab taught an AI to walk: give it a virtual arena and a goal, and let it try every movement pattern it can find until it discovers how to get from point A to point B.
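
For the programmers in the audience, here is a purely illustrative Python sketch of that trial-and-error idea, assuming nothing more than a toy maze invented for this example: a walker that moves at random until it stumbles onto the exit, finding its way by speed rather than insight.

    # Purely illustrative: brute-force trial and error in a toy maze,
    # in the spirit of the "world's fastest mouse" analogy above.
    # The maze, start and exit are invented for this sketch.
    import random

    MAZE = [
        "#######",
        "#S  # #",
        "# # # #",
        "# #   #",
        "### #E#",
        "#######",
    ]
    START, EXIT = (1, 1), (4, 5)  # (row, col) of 'S' and 'E'

    def random_walk(max_steps=10_000):
        """Blindly try random moves until we stumble onto the exit."""
        pos, steps = START, 0
        while pos != EXIT and steps < max_steps:
            r, c = pos
            dr, dc = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
            if MAZE[r + dr][c + dc] != "#":  # walls just bounce us back
                pos = (r + dr, c + dc)
            steps += 1
        return steps if pos == EXIT else None

    print(random_walk())  # sheer speed, not insight, finds the way out

Nothing in that loop understands mazes; it simply tries moves faster than any real mouse ever could.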

In the case of self-driving cars, a union of both approaches is needed: a combination of heuristic rules and learning in a simulated environment. The process is known as “emergent learning.”
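
A hypothetical sketch of that union, with invented rules and numbers rather than anything from a real self-driving system, might look like this: hand-written heuristics handle the non-negotiables, while a trial-and-error loop in a toy simulator tunes a learned parameter.

    # Hypothetical sketch of the hybrid approach described above. The
    # "simulator" is a toy stand-in, not any real self-driving stack.
    import random

    def simulate(follow_distance):
        """Score a candidate following distance with made-up physics.
        Too close risks collisions; too far blocks traffic; the sweet
        spot in this invented example is around 30 meters."""
        return -(follow_distance - 30.0) ** 2 + random.uniform(-5, 5)

    def drive(speed_limit, light, follow_distance):
        # Heuristic rules: hand-written, never learned, never negotiable.
        if light == "red":
            return "stop"
        # Learned behavior: a parameter discovered by trial and error.
        return f"follow at {follow_distance:.1f} m, max {speed_limit} km/h"

    # Trial-and-error loop: try random distances in the simulator, keep the best.
    best, best_score = None, float("-inf")
    for _ in range(1000):
        candidate = random.uniform(5, 60)
        score = simulate(candidate)
        if score > best_score:
            best, best_score = candidate, score

    print(drive(speed_limit=50, light="green", follow_distance=best))

The heuristics never change; only the parameter discovered in simulation does, and that division of labor is what the hybrid approach depends on.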

That video animation of DeepMind’s spastic flailing is the perfect illustration of why Black Box AI can only get us so far.

Carnegie Mellon’s Mr. Moore says we’re not making any progress on the “neat” side of AI research, but things are going great for the “scruffy” DeepMinds of the world.

That’s a difficult concept for laymen, or even some experts, to grasp, because it requires a deep understanding of two fields: computer science and neurology. For instance, some proponents of “neat” AI insist it’s just a matter of hooking up enough computers in a neural net and allowing the machine to explore endlessly.

The problem is, humans don’t think just with electrical impulses traveling along neurons; we also have a chemical element involving neurotransmitters, as well as specialized structures such as the hippocampus and the amygdala whose functions we barely understand.

That’s one inherent limitation of human-like AI: before you can program a computer to do something, you have to understand how it’s done yourself. And the human mind remains a mystery. We still don’t fully understand the processes behind most of the brain’s diseases, how emotions drive us, what parts of our personality come from nature and what parts from nurture.

Turning an AI loose in a simulation and letting it discover everything from quantum physics to Italian cooking isn’t the answer either. Emergent learning techniques work only for narrowly prescribed sets of problems.

Thus we might ask: Will AI replace doctors? Yes, AI will help with diagnosis by acting as a fast pattern-matching search engine, but no, humans will still have to oversee it.
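
As a purely illustrative sketch, with invented symptom patterns and condition names standing in for real clinical data, that kind of diagnostic aid is fast pattern matching: find the closest known case and flag it for a physician.

    # Purely illustrative: diagnosis framed as nearest-pattern lookup.
    # Cases and conditions are invented; real clinical systems are far
    # more careful than this toy example.
    KNOWN_CASES = {
        # Symptom vector: (fever, cough, fatigue, rash), each 0 or 1.
        (1, 1, 1, 0): "influenza-like illness",
        (0, 1, 0, 0): "common cold",
        (1, 0, 1, 1): "possible measles",
    }

    def suggest(symptoms):
        """Return the closest known pattern; a human makes the final call."""
        def distance(case):
            return sum(a != b for a, b in zip(case, symptoms))
        best = min(KNOWN_CASES, key=distance)
        return KNOWN_CASES[best], distance(best)

    match, dist = suggest((1, 1, 0, 0))
    print(f"Closest pattern: {match}, differing in {dist} symptom(s); "
          "flag for physician review")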

Will AI replace human resources recruiters? Probably not, because our best search algorithms are already deployed to scour resumes, and there isn’t much more you can do without a human pilot.

Self-driving cars are a sure thing, even though the media are quick to alarm us about every accident they have. Emergent driving AI will learn from each mistake, whereas human drivers in the U.S. still cause more than 30,000 fatalities a year doing the same stupid things over and over. Simply never driving inebriated or distracted is one measure by which AI already improves on humans.

Demand will increase as more people get comfortable with a robot driver, and even a sloppy AI that has an accident once in a while seems to beat the average human driver.

Yes, AI is in your future. It’s very much in your present every time you have a “conversation” with a virtual assistant like Alexa or Siri. But the human-like AI of science fiction will have to remain, alas (or fortunately), a fantasy.

Mike Cioffi is founder of TireTalent.com, a boutique recruiting agency whose mission is to align top talent with top tire companies.
