Monday, May 25, 2015

What does the Turing Test test?


Saw the movie Ex Machina. The outside shots, filmed in Valldalen, Norway, are are simply gorgeous. Good flick and provoked some ruminating (avoiding plot details).

There seems no a priori reason to suppose that machine intelligence cannot reach the point of passing the Turing test. A complex enough programed machine able to “learn” from extracting patterns from massive data and using them to interact with humans should be able to “exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.” One can imagine such a machine as pictured in the movie.

But what does the Turing test really test. An “artificial intelligence” might be able to interpret and respond to the full range of human behavior and simulate the same. It might be able to “read” a conscious human better than an actual human might by picking up on subtle physical manifestations (as stored in its memory). With a large enough data base behind it and a multitude of “learned” behaviors it might convince a human that it was indeed intelligent and even self-aware. But would it be? Would the ability to simulate human behavior completely enough to appear human actually be human or entail consciousness? If programed with a sub-routine causing it to seek to persist (i.e., resist termination), would it be a self seeking self-preservation? Would programing allowing it to read human emotions and respond “appropriately” with simulated emotion mean it actually felt such emotions?

Would a machine intelligence able to simulate human behavior and emotions actually be able to love, hate, feel empathy and act with an awareness of itself and, perhaps more importantly, of an Other? Or might there still be something missing?

Smoked a cigar on my favorite bench while considering all this and watched some ants going about their business. Ants are extremely complex biological machines acting and reacting within their environment with purpose and an overall drive to self-perpetuate (both as individuals and as a collective). They may be conscious even if not self aware. Or is a certain basic self-awareness something that goes with being alive? Would even a very complex machine ever be alive even if very “intelligent?”

My guess is that machine intelligence – even if very complex and advanced and equipped with a self-referential sub-program allowing algorithmic analysis of itself – would not be conscious or alive. Thus not capable of emotion and therefore what we might call coldly rational. Is this why Bill Gates, Stephen Hawking and others are concerned about AI?

No comments: