Tuesday, July 18, 2006

Complexity and AI

As these two recent articles—“AI Reaches the Golden Years” and “Brainy Robots Start Stepping Into Daily Life”—suggest, there is currently quite a bit of interest in the development of artificial intelligence. How to implement true machine intelligence, though, is still an open question.

The Wired article points out a problem AI has had since its inception: how to deal with ‘common sense.’ Even though computers like Deep Blue rock at chess, programming machines to handle normal, everyday tasks remains extremely daunting because of the complexity involved in modeling that kind of fuzzy knowledge.

[Image: Deep Blue, captioned “Bring it on”]

In his 1994 book Complexification: Explaining a Paradoxical World Through a Science of Surprise, John L. Casti traces this difficulty to what he calls “top-down” AI models; that is, models that attempt to program in all the environmental factors that will affect the AI. It is this method that struggles with mundane tasks. Another, more promising, method is to model systems from the “bottom-up”: mimic the low-level processes of the brain and allow complex behavior to emerge from their interactions.

This method, too, has its difficulties. As Casti notes (and as plieb has pointed out), there is good reason to believe that to actually produce brain-like activity, a device “must also share the size, connective structure and initial configuration of the brain” (160). Though such a device may be possible, Gödel suggested that if it were to evolve, it would be too complex for us to understand, much as our own brains’ workings remain a mystery to us (Casti 167).

[Image: Gosper glider gun from Conway’s Game of Life]

Gödel’s point is borne out by attempts at creating A-life via cellular automata (CA) like Conway’s Game of Life (a “glider gun” pattern from the Game of Life is pictured above). Steen Rasmussen lays out the following rules for what A-life must look like:

Postulate 1: A universal Turing machine can simulate any physical process.

Postulate 2: Life is a physical process.

Postulate 3: There are criteria by which we can distinguish between living and nonliving systems.

Postulate 4: An artificial organism must perceive a reality R*, which for it is just as real as the “real” reality R is for us.

Postulate 5: The realities R* and R have the same ontological status.

Postulate 6: We can learn about the fundamental properties of our reality R by studying the details of different R*s. (Casti 168-69)

A Game of Life board capable of running a CA that satisfies these postulates would be roughly 3 square kilometers in size (Casti 228).
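
To make the Game of Life concrete, here is a minimal sketch in Python (my own illustration, not code from Casti or the articles). It applies Conway’s rules to a sparse set of live cells and steps a glider, the small spaceship that a gun like the one pictured above fires off, across an unbounded grid:

    from collections import Counter

    def life_step(cells):
        """Advance a set of live (x, y) cells one generation under Conway's rules."""
        # Tally how many live neighbors each candidate cell has.
        counts = Counter(
            (x + dx, y + dy)
            for (x, y) in cells
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Birth with exactly 3 live neighbors; survival with 2 or 3.
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

    # A glider: every four generations it reappears one cell down and right.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

    cells = glider
    for _ in range(8):
        cells = life_step(cells)

    # After 8 generations the glider has drifted two cells diagonally.
    assert cells == {(x + 2, y + 2) for (x, y) in glider}

Storing only the live cells avoids fixing a board size in advance, which seems fitting given how large a board Casti’s estimate calls for.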

These results do not mean that artificial life is out of the question. Casti suggests that the correct response to Gödel’s point is not to build AI but to “grow” it using a bottom-up approach. Simple processes would form an aggregate that exhibits life-like behavior, even though the aggregate’s final form, as noted above, would be too complex for us to completely understand. Such a system would display the property Stuart Kauffman observed in cellular interactions: the “unimaginably complicated network of interactions” occurring in the cell does not “lead to utter chaos, but rather results in the cell organizing itself into stable patterns of activity appropriate for its particular function in the organism” (Casti 267). Taking advantage of this spontaneous order is what the NYT article calls “cognitive computing,” and it falls under the heading of complexity. If such life is ever achieved, perhaps it will be Gödel’s standard, that we can’t understand it, that will validate it as being “alive,” rather than any arbitrary list of behaviors.
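
As a toy illustration of the spontaneous order Kauffman describes (again my own sketch, not anything from Casti or the Times piece), the same rules can be run on a random “soup” of cells on a small wrapped grid. In my experience such soups usually settle into still lifes and short-period oscillators rather than churning forever, and the loop below detects when that happens:

    import random
    from collections import Counter

    SIZE = 32  # the grid wraps at the edges, i.e. a torus

    def life_step(cells):
        counts = Counter(
            ((x + dx) % SIZE, (y + dy) % SIZE)
            for (x, y) in cells
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

    random.seed(0)  # any seed will do; this just makes the run repeatable
    cells = {(x, y) for x in range(SIZE) for y in range(SIZE)
             if random.random() < 0.35}

    # Run until the board revisits a configuration, i.e. settles into a cycle.
    seen = {}
    for gen in range(5000):
        state = frozenset(cells)
        if state in seen:
            print(f"settled into a cycle of length {gen - seen[state]} "
                  f"after {seen[state]} generations; {len(cells)} cells alive")
            break
        seen[state] = gen
        cells = life_step(cells)
    else:
        print(f"still churning after 5000 generations; {len(cells)} cells alive")

Nothing in the three rules mentions stability, yet stable structure is typically what emerges; that, in miniature, is Kauffman’s point about the cell.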
