The most interesting "dare" thus far was presented by anonymous commenter "A":

Find the flaws in the Chinese Room argument against AI, and explain them to normal folk.
The Chinese Room is a thought experiment first proposed by John Searle in 1980 that attempts to show that mere symbol manipulation can never produce Strong AI. Strong AI claims that a computer can be programmed in such a way that it will be a mind, capable of conscious thought and comprehension. Weak AI says that, although computers can be useful tools for understanding minds, no computer (as currently conceived) will ever achieve real intelligence.

The thought experiment is as follows:

In the Chinese room thought experiment, a person who understands no Chinese sits in a room into which written Chinese characters are passed. In the room there is also a book containing a complex set of rules (established ahead of time) for manipulating these characters and passing other characters out of the room. This would be done on a rote basis, e.g., "When you see character X, write character Y." The idea is that a Chinese-speaking interviewer would pass questions written in Chinese into the room, and the corresponding answers would come out of the room, appearing from the outside as if there were a native Chinese speaker in the room.

It is Searle's belief that such a system could indeed pass the Turing Test, yet the person manipulating the symbols would obviously not understand Chinese any better than he did before entering the room.

The man in the room represents a computer following a set of rules (its "program"). Even though he can generate outputs that appear to reflect comprehension, all he's doing is following rules and manipulating symbols.
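To make that concrete, here's a minimal sketch of what such a rulebook amounts to in code (Python; the handful of question/answer pairs are my own invented examples, not anything from Searle). It produces sensible-looking replies by pure lookup, with no understanding anywhere in it:

    # A toy "Chinese Room" rulebook: pure symbol lookup, no comprehension.
    # The question/answer pairs below are invented for illustration.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
        "你叫什么名字？": "我叫小明。",      # "What's your name?" -> "My name is Xiao Ming."
    }

    def room(symbols: str) -> str:
        # The "man in the room" matches the incoming symbols against the
        # rulebook and copies out the listed response. He never needs to
        # know what any of the characters mean.
        return RULEBOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(room("你好吗？"))  # prints: 我很好，谢谢。

Searle's rulebook would of course have to be astronomically larger and cleverer than this, but making it bigger doesn't change its nature: it's still lookup and copying.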

I subscribe to the weak AI position, but as the Wikipedia article notes near the end, the actual truth may not be important. If a computer appears intelligent to an observer, does it matter whether it really is? Perhaps only when considering moral and ethical questions, such as whether or not an apparently-intelligent computer can be "murdered". The problem becomes philosophical rather than scientific; I could argue that there's no way for me to know that you are actually intelligent, other than by observing you.

This "other minds" objection certainly doesn't refute the argument (it doesn't matter how I know you're really intelligent or not), but it may make the argument scientifically unimportant. We have no way to make decisions other than by making observations, and if we can't observationally distinguish between real and artificial intelligence then there's no science to be done. (Of course, if we can visually inspect the "brain" of the being in question we could decide easily.)

Searle refutes many other common objections in his own writing, and I suggest that you don't send additional objections to me until you've read his arguments -- it's likely that he's covered your objections already. The point is this: weak AI means that computer scientists, who can inspect the machinery, will always know which intelligences are genuine and which are not, even if the general public can't tell the difference.

Miss "A" asks me to "find the flaws" in the argument, and I have only one to offer that hasn't been covered extensively elsewhere (to my knowledge, and it makes the argument stronger, not weaker). Searle assumes the existence of something I don't think is possible: a set of rules that the man in the room could follow that would allow him to appear to understand Chinese to outside observers.

Human language can be used to construct questions whose answers aren't recursively enumerable, which means it may take an infinitely long time to find an answer, if one exists. For example:

Given a program and input parameters, will that program run forever?
That question is undecidable: no general procedure can answer it correctly for every program. The naive approach of simply running the program fails because, for the programs that do run forever, the respondent would have to wait forever before knowing the answer. Somehow humans identify such situations and behave appropriately, approximating and guessing and making do with rough estimates. A computer could "guess" also, but how would it know when to do so? That problem is itself undecidable.
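Here's a small, runnable illustration (Python; the step budget is an arbitrary choice of mine, not part of anyone's argument). Nobody knows whether the loop below halts for every starting number -- that's the famous Collatz conjecture -- so the only honest general strategy is a bounded "guess" that gives up after some fixed effort:

    # "Will this computation ever finish?" For the Collatz iteration,
    # nobody knows the answer for all n, so we bound the search and guess.
    def collatz_halts(n, max_steps=10_000):
        """True if n reaches 1 within max_steps; None means 'don't know'."""
        for _ in range(max_steps):
            if n == 1:
                return True
            n = 3 * n + 1 if n % 2 else n // 2
        return None  # give up rather than wait forever

    print(collatz_halts(27))      # True (27 reaches 1 after 111 steps)
    print(collatz_halts(27, 50))  # None -- the step budget was too small

Notice that the cutoff itself had to be chosen in advance, and no rule can specify the right cutoff for every possible question -- which is exactly the bind the man's rulebook is in.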

It's certain that, even with an infinite rulebook, the outside observers would eventually come up with a question that would take the man in the room an infinite amount of time to answer. If the man understood Chinese, he would be able to respond with an appropriate guess; but since he must rely solely on symbol manipulation, he can only "guess" by replying with a random symbol, thereby revealing his lack of comprehension.

This whole discussion is vastly oversimplified, but "A" wanted me to try to make it accessible to "normal folk" so I did the best I could. For more information you can read the Wikipedia entries I linked to above -- they're mostly accurate.
