Egnor and dualism. Again...
Michael Egnor is again talking about the sufficiency of matter to cause consciousness. This time around he doesn't actually come out looking completely silly. In his latest writings, he proposes a test by which one could determine whether or not mere matter could be sufficient to cause mind. Egnor says that if a computer were to pass the Turing test, then he would accept the sufficiency of matter to cause mind. As Egnor writes:
Alan Turing, in 1950, suggested a test for consciousness in a machine. In the Turing test, an investigator would interact with a person and a machine, but would be blinded as to which was which. If the investigator couldn’t tell which one was the person, and which was the machine, it is reasonable to conclude that the machine had a mind like the person. It would be reasonable to conclude that the machine was conscious.

Unfortunately he seems to create a loophole for himself (should this ever happen up to his standards) by invoking the Chinese Room thought experiment. Writes Egnor:
Imagine that P.Z. Myers went to China and got a job. His job is this: he sits in a room, and Chinese people pass questions, written on paper in Chinese, through a slot into the room. Myers, of course, doesn’t speak Chinese. Not a word. But he has a huge book, written entirely in Chinese, that contains every conceivable question, in Chinese, and a corresponding answer to each question, in Chinese. P.Z. just matches the characters in the submitted questions to the answers in the book, and passes the answers back through the slot.

Egnor finishes with his pièce de résistance:
In a very real sense, Myers would be just like a computer. He’s the processor, the Chinese book is the program, and questions and answers are the input and the output. And he’d pass the Turing test. A Chinese person outside of the room would conclude that Myers understood the questions, because he always gave appropriate answers. But Myers understands nothing of the questions or the answers. They’re in Chinese. Myers (the processor) merely had syntax, but he didn't have semantics. He didn't know the meaning of what he was doing. There’s no reason to think that syntax (a computer program) can give rise to semantics (meaning), and yet insight into meaning is a prerequisite for consciousness. The Chinese Room analogy is a serious problem for the view that A.I. is possible.
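Stripped of the narrative, the room Egnor describes is computationally nothing more than a lookup table: symbols in, symbols out, with no meaning attached anywhere. A minimal sketch (the phrases below are my own illustrative stand-ins, not from Egnor's text) might look like this:

```python
# The Chinese Room reduced to its computational core: a lookup table.
# The "book" maps each question to a canned answer; the "processor"
# matches symbols without understanding any of them.
book = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "今天天气如何？": "天气很好。",  # "How's the weather?" -> "It's nice."
}

def room(question: str) -> str:
    """Match the input symbols against the book; pure syntax, no semantics."""
    return book.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(room("你好吗？"))  # -> 我很好，谢谢。
```

The point of the sketch is only that a table lookup manipulates form, not meaning; whether that observation tells us anything about minds is exactly what is in dispute.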
Seems like a catch-22 for materialists, doesn't it? Either mere matter is not sufficient to cause mind, OR mere matter is sufficient to cause mind but, in showing this, one also proves that ID is true. It is a convincing argument - if you don't think it through. Egnor continues:
But imagine that artificial intelligence could be created, and Searle is wrong. Imagine that teams of the best computer scientists, working day and night for decades, finally produced a computer that had an awareness of itself. A conscious computer, with a mind! So, finally, P.Z. Myers and I could agree on something. Myers would be right. If a computer had a mind, we could infer two things:
1) Matter is sufficient, as well as necessary, for the mind. The mind is an emergent property of matter.
2) The emergence of mind from matter requires intelligent design.
It’s not easy being a materialist.
#1: Egnor's claim is entirely negative. As he says, "If we can’t create A.I., my viewpoint would seem more credible". His null hypothesis is, therefore, that matter is not enough to cause mind, even though there is no evidence whatsoever that there are any disembodied minds out there.
#2: ID proponents are very fond of claiming that experiments in general, since they are intelligently designed, point to intelligent causes. Each and every A.I. experiment would be intelligently designed, no matter how trivial the input from the researchers was (yes, IDists like to point out that the chips in the computer were intelligently designed, after all). All Egnor is saying is that it is impossible, according to his standards, to use computers to elucidate whether or not mind could arise in the absence of intelligence. So, if a machine were to become conscious, Egnor's second point above would be true by definition. Egnor is playing a silly "damned if you do, damned if you don't" game. Do the experiment and I win. Don't do the experiment and I win. Heads I win, tails you lose.
So, all Egnor has done is to say that even if he is wrong about the sufficiency of matter to cause mind, he is still right about intelligent design. Reminds me of something my brother used to say: "I'm always right, and even if I'm not right, at least I'm not wrong."