Monday, February 22, 2010

The flaw of the Chinese Room argument is its greatest asset

Note: The following is a response I wrote recently to John Searle's thought experiment devised to explore the limits of machine knowledge.

Searle's thought experiment (i) begins by asking us to imagine a computer program that has passed the Turing test (ii) as administered in Chinese. The computer program, he reasons, must succeed by manipulating symbols within a formal system. Searle goes on to suggest that these algorithmic manipulations alone do not constitute understanding: had they been carried out by a man with no understanding of Chinese, we would not conclude that he had thereby come to understand Chinese, and so we must likewise conclude that the computer does not understand it either.
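
To make the purely formal character of such symbol manipulation concrete, here is a deliberately toy sketch in Python; the rulebook below is a hypothetical placeholder for the enormous rulebook a program that actually passed the test would need.

    # A toy, purely syntactic "room": output is produced by rule lookup alone,
    # with no access to meaning. The rulebook entries are invented placeholders.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I am fine, thanks."
        "你叫什么名字？": "我没有名字。",  # "What is your name?" -> "I have no name."
    }

    def room(symbol_string: str) -> str:
        """Return a reply by table lookup; the rules mention only symbol shapes."""
        return RULEBOOK.get(symbol_string, "对不起，我不明白。")  # "Sorry, I don't understand."

    if __name__ == "__main__":
        print(room("你好吗？"))  # prints a fluent reply without any understanding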

The critical flaw in Searle's argument lies in our inability to accept the premise of the thought experiment. To appraise the argument, we must first be able to conceive of its premises; namely, we must be able to entertain a situation in which a computer has passed the Turing test as originally proposed. This is quite a stretch of the imagination, since it requires us to have some concept of the inner workings of such a program, and, given the present state of research, it is impossible to predict how a computer or program might be developed so that it could pass the Turing test.

However, this flaw is, in my opinion, the greatest achievement of his argument, since it has generated an immense body of outstanding work, including a number of excellent attacks and rebuttals (iii), on the real problems that confront and confound the goals of artificial intelligence research. Importantly, the solution to the problem at the heart of Searle's argument implicitly outlines the goals of future artificial intelligence research.

It is interesting to note that our inability both to 1) accept the premise of the argument on philosophical grounds and 2) create a deterministic algorithm that passes the Turing test can be attributed, albeit for different reasons, to a concept called indeterminism, or, as I prefer, "underdeterminism". This observation implicates underdeterminism as a critical subject for future work in the field of intelligence research.

The history of determinism and underdeterminism

Traditionally, in a philosophical debate, the diametric opposite of determinism is free will. I argue here that the appropriate diametric opposite is underdeterminism. The appearance of free will is a consequence of certain underdetermined systems, but that demonstration is beyond the scope of this essay.

A system is said to be determined if, given an exactly specified initial state, one and only one new state evolves at every state change within the system. Two identical copies of a determined system, prepared with the same initial and boundary conditions, would remain identical at all times; two copies of an underdetermined system would diverge over time.
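
As a concrete illustration of this distinction, the following toy Python sketch evolves two copies of a one-dimensional system from identical initial conditions; the Gaussian noise term is only a stand-in assumption for whatever the underdetermining influence might be.

    import random

    def evolve(state: float, noise_scale: float, rng: random.Random) -> float:
        """One state change: a fixed deterministic rule plus an optional random term."""
        return 0.9 * state + 1.0 + noise_scale * rng.gauss(0.0, 1.0)

    def divergence(noise_scale: float, steps: int = 20) -> float:
        """Evolve two copies from the same initial state and report how far apart they end up."""
        a = b = 1.0  # identical initial conditions
        rng_a, rng_b = random.Random(1), random.Random(2)  # independent outside influences
        for _ in range(steps):
            a = evolve(a, noise_scale, rng_a)
            b = evolve(b, noise_scale, rng_b)
        return abs(a - b)

    if __name__ == "__main__":
        print("determined:     ", divergence(noise_scale=0.0))  # exactly 0.0
        print("underdetermined:", divergence(noise_scale=0.1))  # generally nonzero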

In a deterministic universe, it would be possible to completely predict the behavior of everything within it. The being capable of such a feat is sometimes called Laplace's demon (iv), first formally proposed by the causal determinist Pierre-Simon Laplace. The possible existence of such a demon has been argued against quite convincingly since its conjuring in 1814. These arguments appeal to the indeterministic nature of the universe (v), Cantor diagonalization proofs (vi), and thermodynamic irreversibility (vii).

Underdeterminism: our inability to predict the next major breakthrough

Because of the indeterminate nature of the universe, we cannot accept the premise of Searle's argument. Karl Popper once noted that "if there is such a thing as growing human knowledge, then we cannot anticipate today what we shall know only tomorrow." (viii) This observation epitomizes the problem of forecasting the future and constitutes the reason for my philosophical objection to the premise of Searle's argument. If we could accept the premise of his argument, it would mean we could anticipate the discovery of the computing system capable of passing the Turing test, and we should, in principle, be able to design it.

Underdeterminism: the key difference between a computer program and a human

The concept of underdeterminism serves a dual purpose in this context. Searle contends that understanding is not something one could ever attribute to a program. While he admits that humans are, at base, digital computers, and therefore that a computing machine could in principle achieve understanding (since he grants "understanding" to humans), he argues that following a program alone cannot be sufficient for understanding.

I would argue that the key difference between a traditionally programmed "digital computer" and a human is that in the case of the programmed computer, the evolution of the computer-state is determined, while the evolution of the brain-state is not. It follows that were we able to design a computer in such a way that it was (under)determined in exactly the way humans are, we should grant that the program-computer complex can understand, to the extent that a human understands, since there would be no salient difference between the two. Until we know the exact algorithm employed, it is impossible to judge whether that machine/program understands (or believes it understands) anything.

Future research of intelligent systems

Underdeterminism is the root of understanding. I propose the following thought experiment to elucidate this point:

Beginning with Searle's thought experiment, we now add that the instructions consist of a neural map of a Chinese man's brain and a movie of every single firing that occurs. It is the job of the room's inhabitant (Chinese or English) to reconstruct every neural calculation and link those up via an output table, which maps neuronal firing patterns to Chinese symbols. While this man would pass the Turing test, I do not believe that Searle would grant understanding to him. But it does not follow that this is because of a fault of the algorithm per se, for Searle would be willing to grant that the original Chinese speaker, who used a similar algorithm (i.e., actual neurons firing), did understand. The problem is the blind reconstruction.
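
A toy Python sketch of this blind reconstruction follows: the inhabitant replays a recorded movie of firing patterns against an output table and never consults meaning. Both the movie and the table below are invented placeholders, not real neural data.

    # Blind reconstruction: replay recorded firing patterns through an output table.
    FIRING_MOVIE = [
        (1, 0, 1, 1),  # hypothetical firing pattern at time step 1
        (0, 1, 1, 0),  # hypothetical firing pattern at time step 2
    ]

    OUTPUT_TABLE = {
        (1, 0, 1, 1): "我",    # maps a firing pattern to a Chinese symbol
        (0, 1, 1, 0): "明白",
    }

    def reconstruct(movie, table):
        """Map each recorded firing pattern to a symbol by lookup alone."""
        return "".join(table.get(pattern, "?") for pattern in movie)

    if __name__ == "__main__":
        print(reconstruct(FIRING_MOVIE, OUTPUT_TABLE))  # "我明白" ("I understand")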

What is missing is exactly the part of the experience that cannot be codified algorithmically, which is, by definition, the non-deterministic aspect of the original Chinese speaker. If it were possible to administer the exact same experiment to hypothetical, exactly identical twins, the result would be two completely different maps of neuronal firing and two distinct real-time movies. This indeterminism is responsible for our "spontaneity" and "originality" and is a prerequisite for understanding.

An appropriate model of the nature of this indeterminism is where the bulk of the future research effort should be placed. Is the indeterminism required for understanding related to an ontological, Heisenberg-style uncertainty, or is it due to an epistemological uncertainty related to the brain's complex wiring, its analogue biological components, the myriad environmental variables, or a brand of incompleteness via Gödel? (ix) And could these considerations be sufficiently well captured by systems that incorporate stochastic variables, quantum computations, or massive parallelism, for example?

While underdeterminism seems to be a prerequisite for understanding, a capacity for self-delusion (likely via pattern recognition applied to the process itself) should not be ignored. That is, in an indeterminate universe, complete contextualization is not possible, so an intelligent system must seek contextualization through a satisficing (x) sort of self-delusion, likely via post hoc rationalization from pattern-recognition algorithms. Would it not be an act of self-delusion to suppose that such a capacity were not necessary?
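
One way to picture the satisficing step is the following toy Python sketch; the candidate "explanations", the plausibility scores, and the aspiration threshold are all hypothetical placeholders.

    import random

    def satisfice(candidates, score, aspiration):
        """Accept the first candidate whose score meets the aspiration level,
        rather than searching for a global optimum (Simon's satisficing)."""
        for candidate in candidates:
            if score(candidate) >= aspiration:
                return candidate
        return None  # nothing was "good enough"

    if __name__ == "__main__":
        rng = random.Random(0)
        explanations = [f"story-{i}" for i in range(10)]        # hypothetical post hoc stories
        plausibility = {e: rng.random() for e in explanations}  # toy plausibility scores
        chosen = satisfice(explanations, plausibility.get, aspiration=0.7)
        print("accepted:", chosen)  # the system settles on the first good-enough story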

Conclusions

The great achievement of Searle's argument is that it has cast the problems of intelligence research in a new light, engendered healthy debate and illuminated the way forward for the study of natural and artificial intelligence.

Until we have a better idea of what we mean by human intelligence, we cannot hope to model it with a computer. Searle's argument challenges us to conceive of an appropriate definition of this term. If we begin by assuming that we, as humans, are underdetermined yet contingent calculators with many innate, underdetermined pattern-recognition algorithms that work in concert to contextualize the various stimuli in our environment, we may be better able to address the gap between the algorithms we can currently conceive and the potential algorithms that will one day allow a machine to understand; for such algorithms certainly exist within us.


References

i Searle, J.R. (1980) 'Minds, Brains and Programs' Behavioral and Brain Sciences 3: 417–457.

ii Turing, A.M. (1950) 'Computing machinery and intelligence' Mind 59: 433–460.

iii Preston, J. and Bishop, M. (2002) Views into the Chinese Room (Oxford: Oxford University Press).

iv Laplace, P.S. (1814) Théorie analytique des probabilités (Paris).

v Popper, K.R. (1950) 'Indeterminism in quantum physics and in classical physics' The British Journal for the Philosophy of Science 1: 117–133 and 173–195.

vi Wolpert, D.H. (2008) 'Physical limits of inference' Physica D 237: 1257–1281.

vii Ulanowicz, R.E. (1997) Ecology, the ascendant perspective (New York: Columbia University Press).

viii Popper, K.R. (1957) The Poverty of Historicism (New York: Routledge & Kegan Paul). The proof of this statement can be found in reference (v).

ix Gödel, K. (1992) On formally undecidable propositions of Principia Mathematica and related systems (New York: Basic Books).

x Simon, H.A. (1956) 'Rational choice and the structure of the environment' Psychological Review 63: 129–138.

3 comments:

  1. I think the Turing test is conceptually flawed. Bird calls pass the bird's TT. My 2 yr old grandniece converses with recorded phone messages. Clever programs mimic therapists and paranoiacs. Was Turing, are you or I, so intelligent as to be un-foolable? George

  2. Another thought: diagonalization strikes! Would a program which has passed the TT be thereby competent to administer the TT?

  3. Great thoughts, George. Thanks for the comments. The Turing test as originally posed in the reference (Turing, 1950) includes an excellent question and answer section that I think pertains to the spirit, at least, of your comments. I encourage you to go check out the original reference if you haven't done so already.
