Chinese room

Thought experiment on artificial intelligence by John Searle / From Wikipedia, the free encyclopedia


The Chinese room argument holds that a digital computer executing a program cannot have a "mind", "understanding", or "consciousness",[a] regardless of how intelligently or human-like the program may make the computer behave. The argument was presented by philosopher John Searle in his paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980.[1] Similar arguments were presented by Gottfried Leibniz (1714), Anatoly Dneprov (1961), Lawrence Davis (1974) and Ned Block (1978). Searle's version has been widely discussed in the years since.[2] The centerpiece of Searle's argument is a thought experiment known as the Chinese room.[3]

The argument is directed against the philosophical positions of functionalism and computationalism,[4] which hold that the mind may be viewed as an information-processing system operating on formal symbols, and that simulation of a given mental state is sufficient for its presence. Specifically, the argument is intended to refute a position Searle calls the strong AI hypothesis:[b] "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[c]
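The kind of purely formal symbol manipulation at issue can be sketched as a toy program (a hypothetical rulebook invented for illustration, not Searle's own example): the program maps input strings to output strings by rote lookup, never accessing what the symbols mean.

```python
# A toy "Chinese room": responses are produced by formal lookup alone.
# The rulebook entries below are invented for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thanks."
    "你叫什么名字？": "我没有名字。",      # "What is your name?" -> "I have no name."
}

def chinese_room(symbols: str) -> str:
    """Return the response the rulebook prescribes for the input symbols.

    The strings are treated as opaque tokens; nothing in this function
    represents, or depends on, their meaning.
    """
    # Default reply: "Sorry, I don't understand."
    return RULEBOOK.get(symbols, "对不起，我不明白。")

print(chinese_room("你好吗？"))  # prints 我很好，谢谢。
```

To an outside observer the program "answers in Chinese", yet by construction nothing in it understands Chinese; Searle's claim is that scaling such symbol manipulation up does not change this.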

Although the argument was originally presented in reaction to claims by artificial intelligence (AI) researchers, it is not an argument against the goals of mainstream AI research, because it does not show any limit on the amount of "intelligent" behavior a machine can display.[5] The argument applies only to digital computers running programs and does not apply to machines in general.[6]
