Philosophy of artificial intelligence

From Wikipedia, the free encyclopedia


The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science[1] that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will.[2][3] Furthermore, the field is concerned with the creation of artificial animals or artificial people (or, at least, artificial creatures; see artificial life), so the discipline is of considerable interest to philosophers.[4] These factors contributed to the emergence of the philosophy of artificial intelligence.

The philosophy of artificial intelligence attempts to answer questions such as the following:[5]

  • Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
  • Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?
  • Can a machine have a mind, mental states, and consciousness in the same sense that a human being can? Can it feel how things are?

Questions like these reflect the divergent interests of AI researchers, cognitive scientists and philosophers respectively. The scientific answers to these questions depend on the definition of "intelligence" and "consciousness" and exactly which "machines" are under discussion.

Important propositions in the philosophy of AI include the following:

  • Turing's "polite convention": If a machine behaves as intelligently as a human being, then it is as intelligent as a human being.[6]
  • The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."[7]
  • Allen Newell and Herbert A. Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."[8]
  • John Searle's strong AI hypothesis: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[9]
  • Hobbes' mechanism: "For 'reason' ... is nothing but 'reckoning,' that is adding and subtracting, of the consequences of general names agreed upon for the 'marking' and 'signifying' of our thoughts..."[10]
