{{portalpar|Artificial intelligence}} {{portalpar|Mind and Brain}} {{See also|ethics of artificial intelligence}}

The '''philosophy of artificial intelligence''' considers the relationship between ''machines'' and ''thought'' and attempts to answer such questions as:<ref>{{Harvnb|Russell|Norvig|2003|p=947}} define the philosophy of AI as consisting of the first two questions, and the additional question of the [[ethics of artificial intelligence]]. {{Harvnb|Fearn|2007|p=55}} writes "In the current literature, philosophy has two chief roles: to determine whether or not such machines would be conscious, and, second, to predict whether or not such machines are possible." The last question bears on the first two.</ref>
* Can a machine act intelligently? Can it solve ''any'' problem that a person would solve by thinking?
* Can a machine have a [[philosophy of mind|mind]], mental states and [[consciousness]] in the same sense humans do? Can it ''feel''?
* Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?
These three questions reflect the divergent interests of [[artificial intelligence|AI researchers]], [[philosophy|philosopher]]s and [[cognitive science|cognitive scientists]] respectively. The answers to these questions depend on how one defines "intelligence" or "consciousness" and exactly which "machines" are under discussion. Important [[proposition]]s in the philosophy of AI include:
<blockquote class="toccolours" style="float:none; padding: 10px 15px 10px 15px; display:table;">
* [[Turing Test|Turing's "polite convention"]]: ''If a machine acts as intelligently as a human being, then it is as intelligent as a human being.''<ref name=T>This is a paraphrase of the essential point of the [[Turing Test]]. {{Harvnb|Turing|1950}}, {{Harvnb|Haugeland|1985|pp=6-9}}, {{Harvnb|Crevier|1993|p=24}}, {{Harvnb|Russell|Norvig|2003|pp=2-3 and 948}}</ref>
* The [[Dartmouth Conferences|Dartmouth proposal]]: ''Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.''<ref name=MMRS>This assertion was printed in the program for the [[Dartmouth Conferences|Dartmouth Conference]] of 1956, widely considered the "birth of AI." {{Harvnb|McCarthy|Minsky|Rochester|Shannon|1955}} See also {{Harvnb|Crevier|1993|p=28}}</ref>
* [[Allen Newell|Newell]] and [[Herbert Simon|Simon]]'s [[physical symbol system|physical symbol system hypothesis]]: ''A physical symbol system has the necessary and sufficient means for general intelligent action.''<ref name=NS>{{Harvnb|Newell|Simon|1963}} and {{Harvnb|Russell|Norvig|2003|p=18}}</ref>
* [[John Searle|Searle]]'s weak AI: ''A physical symbol system can act intelligently.''<ref name=SWAI>{{Harvnb|Searle|1980}}. See also {{Harvtxt|Russell|Norvig|2003|p=947}}: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis," although Searle's [[argument]]s, such as the [[Chinese Room]], apply only to [[physical symbol system]]s, not to machines in general (he would consider the brain a machine).
Also, notice that the positions as Searle states them make no commitment to how ''much'' intelligence the system has: it is one thing to say a machine can act intelligently, and another to say it can act as intelligently as a human being.</ref>
* [[John Searle|Searle]]'s strong AI: ''A physical symbol system can have a mind and mental states.''<ref name=SWAI/>
* [[Hobbes]]' [[mechanism]]: ''Reason is nothing but reckoning''.<ref name=H>{{Harvnb|Hobbes|1651|loc=chpt. 5}}</ref>
</blockquote>

== Can a machine display general intelligence? ==
Is it possible to create a machine that can solve ''all'' the problems humans solve using their intelligence? This is the question that AI researchers are most interested in answering. It defines the scope of what machines will be able to do in the future and guides the direction of AI research. It concerns only the ''behavior'' of machines and ignores the issues of interest to [[psychology|psychologists]], [[cognitive science|cognitive scientist]]s and [[philosophy|philosophers]]; to answer this question, it does not matter whether a machine is ''really'' thinking (as a person thinks) or is just ''acting like'' it is thinking.<ref>See {{Harvnb|Russell|Norvig|2003|p=3}}, where they make the distinction between ''acting'' rationally and ''being'' rational, and define AI as the study of the former.</ref>

The basic position of most AI researchers is summed up in this statement, which appeared in the proposal for the [[Dartmouth Conferences]] of 1956:
* ''Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.''<ref name=MMRS/>
Arguments against the basic premise must show that building a working AI system is impossible, either because there is some practical limit to the abilities of computers or because there is some special quality of the human mind that is necessary for thinking yet cannot be duplicated by a machine (or by the methods of current AI research). Arguments in favor of the basic premise must show that such a system is possible. The first step to answering the question is to clearly define "intelligence."

=== Intelligence ===
==== Turing test ====
{{Main|Turing test}}
[[Alan Turing]], in a famous and seminal 1950 paper,<ref>{{Harvnb|Turing|1950}} and see {{Harvnb|Russell|Norvig|2003|p=948}}, where they call his paper "famous" and write "Turing examined a wide variety of possible objections to the possibility of intelligent machines, including virtually all of those that have been raised in the half century since his paper appeared."</ref> reduced the problem of defining intelligence to a simple question about conversation. He suggests that if a machine can answer ''any'' question put to it, using the same words that an ordinary person would, then we may call that machine intelligent. A modern version of his experimental design would use an online [[chat room]], where one of the participants is a real person and one of the participants is a computer program. The program passes the test if no one can tell which of the two participants is human.<ref name=T/>

Turing notes that no one (except philosophers) ever asks the question "can people think?" He writes "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks."<ref>{{Harvnb|Turing|1950}} under "The Argument from Consciousness"</ref> Turing's test extends this polite convention to machines:
* ''If a machine acts as intelligently as a human being, then it is as intelligent as a human being.''
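The experimental design can be made concrete in a few lines of code. The sketch below is only an illustration of the chat-room version described above, not anything from Turing's paper; the <code>ask</code>, <code>guess</code>, <code>human</code> and <code>machine</code> interfaces are invented for the example. By the pass criterion, a program succeeds if, over many such trials, judges identify the human no more often than chance.
<source lang="python">
import random

def run_imitation_game(ask, guess, human, machine, rounds=5):
    """One trial of the chat-room version of the Turing test.

    ask(transcript) -> next question; guess(transcript) -> "A" or "B";
    human(question) and machine(question) -> reply strings.
    Returns True if the judge picked out the human correctly.
    """
    # Hide the two participants behind anonymous labels, in random order.
    players = [("A", human), ("B", machine)]
    random.shuffle(players)
    transcript = []
    for _ in range(rounds):
        question = ask(transcript)
        for label, respond in players:
            transcript.append((label, question, respond(question)))
    return dict(players)[guess(transcript)] is human

# Toy participants, only to make the trial runnable: a judge facing
# indistinguishable replies can do no better than a coin flip.
correct = run_imitation_game(
    ask=lambda t: "What is 12 times 12?",
    guess=lambda t: random.choice("AB"),
    human=lambda q: "144, I think.",
    machine=lambda q: "144, I think.",
)
print("judge identified the human:", correct)
</source>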
He writes "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks."<ref>{{Harvnb|Turing|1950}} under "The Argument from Consciousness"</ref> Turing's test extends this polite convention to machines: * ''If a machine acts as intelligently as human being, then it is as intelligent as a human being.'' ==== Human intelligence vs. intelligence in general ==== One criticism of the [[Turing test]] is that it is explicitly [[anthropomorphic]]. If our ultimate goal is to create machines that are ''more'' intelligent than people, why should we insist that our machines must closely ''resemble'' people? [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]] write that "aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons.'"<ref>{{Harvnb|Russell|Norvig|2003|p=3}}</ref> Recent AI research defines intelligence in terms of [[rational agent]]s or [[intelligent agent]]s. An "agent" is something which perceives and acts in an environment. A "performance measure" defines what counts as success for the agent.<ref>{{Harvnb|Russell|Norvig|2003|p=4-5, 32, 35, 36 and 56}}</ref> * ''If an agent acts so as maximize the expected value of a performance measure based on past experience and knowledge then it is intelligent.''<ref>Russell and Norvig would prefer the word "rational" to "intelligent".</ref> Definitions like this one try to capture the essence of intelligence. They have the advantage that, unlike the [[Turing test]], they don't also test for human traits that we may not want to consider intelligent, like the ability to be insulted or the temptation to lie. They have the disadvantage that they fail to make the commonsense differentiation between "things that think" and "things that don't". By this definition, even a thermostat has a rudimentary intelligence. ===Arguments that a machine can display general intelligence=== ====The brain can be simulated==== {{Main|artificial brain}} [[Image:MRI.ogg|thumbtime=2|thumb|240px|An [[MRI]] scan of a normal adult human brain]] [[Marvin Minsky]] writes that "if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then .... we ... ought to be able to reproduce the behavior of the nervous system with some physical device."<ref>{{Harvnb|Crevier|1993|p=125}}</ref> This argument, first introduced as early as 1943<ref>{{Harvnb|Pitts|McCullough|1943}}</ref> and vividly described by [[Hans Moravec]] in 1988,<ref>{{Harvnb|Moravec|1988}}</ref> is now associated with futurist [[Ray Kurzweil]], who estimates that computer power will be sufficient for a complete brain simulation by the year 2029.<ref>{{Harvnb|Kurzweil|2005|p=262}}. Also see {{Harvnb|Russell|Norvig|p=957}} and {{Harvnb|Crevier|1993|pp=271 and 279}}. 
=== Arguments that a machine can display general intelligence ===
==== The brain can be simulated ====
{{Main|artificial brain}}
[[Image:MRI.ogg|thumbtime=2|thumb|240px|An [[MRI]] scan of a normal adult human brain]]
[[Marvin Minsky]] writes that "if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then .... we ... ought to be able to reproduce the behavior of the nervous system with some physical device."<ref>{{Harvnb|Crevier|1993|p=125}}</ref> This argument, first introduced as early as 1943<ref>{{Harvnb|McCulloch|Pitts|1943}}</ref> and vividly described by [[Hans Moravec]] in 1988,<ref>{{Harvnb|Moravec|1988}}</ref> is now associated with futurist [[Ray Kurzweil]], who estimates that computer power will be sufficient for a complete brain simulation by the year 2029.<ref>{{Harvnb|Kurzweil|2005|p=262}}. Also see {{Harvnb|Russell|Norvig|2003|p=957}} and {{Harvnb|Crevier|1993|pp=271 and 279}}. The most extreme form of this argument (the brain replacement scenario) was put forward by [[Clark Glymour]] in the mid-70s and was touched on by [[Zenon Pylyshyn]] and [[John Searle]] in 1980.</ref>

Few disagree that a brain simulation is possible in theory, even critics of AI such as [[Hubert Dreyfus]] and [[John Searle]].<ref>[[Hubert Dreyfus]] writes: "In general, by accepting the fundamental assumptions that the nervous system is part of the physical world and that all physical processes can be described in a mathematical formalism which can in turn be manipulated by a digital computer, one can arrive at the strong claim that the behavior which results from human 'information processing,' whether directly formalizable or not, can always be indirectly reproduced on a digital machine." {{Harv|Dreyfus|1972|pp=194-5}}. [[John Searle]] writes: "Could a man-made machine think? Assuming it is possible to produce artificially a machine with a nervous system, ... the answer to the question seems to be obviously, yes ... Could a digital computer think? If by 'digital computer' you mean anything at all that has a level of description where it can be correctly described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think." {{Harv|Searle|1980|p=11}}</ref> However, Searle points out that, in principle, ''anything'' can be simulated by a computer, and so any process at all can be considered "computation", if you're willing to stretch the definition to the breaking point. "What we wanted to know is what distinguishes the mind from thermostats and livers," he writes.<ref>{{Harvnb|Searle|1980|p=7}}</ref> Any argument that involves simply copying a brain is an argument that admits that we know nothing about how intelligence works: "If we had to know how the brain worked to do AI, we wouldn't bother with AI."<ref>{{Harvnb|Searle|1980|p=14}}</ref>

==== Human thinking is symbol processing ====
{{Main|physical symbol system}}
In 1963, [[Allen Newell]] and [[Herbert Simon]] proposed that "symbol manipulation" was the essence of both human and machine intelligence. They wrote:
* ''A [[physical symbol system]] has the necessary and sufficient means for general intelligent action.''<ref name=NS/>
This claim is very strong: it implies both that human thinking is a kind of symbol manipulation (because a symbol system is ''necessary'' for intelligence) and that machines can be intelligent (because a symbol system is ''sufficient'' for intelligence).<ref>[[John Searle|Searle]] writes "I like the straightforwardness of the claim." {{Harvnb|Searle|1980|p=4}}</ref> Another version of this position was described by philosopher [[Hubert Dreyfus]], who called it "the psychological assumption":
* ''The mind can be viewed as a device operating on bits of information according to formal rules.''<ref>{{Harvnb|Dreyfus|1979|p=156}}</ref>
A distinction is usually made between the kind of high-level symbols that directly correspond to objects in the world, such as <nowiki><dog></nowiki> and <nowiki><tail></nowiki>, and the more complex "symbols" that are present in a machine like a [[neural network]].
Early research into AI, called "good old-fashioned artificial intelligence" ([[GOFAI]]) by [[John Haugeland]], focused on these kinds of high-level symbols.<ref>{{Harvnb|Haugeland|1985|p=5}}</ref>

==== Arguments against symbol processing ====
These arguments show that human thinking does not consist (solely) of high-level symbol manipulation. They do ''not'' show that artificial intelligence is impossible, only that more than symbol processing is required.

===== Lucas, Penrose and Gödel =====
In 1931, [[Kurt Gödel]] proved that for any sufficiently powerful, consistent [[formal system]] (such as an AI program) it is always possible to construct [[proposition|statements]] that are true but that the system cannot prove. A human being, however, can (with some thought) see the truth of these "Gödel statements". This convinced philosopher [[John Lucas (philosopher)|John Lucas]] that human reason would always be superior to machines.<ref name=L>{{Harvnb|Lucas|1961}}, {{Harvnb|Russell|Norvig|2003|pp=949-950}}, {{Harvnb|Hofstadter|1979|pp=471-473,476-477}}, {{Harvnb|Turing|1950}} under "The Argument from Mathematics"</ref> He wrote "[[Gödel's incompleteness theorem|Gödel's theorem]] seems to me to prove that [[mechanism]] is false, that is, that minds cannot be explained as machines."<ref>{{Harvnb|Lucas|1961|pp=57-9}}</ref> [[Roger Penrose]] expanded on this argument in his 1989 book ''[[The Emperor's New Mind]]'', where he speculated that [[quantum mechanical]] processes inside individual neurons give humans this special advantage over machines.<ref>{{Harvnb|Penrose|1989}}</ref>

[[Douglas Hofstadter]], in his [[Pulitzer Prize]]-winning book ''[[Gödel, Escher, Bach: An Eternal Golden Braid]],'' explains that these "Gödel statements" always refer to the system itself, similar to the way the [[Epimenides paradox]] uses statements that refer to themselves, such as "this statement is false" or "I am lying".<ref>{{Harvnb|Hofstadter|1979}}</ref> But, of course, the [[Epimenides paradox]] applies to anything that makes statements, whether machine ''or'' human, even Lucas himself. Consider:
* ''Lucas can't assert the truth of this statement.''<ref>According to {{Harvnb|Hofstadter|1979|pp=476-477}}, this statement was first proposed by [[C. H. Whitely]]</ref>
This statement is true but can't be asserted by Lucas. This shows that Lucas himself is subject to the same limits that he describes for machines, as are all people, and so [[John Lucas (philosopher)|Lucas]]'s argument proves nothing.<ref>{{Harvnb|Hofstadter|1979|pp=476-477}}, {{Harvnb|Russell|Norvig|2003|p=950}}, {{Harvnb|Turing|1950}} under "The Argument from Mathematics" where he writes "although it is established that there are limitations to the powers of any particular machine, it has only been stated, without any sort of proof, that no such limitations apply to the human intellect."</ref>
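Hofstadter's point about self-reference can be put in modern logical notation, which the sources above do not themselves use. For a consistent formal system ''F'' strong enough for arithmetic, the Gödel sentence ''G''<sub>''F''</sub> is constructed so that, within ''F'' itself, it is provably equivalent to the claim that ''F'' cannot prove it:
:<math>F \vdash \; G_F \leftrightarrow \neg \mathrm{Prov}_F(\# G_F)</math>
where <math>\# G_F</math> is the Gödel number (the numerical code) of <math>G_F</math>. If ''F'' is consistent, it cannot prove <math>G_F</math>, even though <math>G_F</math> is true. The Whitely sentence about Lucas has exactly the same self-referential shape.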
Further, [[Stuart Russell|Russell]] and [[Peter Norvig|Norvig]] note that Gödel's argument only applies to what can theoretically be proved, given an infinite amount of memory and time. In practice, real machines (including humans) have finite resources and will have difficulty proving many theorems. It is not necessary to prove everything in order to be intelligent.<ref>{{Harvnb|Russell|Norvig|2003|p=950}}. They point out that real machines with finite memory can be modeled using [[propositional logic]], which is formally [[decidable]], and Gödel's argument does not apply to them at all.</ref>

===== Dreyfus: the primacy of unconscious skills =====
{{Main|Dreyfus' critique of artificial intelligence}}
[[Hubert Dreyfus]] argued that human intelligence and expertise depend primarily on unconscious instincts rather than conscious symbolic manipulation, and that these unconscious skills would never be captured in formal rules.<ref name=D>{{Harvnb|Dreyfus|1972}}, {{Harvnb|Dreyfus|1979}}, {{Harvnb|Dreyfus|Dreyfus|1986}}. See also {{Harvnb|Russell|Norvig|2003|pp=950-952}}, {{Harvnb|Crevier|1993|pp=120-132}} and {{Harvnb|Fearn|2007|pp=50-51}}</ref>

[[Hubert Dreyfus|Dreyfus]]'s argument had been anticipated by [[Alan Turing|Turing]] in his 1950 paper ''[[Computing machinery and intelligence]]'', where he had classified it as the "argument from the informality of behavior."<ref>{{Harvnb|Russell|Norvig|2003|pp=950-51}}</ref> Turing argued in response that, just because we do not know the rules that govern a complex behavior, this does not mean that no such rules exist. He wrote: "we cannot so easily convince ourselves of the absence of complete laws of behaviour ... The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, 'We have searched enough. There are no such laws.'"<ref>{{Harvnb|Turing|1950}} under "(8) The Argument from the Informality of Behavior"</ref>

[[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]] point out that, in the years since Dreyfus published his critique, progress has been made towards discovering the "rules" that govern unconscious reasoning.<ref>{{Harvnb|Russell|Norvig|2003|p=52}}</ref> The [[situated]] movement in [[robotics]] research attempts to capture our unconscious skills at perception and attention.<ref>See {{Harvnb|Brooks|1990}} and {{Harvnb|Moravec|1988}}</ref> [[Computational intelligence]] paradigms, such as [[neural net]]s and [[evolutionary algorithm]]s, are mostly directed at simulating unconscious reasoning and learning. Research into [[commonsense knowledge]] has focused on reproducing the "background" or context of knowledge. In fact, AI research in general has moved away from high-level symbol manipulation, or "[[GOFAI]]", towards new models that are intended to capture more of our ''unconscious'' reasoning. Historian and AI researcher [[Daniel Crevier]] wrote that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."<ref>{{Harvnb|Crevier|1993|p=125}}</ref>

== Can a machine have a mind, consciousness and mental states? ==<!-- This title is linked to from [[Turing test]] -->
This is a philosophical question, related to the [[problem of other minds]] and the [[hard problem of consciousness]]. The question revolves around a position defined by [[John Searle]] as "strong AI":
* ''A physical symbol system can have a mind and mental states.''<ref name=SWAI/>
Searle distinguished this position from what he called "weak AI":
* ''A physical symbol system can act intelligently.''<ref name=SWAI/>
[[John Searle|Searle]] introduced the terms to isolate strong AI from weak AI so he could focus on what he thought was the more interesting and debatable issue.
He argued that ''even if we assume'' that we had a computer program that acted exactly like a human mind, there would still be a difficult philosophical question that needed to be answered.<ref name=SWAI/>

Neither of Searle's two positions is of great concern to AI research, since they do not directly answer the question "can a machine display general intelligence?" (unless it can also be shown that consciousness is ''necessary'' for intelligence).<ref>There are a few researchers who believe that consciousness is an essential element in intelligence, such as [[Igor Aleksander]], [[Stan Franklin]], [[Ron Sun]] and [[Pentti Haikonen]], although their definition of "consciousness" strays very close to "intelligence." See [[artificial consciousness]].</ref> [[Alan Turing|Turing]] wrote "I do not wish to give the impression that I think there is no mystery about consciousness ... [b]ut I do not think these mysteries necessarily need to be solved before we can answer the question [of whether machines can think]."<ref name=T4>{{Harvnb|Turing|1950}} under "(4) The Argument from Consciousness". See also {{Harvnb|Russell|Norvig|2003|pp=952-3}}, where they identify Searle's argument with Turing's "Argument from Consciousness."</ref> [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]] agree: "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."<ref>{{Harvnb|Russell|Norvig|2003|p=947}}</ref>

Before we can answer this question, we must be clear about what we mean by "minds", "mental states" and "consciousness".

=== Consciousness, minds, mental states, meaning ===
[[Image:RobertFuddBewusstsein17Jh.png|thumb|A 17th-century representation of consciousness.]]
The words "[[mind]]" and "[[consciousness]]" are used by different communities in different ways. Some [[new age]] thinkers, for example, use the word "consciousness" to describe something similar to [[Bergson]]'s "[[élan vital]]": an invisible, energetic fluid that permeates life and especially the mind. [[Science fiction]] writers use the word to describe some [[essentialism|essential]] property that makes us human: a machine or alien that is "conscious" will be presented as a fully human character, with intelligence, desires, will, insight, pride and so on. (Science fiction writers also use the words "sentience", "sapience", "self-awareness" or "ghost" (as in the ''[[Ghost in the Shell]]'' manga and anime series) to describe this essential human property.) For others, the words "mind" or "consciousness" are used as a kind of secular synonym for the [[soul]].

For [[philosophy|philosophers]], [[neuroscience|neuroscientists]] and [[cognitive science|cognitive scientists]], the words are used in a way that is both more precise and more mundane: they refer to the familiar, everyday experience of having a "thought in your head", like a perception, a dream, an intention or a plan, and to the way we ''know'' something, or ''mean'' something or ''understand'' something. "It's not hard to give a commonsense definition of consciousness" observes philosopher [[John Searle]].<ref>"[P]eople always tell me it was very hard to define consciousness, but I think if you're just looking for the kind of commonsense definition that you get at the beginning of the investigation, and not at the hard nosed scientific definition that comes at the end, it's not hard to give commonsense definition of consciousness."
[http://www.abc.net.au/rn/philosopherszone/stories/2006/1639491.htm The Philosopher's Zone: The question of consciousness]. Also see {{Harvnb|Dennett|1991}}</ref> What is mysterious and fascinating is not so much ''what'' it is but ''how'' it is: how does a lump of fatty tissue and electricity give rise to this (familiar) experience of perceiving, meaning or thinking?

[[Philosopher]]s call this the [[hard problem of consciousness]]. It is the latest version of a classic problem in the [[philosophy of mind]] called the "[[mind-body problem]]."<ref>{{Harvnb|Blackmore|2005|p=2}}</ref> A related problem is the problem of ''meaning'' or ''understanding'' (which philosophers call "[[intentionality]]"): what is the connection between our ''thoughts'' (i.e. patterns of neurons) and ''what we are thinking about'' (i.e. objects and situations out in the world)? A third issue is the problem of ''experience'' (or "[[phenomenology]]"): if two people see the same thing, do they have the same experience? Or are there things "inside their heads" (called "[[qualia]]") that can be different from person to person?<ref>{{Harvnb|Russell|Norvig|2003|pp=954-956}}</ref>

[[Neurobiologist]]s believe all these problems will be solved as we begin to identify the [[neural correlates of consciousness]]: the actual machinery in our heads that creates the mind, experience and understanding. Even the harshest critics of [[artificial intelligence]] agree that the brain is just a machine, and that consciousness and intelligence are the result of physical processes in the brain.<ref>For example, [[John Searle]] writes: "Can a machine think? The answer is, obviously, yes. We are precisely such machines." {{Harv|Searle|1980|p=11}}</ref> The difficult philosophical question is this: can a computer program, running on a digital machine that shuffles the binary digits of zero and one, duplicate the ability of the [[neural correlates of consciousness|neurons]] to create [[mind]]s, with [[mental state (philosophy)|mental state]]s (like understanding or perceiving), and ultimately, the experience of [[consciousness]]?

=== Arguments that a computer can't have a mind and mental states ===
==== Searle's Chinese room ====
{{Main|Chinese room}}
[[John Searle]] asks us to consider a thought experiment: suppose we have written a computer program that passes the Turing test and demonstrates "general intelligent action." Suppose, specifically, that the program can converse in fluent Chinese. Write the program on 3x5 cards and give them to an ordinary person. Lock the person in a room and have him follow the instructions on the cards. He will copy out Chinese characters and pass them in and out of the room through a slot. From the outside, it will appear that the Chinese room contains a fully intelligent person who speaks Chinese. The question is this: is there anyone (or anything) in the room that understands Chinese? That is, is there anything that has the [[mental state (philosophy)|mental state]] of [[understanding]], or which has [[consciousness|conscious]] [[awareness]] of what is being discussed in Chinese? The man is clearly not aware. The room can't be aware. The ''cards'' certainly aren't aware. [[John Searle|Searle]] concludes that the [[Chinese room]], or ''any'' other [[physical symbol system]], cannot have a [[mind]].<ref>{{Harvnb|Searle|1980}}.
See also {{Harvnb|Cole|2004}}, {{Harvnb|Russell|Norvig|2003|pp=958-960}}, {{Harvnb|Crevier|1993|pp=269-272}} and {{Harvnb|Fearn|2007|pp=43-50}}</ref>

[[John Searle|Searle]] goes on to argue that actual [[mental state (philosophy)|mental state]]s and [[consciousness]] require (as yet undescribed) "actual physical-chemical properties of actual human brains."<ref>{{Harvnb|Searle|1980|p=13}}</ref> He argues there are special "causal properties" of [[brain]]s and [[neuron]]s that give rise to [[mind]]s: in his words, "brains cause minds."<ref>{{Harvnb|Searle|1984}}</ref>

==== Related arguments: Leibniz' mill, Block's telephone exchange and blockhead ====
[[Gottfried Leibniz]] made essentially the same argument as [[John Searle|Searle]] in 1714, using the thought experiment of expanding the brain until it was the size of a [[mill (factory)|mill]].<ref>{{Harvnb|Cole|2004|loc=2.1}}, {{Harvnb|Leibniz|1714|loc=17}}</ref> In 1974, [[Lawrence Davis]] imagined duplicating the brain using telephone lines and offices staffed by people, and in 1978 [[Ned Block]] envisioned the entire population of China involved in such a brain simulation. This thought experiment is called "the Chinese Nation" or "the Chinese Gym".<ref>{{Harvnb|Cole|2004|loc=2.3}}</ref> [[Ned Block]] also proposed his "[[blockhead]]" argument, a version of the [[Chinese room]] in which the program has been [[code refactoring|re-factored]] into a simple set of rules of the form "see this, do that", removing all mystery from the program.
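A hypothetical fragment of such a rule set, written out as code, shows how little "mystery" remains once the refactoring is done (the particular rules are invented for illustration; Block's argument does not depend on them):
<source lang="python">
# Block's "see this, do that" refactoring, reduced to a bare lookup
# table. A full "blockhead" would need an astronomically large table.
RULES = {
    "你好吗？": "很好，谢谢。",          # "How are you?" -> "Fine, thanks."
    "今天天气怎么样？": "天气很好。",    # "How's the weather?" -> "It's nice."
}

def blockhead(seen):
    # "See this, do that": retrieval without any understanding.
    return RULES.get(seen, "对不起，请再说一遍。")  # "Sorry, say that again."

print(blockhead("你好吗？"))
</source>
The philosophical question is whether running such a table, at whatever scale, could ever amount to understanding.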
==== Responses to the Chinese Room ====
Responses to the Chinese room emphasize several different points.
# '''The systems reply''' and the '''virtual mind reply''':<ref>{{Harvnb|Searle|1980}} under "1. The Systems Reply (Berkeley)", {{Harvnb|Crevier|1993|p=269}}, {{Harvnb|Russell|Norvig|2003|p=959}}, {{Harvnb|Cole|2004|loc=4.1}}. Among those who hold to the "system" position (according to Cole) are [[Ned Block]], [[Jack Copeland]], [[Daniel Dennett]], [[Jerry Fodor]], [[John Haugeland]], [[Ray Kurzweil]] and [[Georges Rey]]. Those who have defended the "virtual mind" reply include [[Marvin Minsky]], [[Alan Perlis]], [[David Chalmers]], [[Ned Block]] and J. Cole (again, according to {{Harvnb|Cole|2004}})</ref> This reply argues that ''the system'', including the man, the program, the room, and the cards, is what understands Chinese. Searle claims that the man in the room is the only thing which could possibly "have a mind" or "understand", but others disagree, arguing that it is possible for there to be ''two'' minds in the same physical place, similar to the way a computer can simultaneously "be" two machines at once: one physical (like a [[Macintosh]]) and one "[[virtual machine|virtual]]" (like a [[word processor]]).
# '''Speed, power and complexity replies''':<ref>{{Harvnb|Cole|2004|loc=4.2}} ascribes this position to [[Ned Block]], [[Daniel Dennett]], [[Tim Maudlin]], [[David Chalmers]], [[Steven Pinker]], [[Patricia Churchland]] and others.</ref> Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions. This brings the clarity of Searle's intuition into doubt.
# '''Robot reply''':<ref>{{Harvnb|Searle|1980}} under "2. The Robot Reply (Yale)". {{Harvnb|Cole|2004|loc=4.3}} ascribes this position to [[Margaret Boden]], [[Tim Crane]], [[Daniel Dennett]], [[Jerry Fodor]], [[Stevan Harnad]], [[Hans Moravec]] and [[Georges Rey]]</ref> To truly understand, some believe the Chinese Room needs eyes and hands. [[Hans Moravec]] writes: "If we could graft a robot to a reasoning program, we wouldn't need a person to provide the meaning anymore: it would come from the physical world."<ref>Quoted in {{Harvnb|Crevier|1993|p=272}}</ref>
# '''Brain simulator reply''':<ref>{{Harvnb|Searle|1980}} under "3. The Brain Simulator Reply (Berkeley and M.I.T.)" {{Harvnb|Cole|2004}} ascribes this position to [[Paul Churchland|Paul]] and [[Patricia Churchland]] and [[Ray Kurzweil]]</ref> What if the program simulates the sequence of nerve firings at the synapses of an actual brain of an actual Chinese speaker? The man in the room would then be simulating an actual brain. This is a variation on the "systems reply" that appears more plausible because "the system" now clearly operates like a human brain, which strengthens the intuition that there is something besides the man in the room that could understand Chinese.
# '''Other minds reply''' and the '''epiphenomena reply''':<ref>{{Harvnb|Searle|1980}} under "5. The Other Minds Reply", {{Harvnb|Cole|2004|loc=4.4}}. {{Harvnb|Turing|1950}} makes this reply under "(4) The Argument from Consciousness." Cole ascribes this position to [[Daniel Dennett]] and [[Hans Moravec]].</ref> Several people have noted that Searle's argument is just a version of the [[problem of other minds]], applied to machines. Since it is difficult to decide if people are "actually" thinking, we should not be surprised that it is difficult to answer the same question about machines. A related idea is that Searle's "causal properties" of neurons are [[epiphenomenal]]: they have no effect on the real world. Why would natural selection create them in the first place, if they make no difference to behavior?

== Is thinking a kind of computation? ==
{{Main|computational theory of mind}}
This issue is of primary importance to [[cognitive science|cognitive scientists]], who study the nature of human thinking and problem solving. The [[computational theory of mind]], or "[[computationalism]]", claims that the relationship between mind and body is similar (if not identical) to the relationship between a ''running program'' and a computer. The idea has philosophical roots in [[Hobbes]] (who claimed reasoning was "nothing more than reckoning"), [[Leibniz]] (who attempted to create a logical calculus of all human ideas), [[Hume]] (who thought perception could be reduced to "atomic impressions") and even [[Kant]] (who analyzed all experience as controlled by formal rules).<ref>{{Harvnb|Dreyfus|1979|p=156}}, {{Harvnb|Haugeland|1985|pp=15-44}}</ref> The latest version is associated with philosophers [[Hilary Putnam]] and [[Jerry Fodor]].<ref>{{Harvnb|Horst|2005}}</ref>

This question bears on our earlier questions: if the human brain is a kind of computer, then computers can be both intelligent and conscious, answering both the practical and philosophical questions of AI. In terms of the practical question of AI ("Can a machine display general intelligence?"), some versions of [[computationalism]] make the claim that (as [[Hobbes]] wrote):
* ''Reasoning is nothing but reckoning''<ref name=H/>
In other words, our intelligence derives from a form of ''calculation'', similar to [[arithmetic]].
This is the [[physical symbol system]] hypothesis discussed above, and it implies that artificial intelligence is possible. In terms of the philosophical question of AI ("Can a machine have mind, mental states and consciousness?"), most versions of [[computationalism]] claim that (as [[Stevan Harnad]] characterizes it):
* ''Mental states are just implementations of (the right) computer programs''<ref name=HARNAD>{{Harvnb|Harnad|2001}}</ref>
This is [[John Searle]]'s "strong AI" discussed above, and it is the real target of the [[Chinese Room]] argument (according to [[Stevan Harnad|Harnad]]).<ref name=HARNAD/>

== Other related questions ==
[[Alan Turing]] noted that there are many arguments of the form "a machine will never do X", where X can be many things, such as:
<blockquote>Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.<ref name=T5>{{Harvnb|Turing|1950}} under "(5) Arguments from Various Disabilities"</ref></blockquote>
Turing argues that these objections are often based on naive assumptions about the versatility of machines or are "disguised forms of the argument from consciousness". Writing a program that exhibits one of these behaviors "will not make much of an impression."<ref name=T5/> All of these arguments are tangential to the basic premise of AI, unless it can be shown that one of these traits is essential for general intelligence.

=== Can a machine have emotions? ===
[[Hans Moravec]] believes that "robots in general will be quite emotional about being nice people"<ref name=CQ266>Quoted in {{Harvnb|Crevier|1993|p=266}}</ref> and describes emotions in terms of the behaviors they cause. Fear is a source of urgency. Empathy is a necessary component of good [[human computer interaction]]. He says robots "will try to please you in an apparently selfless manner because it will get a thrill out of this positive reinforcement. You can interpret this as a kind of love."<ref name=CQ266/> [[Daniel Crevier]] writes "Moravec's point is that emotions are just devices for channeling behavior in a direction beneficial to the survival of one's species."<ref>{{Harvnb|Crevier|1993|p=266}}</ref> The question of whether the machine ''actually feels'' an emotion, or merely acts as if it feels one, is the philosophical question "can a machine be conscious?" in another form.<ref name=T4/>

=== Can a machine be self-aware? ===
"Self-awareness", as noted above, is sometimes used by [[science fiction]] writers as a name for the [[essentialism|essential]] human property that makes a character fully human. [[Alan Turing|Turing]] strips away all other properties of human beings and reduces the question to "can a machine be the subject of its own thought?" Can it ''think about itself''? Viewed in this way, it is obvious that a program can be written that reports on its own internal states, as a [[debugger]] does.<ref name=T5/>
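In that minimal sense, self-report is routine. The following sketch (an illustration of Turing's point only, with no claim about consciousness) is a program that is "the subject of its own thought" in the way a debugger is: it reads out and reports its own internal state.
<source lang="python">
import inspect

def report_own_state():
    goal = "answer the question"
    step = 3
    partial_answer = [1, 4, 9]
    # Take a snapshot of this function's own execution frame and
    # report on its internal states, as a debugger would.
    snapshot = dict(inspect.currentframe().f_locals)
    for name, value in snapshot.items():
        print(f"I am currently holding {name} = {value!r}")

report_own_state()
</source>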
=== Can a machine be original or creative? ===
Turing reduces this to the question of whether a machine can "take us by surprise" and argues that this is obviously true, as any programmer can attest.<ref>{{Harvnb|Turing|1950}} under "(6) Lady Lovelace's Objection"</ref> He notes that, with enough storage capacity, a computer can behave in an astronomical number of different ways.<ref>{{Harvnb|Turing|1950}} under "(5) Arguments from Various Disabilities"</ref> It must be possible, even trivial, for a computer that can represent ideas to combine them in new ways. ([[Douglas Lenat]]'s [[Automated Mathematician]], as one example, combined ideas to discover new mathematical truths.)
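Lenat's actual system was far richer, but a toy in the same spirit (everything below, including the list of operations, is invented for illustration) shows the mechanism: represent ideas as data, combine them mechanically into new candidate statements, and keep the ones that survive testing.
<source lang="python">
import random

# Combine known operations into new candidate statements ("conjectures")
# and keep those that survive empirical testing.
OPERATIONS = {
    "+": lambda a, b: a + b,
    "*": lambda a, b: a * b,
    "-": lambda a, b: a - b,
    "max": lambda a, b: max(a, b),
}

def conjecture_commutativity(trials=1000):
    kept = []
    for name, op in OPERATIONS.items():
        cases = [(random.randint(0, 99), random.randint(0, 99))
                 for _ in range(trials)]
        if all(op(a, b) == op(b, a) for a, b in cases):
            kept.append(f"conjecture: {name} is commutative")
    return kept

print(conjecture_commutativity())  # "-" is (almost surely) rejected
</source>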
=== Can a machine have a soul? ===
Finally, those who believe in the existence of a soul may argue that:
* ''Thinking is a function of man's immortal soul''
[[Alan Turing]] called this "the theological objection" and writes:
<blockquote>In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.<ref>{{Harvnb|Turing|1950}} under "(1) The Theological Objection", although it should be noted that he also writes "I am not very impressed with theological arguments whatever they may be used to support"</ref></blockquote>

== See also ==
{{MultiCol}}
*[[Artificial intelligence]]
*[[Philosophy of information]]
*[[Philosophy of mind]]
*[[Brain#Other matters|Brain (other matters section)]]
*[[Computational theory of mind]]
*[[Functionalism]]
{{ColBreak}}
*[[Turing Test]]
*[[Artificial brain]]
*[[Physical symbol system]]
*[[Dreyfus' critique of artificial intelligence]]
*[[Chinese room]]
*[[Computing Machinery and Intelligence]]
{{EndMultiCol}}

== Notes ==
{{reflist}}

== References ==
* {{Citation | last=Blackmore | first=Susan | year=2005 | title=Consciousness: A Very Short Introduction | publisher=Oxford University Press | author-link=Susan Blackmore }}
* {{Citation | last=Brooks | first=Rodney | author-link=Rodney Brooks | year=1990 | title=Elephants Don't Play Chess | journal=Robotics and Autonomous Systems | volume=6 | pages=3-15 | url=http://people.csail.mit.edu/brooks/papers/elephants.pdf | accessdate=2007-08-30}}
* {{Citation | last=Cole | first=David | year=2004 | contribution=The Chinese Room Argument | title=The Stanford Encyclopedia of Philosophy | date=Fall 2004 | editor-first=Edward N. | editor-last=Zalta | url=http://plato.stanford.edu/archives/fall2004/entries/chinese-room/ }}.
* {{Crevier 1993}}
* {{Citation | last=Dennett | first=Daniel | author-link=Daniel Dennett | year=1991 | title=[[Consciousness Explained]] | publisher=The Penguin Press | isbn=0-7139-9037-6}}
* {{Citation | last=Dreyfus | first=Hubert | year=1972 | title=[[What Computers Can't Do]] | publisher=MIT Press | location=New York | authorlink=Hubert Dreyfus | isbn=0060110821 }}
* {{Citation | last=Dreyfus | first=Hubert | year=1979 | title=What Computers ''Still'' Can't Do | publisher=MIT Press | location=New York | authorlink=Hubert Dreyfus }}.
* {{Citation | last=Dreyfus | first=Hubert | last2=Dreyfus | first2=Stuart | year=1986 | title=Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer | publisher=Blackwell | location=Oxford, UK | authorlink=Hubert Dreyfus }}
* {{Citation | last=Fearn | first=Nicholas | year=2007 | title=The Latest Answers to the Oldest Questions: A Philosophical Adventure with the World's Greatest Thinkers | publisher=Grove Press | location=New York }}
* {{Citation | last=Gladwell | first=Malcolm | title=[[Blink (book)|Blink: The Power of Thinking Without Thinking]] | location=Boston | publisher=Little, Brown | year=2005 | isbn=0-316-17232-4 | authorlink=Malcolm Gladwell}}.
* {{Citation | last=Harnad | first=Stevan | year=2001 | contribution=What's Wrong and Right About Searle's Chinese Room Argument? | editor-first=M. | editor-last=Bishop | editor2-first=J. | editor2-last=Preston | title=Essays on Searle's Chinese Room Argument | publisher=Oxford University Press | url=http://cogprints.org/4023/1/searlbook.htm | author-link=Stevan Harnad }}
* {{Citation | last=Hobbes | title=[[Leviathan]] | year=1651 | author-link=Hobbes}}.
* {{Citation | last=Hofstadter | first=Douglas | title=[[Gödel, Escher, Bach|Gödel, Escher, Bach: an Eternal Golden Braid]] | year=1979 | author-link=Douglas Hofstadter }}.
* {{Citation | last=Horst | first=Steven | year=2005 | contribution=The Computational Theory of Mind | title=The Stanford Encyclopedia of Philosophy | date=Fall 2005 | editor-first=Edward N. | editor-last=Zalta | url=http://plato.stanford.edu/archives/fall2005/entries/computational-mind/ }}.
* {{Citation | last=Kurzweil | first=Ray | title=[[The Singularity is Near]] | year=2005 | publisher=Viking Press | location=New York | authorlink=Ray Kurzweil | isbn=0-670-03384-7}}.
* {{Citation | last=Lucas | first=John | year=1961 | contribution=Minds, Machines and Gödel | editor-last=Anderson | editor-first=A.R. | title=Minds and Machines | url=http://users.ox.ac.uk/~jrlucas/Godel/mmg.html | author-link=John Lucas (philosopher)}}.
* {{Citation | last=McCarthy | first=John | last2=Minsky | first2=Marvin | last3=Rochester | first3=Nathan | last4=Shannon | first4=Claude | url=http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html | title=A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence | year=1955 | author-link=John McCarthy (computer scientist) | author2-link=Marvin Minsky | author3-link=Nathan Rochester | author4-link=Claude Shannon}}.
* {{Citation | last=McDermott | first=Drew | year=1997 | title=How Intelligent is Deep Blue | url=http://www.psych.utoronto.ca/~reingold/courses/ai/cache/mcdermott.html | newspaper=New York Times | date=May 14, 1997}}
* {{Citation | last=Moravec | first=Hans | year=1988 | title=Mind Children | publisher=Harvard University Press | author-link=Hans Moravec }}
* {{Citation | last=Newell | first=Allen | last2=Simon | first2=H. A. | year=1963 | contribution=GPS: A Program that Simulates Human Thought | title=Computers and Thought | editor-last=Feigenbaum | editor-first=E.A. | editor2-last=Feldman | editor2-first=J.
| publisher=McGraw-Hill | location=New York | author-link=Allen Newell | author-link2=Herbert Simon}}
* {{Russell Norvig 2003}}
* {{Citation | last=Penrose | first=Roger | title=[[The Emperor's New Mind|The Emperor's New Mind: Concerning Computers, Minds, and The Laws of Physics]] | publisher=Oxford University Press | year=1989 | isbn=0-14-014534-6 | author-link=Roger Penrose}}
* {{Citation | last=Searle | first=John | year=1980 | url=http://members.aol.com/NeoNoetics/MindsBrainsPrograms.html | title=Minds, Brains and Programs | journal=Behavioral and Brain Sciences | volume=3 | issue=3 | pages=417-457 | author-link=John Searle}}
* {{Citation | last=Searle | first=John | year=1992 | title=The Rediscovery of the Mind | publisher=M.I.T. Press | location=Cambridge, Massachusetts}}
* {{Citation | last=Turing | first=Alan | year=1950 | title=[[Computing machinery and intelligence]] | journal=Mind | issn=0026-4423 | volume=LIX | issue=236 | date=October 1950 | pages=433-460 | url=http://loebner.net/Prizef/TuringArticle.html | authorlink=Alan Turing | doi=10.1093/mind/LIX.236.433}}

== External links ==
*[http://www.shawnkilmer.com/?p=92 Research Paper: Philosophy of Consciousness and Ethics In Artificial Intelligence]

[[Category:Philosophy of artificial intelligence| ]]

[[fa:فلسفه هوش مصنوعی]]