{"id":4466,"date":"2018-10-07T00:52:38","date_gmt":"2018-10-07T04:52:38","guid":{"rendered":"https:\/\/www.amyork.ca\/academic\/zz\/?p=4466"},"modified":"2018-10-07T01:11:29","modified_gmt":"2018-10-07T05:11:29","slug":"minds-brains-and-programs-john-serle","status":"publish","type":"post","link":"https:\/\/www.amyork.ca\/academic\/zz\/cognitive-psychology\/minds-brains-and-programs-john-serle\/","title":{"rendered":"Minds, Brains and Programs (John Serle)"},"content":{"rendered":"
-The principal value of the computer in the study of the mind is that it gives us a very powerful tool
-e.g. it enables us to formulate and test hypotheses in a precise fashion

Strong AI, by contrast, claims:
-The computer is not merely a tool in the study of the mind
-The appropriately programmed computer really IS a mind and has cognitive states
-The programs are themselves the explanations, not tools for finding the explanations
-The aim of the program Searle discusses (Schank's story-understanding program) is to simulate the human ability to understand stories
-i.e. if asked about a restaurant story, the program can look through its stored representations of restaurants and answer accordingly (a sketch of this kind of lookup follows below)
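To make the kind of lookup concrete, here is a minimal Python sketch of script-based question answering in the spirit of the program described above. The restaurant script, the function name, and the matching rule are all invented for illustration; they are not taken from Schank's actual system.

```python
# Hypothetical illustration only: a toy "restaurant script" and a lookup rule,
# standing in for the stored representations the program consults.
RESTAURANT_SCRIPT = {
    "order": "the customer orders food from the menu",
    "eat": "the customer eats the food that was ordered",
    "pay": "the customer pays the bill and leaves a tip",
}

def answer(question: str) -> str:
    """Answer a question by matching it against the stored script entries."""
    for event, description in RESTAURANT_SCRIPT.items():
        if event in question.lower():
            return f"In a restaurant, {description}."
    return "The script contains no entry for that question."

# The program fills in details the story never states, purely by consulting
# the script; strong AI calls this ability "understanding" the story.
print(answer("Did the customer eat the hamburger?"))
print(answer("What happens when it is time to pay?"))
```

Everything here is string matching over stored text, which is exactly the feature Searle's argument turns on.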
Strong AI definitions:
-In this question and answer sequence, the machine not only simulates human ability but also:
-understands the story and provides the answers to questions
-explains the human ability to understand and answer questions
Searle says:
-The computer's understanding is zero.
-If a person is given English instructions for manipulating formal Chinese symbols, but knows no Chinese, does he understand Chinese?
-Searle answers "no" – he understands only his own English; the Chinese remains uninterpreted symbols to him (the systems reply objects that the whole system understands, but Searle rejects this). A toy version of this rule-following appears in the sketch below.
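The following is a toy rendering of the rule-following in the Chinese Room, with an invented two-entry rulebook: the person (or program) matches incoming Chinese symbols purely by their shape and hands back whatever symbols the rules dictate, never needing to know what any of them mean.

```python
# Invented example rulebook: maps input symbol strings to output symbol strings.
# The rules are stated over symbol shapes only; the meanings are never consulted.
RULEBOOK = {
    "你好吗": "我很好",        # the operator has no idea these mean "How are you?" / "I am fine"
    "这是什么": "这是一本书",  # ... or "What is this?" / "This is a book"
}

def operate_room(input_symbols: str) -> str:
    """Apply the rules: find the matching squiggles, return the prescribed squoggles."""
    return RULEBOOK.get(input_symbols, "对不起")  # default reply, equally uninterpreted

print(operate_room("你好吗"))  # fluent-looking Chinese out, zero understanding inside
```

From the outside the answers can look competent; Searle's claim is that this is precisely the situation of the programmed computer.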
McCarthy's View:
-Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem-solving performance.
-Searle does not accept this; for him, a view that attributes beliefs to thermostats counts against strong AI rather than for it.
The Robot Reply (Yale):
-The robot has the computer 'brain' system in it.
-Searle: all it is doing is moving around, following instructions and manipulating formal symbols.

The Brain Simulator Reply (Berkeley and MIT):
-Searle: it simulates the wrong things about the brain – it simulates only the formal structure.

The Combination Reply (Berkeley and Stanford):
-Combining the three previous replies: claims the robot has intentionality.
-But if the man controls the robot, then the robot does not have intentionality.
The Other Minds Reply (Yale):
-More than just complex behavioral attributions.
-No purely formal model will ever be sufficient by itself for intentionality, because the formal properties are not themselves constitutive of intentionality and have no causal powers.
Could a machine think?
-Searle answers yes, because we are machines.

Could an artifact, a man-made machine, think?
-If it has the same chemical and physical makeup as humans, then yes.

Could a digital computer think?
-Yes, because we as humans are instantiations of any number of computer programs, and we can think.

Could something think, understand, etc. if it is a computer with the right sort of program?
-No.
-Because the formal symbol manipulations by themselves don't have any intentionality.
Information processing:
-The computer processes information, but not in the way humans do: it manipulates formal symbols (illustrated in the sketch after this block).
-In order to accept the strong AI argument, you would have to be a dualist: you would have to hold that the mind has no intrinsic connection with the actual properties of the brain.
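One way to picture "manipulating formal symbols": the same purely syntactic operation behaves identically whether the tokens are meaningful Chinese words or arbitrary placeholders, because the program only tests and rearranges symbol shapes. This is a made-up example, not something from Searle's text.

```python
def shuffle(symbols):
    """A purely syntactic operation: reverse the order of the symbols."""
    return symbols[::-1]

# Meaningful tokens and meaningless ones are treated exactly alike,
# which is the sense in which the processing is formal rather than semantic.
print(shuffle(["我", "爱", "你"]))   # ['你', '爱', '我']
print(shuffle(["@", "#", "%"]))     # ['%', '#', '@']
```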
Early computers:
-Were called "electronic brains".