From Wikipedia, the free encyclopedia
This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
I propose here that the phrase "human-like" be included in the article lead only as a part of the broad idea of "whether human-like or not." In particular, I propose that the opening sentences of the article lead should read, "Artificial intelligence (AI) is the intelligence exhibited by machines or software. The academic field of AI studies the goal of creating intelligence, whether in emulating human-like intelligence or not." — Cheers, Steelpillow (Talk) 10:13, 22 October 2014 (UTC)
The inclusion of "human-like" in the lead has caused much contention and resulting confusion. Like many words and phrases, its precise interpretation depends upon the context in which it is used. This proposal uses it in a linguistically descriptive rather than academically definitive way, and as such its usage should not need to be cited. Any subsequent use in the article of the term "human-like", or of a similar-meaning qualifier, in a more specific context would need to be cited. — Cheers, Steelpillow (Talk) 10:15, 22 October 2014 (UTC)
The following discussion is in reply to FelixRosch's Preliminary Comment, above. ---- CharlesGillingham (talk) 19:12, 29 October 2014 (UTC)
I'm tempted to go away and leave you all to play Kilkenny cats in a hopefully convergent series, but there are a couple of items that, if they have been recognised in the foregoing, I have missed and I refuse to dig through it to find them.
Hi @Steelpillow. I agree with you practically in detail, even to the point of including the human-like aspect as an important topic. Where the wheels come off is that the human-like aspect (which very likely could earn its own article for a range of reasons, some of industrial/social importance and some of academic/philosophic importance) is not of fundamental, but of contingent importance. You could easily have a huge field of study and endeavour of and in AI, without even mentioning human-like AI. There is no reason to mention human-like AI except in context of particular lines of work. There even is room to argue about the nature of "human-like". Is Eliza human-like? You know and I know, but she fooled a lot of folks who refused to believe there wasn't someone at the other end of the terminal. The fact that there is a lot of work in that direction, and a lot of interest in it, doesn't imply that it needs discussion where the basic concepts of the field are being introduced and discussed.
Consider an analogy: suppose we had an article on automotive engineering in the 21st century, and one of the opening sentences read: "It is an academic field of study which studies the goal of creating mechanical means of transport, whether in emulating horse-like locomotion or not." Up to the final comma no one is likely to have much difficulty, but after that things go wrong, don't they? Even if we all agree that there is technical merit to studying horse-like locomotion, that is not the place, nor the context, to mention it. Even though we can argue that horse-like locomotion had been among the most important for millennia, even though less than 150 years ago people spoke of "horseless carriages" because the concept of horses was so inseparable from that of locomotion, even though we still have a lot to learn before we could make a real robot that can rival certain aspects of a horse's locomotory merits, horse-like locomotion is not the first thing we mention in such a context.
I could make just as good a case for spider-like AI as for human, but again I do not say: "It is an academic field of study which studies the goal of creating intelligence, whether in emulating spider-like intelligence or not." Is there room in the article for mentioning such things at all? Very possibly. In their place and context certainly. Not in the lede though. And possibly not in the article at all; it might go better into a related article. Universal importance is not the same as universal relevance. The way to structure the article is not by asking what the most important things are, but in asking how to articulate the topic, and though there are many ways in which it could be done, that is not one of the good ways! JonRichfield (talk) 19:56, 2 November 2014 (UTC)
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Should the phrase "human-like" be included in the first paragraph of the lede of this article as describing the purpose of the study of artificial intelligence? Robert McClenon (talk) 14:43, 2 October 2014 (UTC)
It is agreed that some artificial intelligence research, sometimes known as strong AI, does involve human-like intelligence, and some artificial intelligence research, sometimes known as weak AI, involves other types of intelligence, and these are mentioned in the body of the article. This survey has to do with what should be in the first paragraph. Robert McClenon (talk) 14:43, 2 October 2014 (UTC)
Discussion of previous RfC format
(Deleting my own edit which was intentionally distorted by RfC editor User:RobertM by re-titling its section and submerging it into the section for his own personal gain of pressing his bias for the "Weak-AI" position in this poorly formulated RfC.) FelixRosch (talk) 17:22, 6 October 2014 (UTC)
I was invited here randomly by a bot. (Though it also happens I have an academic AI background.) This RFC is flawed. Please read the RFC policy page before proceeding with this series of poorly framed requests. It makes no sense to me to have a section for including the term and a separate section for excluding the term (should everyone enter an oppose and a support in each section?). The question should be something simple and straightforward like "Should 'human-like' be included in the lead paragraph to define the topic." Then there should be a survey section where respondents can support or oppose the inclusion, and a discussion section for stuff like this rant. Please read the policy page before digging this hole any deeper. Jojalozzo 22:35, 4 October 2014 (UTC)
This is the most confusingly formatted RFC I've ever seen that wasn't immediately thrown out as gibberish. However, it doesn't look like anybody is arguing that the topic should be described as "human-like" in the lead? I'd expect to see at least one. Have the concerned parties been notified that this is ongoing? APL (talk) 06:17, 8 October 2014 (UTC)
I am getting more unhappy with that phrase "human-like". What does it signify? The lead says, "This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence," which to me presupposes human-like consciousness. OTOH here it is defined as: "The ability for machines to understand what they learn in one domain in such a way that they can apply that learning in any other domain." This makes no assumption of consciousness; it merely defines human-like behaviour.
One of the citations in the article says, "Strong AI is defined ... by Russell & Norvig (2003, p. 947): 'The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the "weak AI" hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the "strong AI" hypothesis.'" Besides begging the question as to what "simulating thinking" might be, this appears to raise the question as to whether strong vs weak is really the same distinction as human-like vs nonhuman. Like everybody else, AI researchers between them have all kinds of ideas about the nature of consciousness. I'll bet that many think that "simulating thinking" is an oxymoron, while as many others see it as a crucial issue. In other words, there is a profound difference between the scientific study and creation of AI behaviour vs. the philosophical issue as to its inner experience - a distinction long acknowledged in the study of the human mind. Which of these aspects does the phrase "human-like" refer to? One's view of oneself in this matter will strongly inform one's view of AI in like manner. I would suggest that it can refer to either according to one's personal beliefs, and rational debate can only allow the various camps to beg to differ. The phrase is therefore best either avoided in the lead or at least set in an agreed context.
Sorry to have rambled on so. — Cheers, Steelpillow (Talk) 18:21, 6 October 2014 (UTC)
I'm unhappy with it for another reason. "Artificial Intelligence" in computer science I think is now a technical term that is applied to a wide range of things. When someone writes a program to enable self driving cars - they call it artificial intelligence. See Self Driving Car: An Artificial Intelligence Approach: "Artificial Intelligence, also known as AI, is the capability of a machine to function as if the machine has the capability to think like a human. In the automotive industry, AI plays an important role in developing vehicle technology." For a machine to function as if it had the capability to think like a human - that's very different from actually emulating human-like intelligence. Deep Blue was able to do that also - to chess onlookers - it acted as if it had the capability to think like a human, at least in the limited realm of a chess game. In the case of the self driving car, or Deep Blue - you are not at all aiming to pass the Turing test or make a machine that is intelligent in the way a human is. Indeed, the goals to make a chess playing computer or a self driving car are compatible with a belief that human intelligence can't be programmed.
I actually think that way myself, persuaded by Roger Penrose's arguments - I think myself that no programmed computer will ever be able to understand mathematics in the way a mathematician does. Can never truly understand what is meant by "this statement is true" - just feign an understanding of truth, continually corrected by its programmers when it makes mistakes. His argument also extends to quantum computers and hardware neural nets. He doesn't think that hardware neural nets capture what the brain does, but that there is a lot going on within the cells which we don't know about that are also relevant as well as other forms of communications between cells.
But still, I accept that in tasks of limited scope such as chess playing or driving cars, they can come to outperform humans. This is nothing to do with weak AI or strong AI, as I think both are impossible myself. Except perhaps with biological machines (slime moulds) or computers that in some way can do something essentially non computable (recognize mathematical truth) - if so they have to go beyond ordinary programming and go beyond ordinary quantum computers also to some new thing.
So - I know that's controversial - not trying to persuade you of my views - but philosophically it's a view that some people take, including Roger Penrose. And is a consistent view to have. Saying that the aim of AI is to create human like intelligence - that's making a philosophical statement that the things programmers are trying to achieve with self driving cars and with chess playing computers are on a continuum with human intelligence and we just need more of the same. But not everyone sees it that way. I think AI is very valuable, but not in that way, not the direction it is following at present anyway.
Plus also - the engineers of Google's self driving cars - are surely not involved in a "goal of emulating human-like intelligence" except in a very limited way.
Rather, their main aim is to create machines that are able to take over from humans in a flexible human environment without causing problems - and to do that by emulating human intelligence to whatever extent is necessary and useful to do that work.
Also in another sense, emulating human intelligence is too limited a goal. In the case of Deep Blue the aim was to be better at chess than any human - not to just emulate humans. Ideally also the Google self driving cars will be better at driving than humans. The aim is to create machines that in their own limited frame of reference are better than humans - and using designs inspired by capabilities of humans and drawn from things that humans can do. But not at all to emulate humans including all their limitations and faults and mistakes and accidents. I think myself very few AI researchers have that as a goal. So not sure how the lede should be written, but I am not at all happy with "goal of emulating human intelligence" - that is wrong in so many ways - except for some science fiction stories. I suggest also that we say "In Computer science" to start the second sentence, whatever it is, to distinguish it from "In fiction" where the goal often is to emulate human intelligence to a high degree as with Asimov's positronic robots. Robert Walker (talk) 00:28, 16 October 2014 (UTC)
One thing I did notice is that about half the page content is references and additional links of one kind or another - the article itself barely makes half way through the page. This seems absurd to me and I would suggest a severe rationalisation down to a few key secondary or tertiary works, with other sources (especially primary research) cited only where it is essential. — Cheers, Steelpillow (Talk) 20:32, 19 November 2014 (UTC)
I have started to tidy up the remaining errors reported by User:Ucucha/HarvErrors.js. These are for citations which specify a |ref= parameter with no corresponding {{harvnb}} (or similar) reference.
I will continue to tweak for source consistency as I go, with the parameter order for citations roughly: last etc, year/date, title, publisher etc, isbn etc, url, ref. Having last, first, date in that order helps when matching citation to callout and having last first (!) helps when checking the citation sorting. --Mirokado (talk) 16:42, 6 December 2014 (UTC)
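As a sketch of the convention described above (the bibliographic details here are only illustrative, borrowed from the Russell & Norvig citation quoted earlier in this discussion, not a prescription for the article's actual source list), a full citation supplies the |ref= anchor in roughly the parameter order mentioned, and a {{harvnb}} callout elsewhere on the page must match its last names and year:

```wikitext
<!-- Full citation: last, first, year, title, publisher, isbn, ref -->
{{cite book |last=Russell |first=Stuart J. |last2=Norvig |first2=Peter |year=2003 |title=Artificial Intelligence: A Modern Approach |publisher=Prentice Hall |ref=harv}}

<!-- Callout in the article text; HarvErrors.js flags a mismatch in names or year -->
{{harvnb|Russell|Norvig|2003|p=947}}
```

With |ref=harv the citation template generates a CITEREF anchor from the last names and year, which is exactly what the {{harvnb}} link targets; a citation whose anchor has no matching callout (or vice versa) is what the script reports.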