Comments on: Overcoming Artificial Stupidity http://blog.stephenwolfram.com/2012/04/overcoming-artificial-stupidity/ Stephen Wolfram's Personal Blog Wed, 14 Feb 2018 19:30:19 +0000 hourly 1 http://wordpress.org/?v=3.4.2
By: John Hendrickson http://blog.stephenwolfram.com/2012/04/overcoming-artificial-stupidity/comment-page-1/#comment-1518751 John Hendrickson Tue, 22 Dec 2015 04:46:43 +0000 http://blog.internal.stephenwolfram.com/?p=2952#comment-1518751
well, google it

Seems to me that is where the problem is (the uptake of input into tokens Alpha knows of, and a lack of ability to sectionalize). For picking which data, there is the idea of the “best Google hit” (a DBM search).

Google doesn’t need to understand you to be smart; it only needs to let you find a needle in a haystack. A proper library search would hit not only by content but by all bibliography entries, i.e. “word1 AND word2 in TI” (in the title, or a formula).

It looks to me as if WR has already done that!
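The fielded search the comment describes (“word1 AND word2 in TI”) can be sketched in a few lines. The record layout, field codes, and sample data below are illustrative assumptions, not any real catalogue API:

```python
# Toy fielded bibliographic search: match ALL query words against one
# named field (e.g. "TI" = title) instead of the full text.
# Records and field codes are invented for illustration.
records = [
    {"TI": "Plum trees of North America", "AU": "Smith",
     "AB": "A survey of plum cultivars."},
    {"TI": "Crater morphology on the Moon", "AU": "Jones",
     "AB": "Includes the Plum crater."},
]

def fielded_search(records, words, field="TI"):
    """Return records whose given field contains ALL the query words."""
    matches = []
    for rec in records:
        text = rec.get(field, "").lower()
        if all(w.lower() in text for w in words):
            matches.append(rec)
    return matches

# "plum AND trees in TI": hits only the record with both words in its title.
hits = fielded_search(records, ["plum", "trees"], field="TI")
print([r["AU"] for r in hits])  # → ['Smith']
```

Searching the abstract field instead (`field="AB"`) would match both sample records, which is exactly the content-only behaviour the comment contrasts with.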

By: Aaron Swartz http://blog.stephenwolfram.com/2012/04/overcoming-artificial-stupidity/comment-page-1/#comment-4137 Aaron Swartz Wed, 16 May 2012 19:47:19 +0000 http://blog.internal.stephenwolfram.com/?p=2952#comment-4137
Is the 90% figure a result of training the machines or the humans?

When Wolfram|Alpha launched, it got a lot of attention and people entered questions on a very wide variety of topics. Many of them did not get answers, and those users presumably stopped using the site for those queries while returning frequently for the things they saw the system could answer. This process would produce growth in the percentage of answered queries even if no additional knowledge had been added.

By: Domain Rider http://blog.stephenwolfram.com/2012/04/overcoming-artificial-stupidity/comment-page-1/#comment-3806 Domain Rider Tue, 08 May 2012 13:54:36 +0000 http://blog.internal.stephenwolfram.com/?p=2952#comment-3806
Fremy Company has a good point that the issue is as much a question of knowing too much as too little. WA needs to be able to establish a context for selecting appropriate input interpretations.

There are a number of possible approaches, such as learning a user’s areas of interest (as Google and various social networking sites do), but it might be easier initially to provide a context selection field (perhaps a branching tree of topics of increasing specialization) for the user to point WA in the right direction. Given a contextual field of interest, WA could then rank the input keywords and phrases by their likelihood, and hence their probable meaning, in that context.

Suitable ranking data could be gleaned by an ongoing automated search system, much like the popular search engines: trawling the internet, noting the frequency of usage of words and phrases in various fields, and assessing the level of specialization of the sites involved (this would be the tricky part). There is probably already a considerable amount of this kind of data collected for other purposes, e.g. by search engines, so the technology is available to make this possible.
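The ranking idea above can be sketched as follows. The sense inventory and frequency counts are invented placeholders for the data such a crawler would collect:

```python
# Context-sensitive ranking sketch: given a user-selected field of
# interest, order the candidate interpretations of an ambiguous word by
# how often each sense occurs in that field. All numbers are made up.
sense_freq = {
    "plum": {
        "botany":    {"fruit": 950, "lunar crater": 1},
        "astronomy": {"fruit": 40,  "lunar crater": 320},
    },
}

def rank_senses(word, context):
    """Return the word's senses ordered by frequency in the given context."""
    freqs = sense_freq.get(word, {}).get(context, {})
    return sorted(freqs, key=freqs.get, reverse=True)

print(rank_senses("plum", "botany"))     # the fruit sense ranks first
print(rank_senses("plum", "astronomy"))  # the crater sense ranks first
```

The same input word gets a different default interpretation depending purely on the selected context, which is the behaviour the comment is after.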

By: Mike Beigel http://blog.stephenwolfram.com/2012/04/overcoming-artificial-stupidity/comment-page-1/#comment-3420 Mike Beigel Mon, 30 Apr 2012 17:14:34 +0000 http://blog.internal.stephenwolfram.com/?p=2952#comment-3420
Regarding SELF AWARENESS, I am sure you have heard at least SOMETHING about the potential for higher but attainable forms of SELF AWARENESS unrelated to external computation, data gathering or analysis.
In particular the METHOD OF GURDJIEFF (sadly distorted and misunderstood by almost everybody who knows the name) is a profound way (I hesitate to say “method” because of the limitations it implies) to be SELF AWARE at a “quantum level” higher than the state of consciousness in which most of us pass our daily lives.
For someone with a “big mind” and likely a correspondingly “big” power of attention and concentration, the learning (NOT EASY, but oh so SIMPLE) of this method, the experience of the first “results” of its application, and the ongoing and very demanding EFFORT to sustain this form of “more objective” self-consciousness can lead to something far beyond the highest aspirations of “mind” (even the most brilliant of minds) in its present condition. It is not a matter of intelligence per se, but one with intelligence, unquenchable wish, and relentless perseverance will find rewards not measurable in earthly or conventional religious ideation.
Hoping you will consider this, if you have not already done so.
With very best wishes, Mike Beigel

By: Christopher Haydock http://blog.stephenwolfram.com/2012/04/overcoming-artificial-stupidity/comment-page-1/#comment-3082 Christopher Haydock Wed, 18 Apr 2012 21:27:09 +0000 http://blog.internal.stephenwolfram.com/?p=2952#comment-3082
Yes, the success rate for Wolfram|Alpha query responses is remarkable as a somewhat arbitrary 90% numerical milestone and an even more remarkable milestone for the progress of A New Kind of Science (NKS).
A scant decade ago, the NKS notion of computational irreducibility of natural language seemed to explain the discouraging prospects for solving the artificial intelligence problem, while at the same time NKS offered a vague promise that mining the computational universe might find simple rules that could simulate or even process natural language. Today NKS has brought us to the point where we can dare to speak confidently about an 18-month half-life for banishing artificial stupidity. This rapid progress is illustrated by a brief timeline of Stephen’s public statements about the role of NKS in Wolfram|Alpha:

2004. Story of the Making of Wolfram|Alpha, http://www.stephenwolfram.com/publications/recent/50yearspc/
“If we believe the paradigm and the discoveries of NKS, then all this complicated knowledge should somehow have simple rules associated with it.”

2007. Quest for Ultimate Knowledge, Celebrating Gregory Chaitin’s 60th birthday, http://www.stephenwolfram.com/publications/recent/ultimateknowledge/
“If one chooses to restrict oneself to computationally reducible issues, then this provides a constraint that makes it much easier to find a precise interpretation of language. … I believe we are fairly close to being able to build technology that will [...] take issues in human discourse, and when they are computable, compute them. … And the consequence of it will be something [...] of quite fundamental importance. That we will finally be able routinely to access what can be computed about our everyday world.”

2009. First killer NKS app, http://blog.wolfram.com/2009/05/14/7-years-of-nksand-its-first-killer-app/
“Wolfram|Alpha is [...] still prosaic relative to the full power of the ideas in NKS. … It is the very ubiquity of computational irreducibility that forces there to be only small islands of computational reducibility—which can readily be identified even from quite vague linguistic input. … For now, for the first time, anyone will be able to walk up to a computer and immediately see just how diverse a range of possible computations it can do.”

2011. Computing and Philosophy, http://blog.stephenwolfram.com/2011/05/talking-about-computing-and-philosophy/
“[The NKS principle of computational equivalence implies] there is no bright line that identifies ‘intelligence’; it is all just computation. … That’s the philosophical underpinning that makes possible the idea that building a Wolfram Alpha isn’t completely crazy. Because if one had to build the whole artificial intelligence one knows that one is a long way from doing that. But in fact it turns out that there’s a more direct route that just uses the pure idea of computation.”

2012. Overcoming Artificial Stupidity, http://blog.stephenwolfram.com/2012/04/overcoming-artificial-stupidity/
“One might have thought that doing better at understanding natural language would be about covering a broader range of more grammar-like forms. … But our experience with Wolfram|Alpha is that it is at least as important to add to the knowledgebase of the system. … As the domains of Wolfram|Alpha knowledge expand, they gradually fill out all the areas that we humans consider common sense, pushing out absurd ‘artificially stupid’ interpretations.”

By: Ben Carbery http://blog.stephenwolfram.com/2012/04/overcoming-artificial-stupidity/comment-page-1/#comment-3067 Ben Carbery Wed, 18 Apr 2012 03:40:23 +0000 http://blog.internal.stephenwolfram.com/?p=2952#comment-3067
Here’s a behaviour I’ve been wondering about for a while that you seem to touch on in this article.

Say I ask Alpha “number of words in the English language”. Alpha is able to give a perfectly good answer: it’s clear that it understands the question and has the knowledge to answer it. But if I try ostensibly the same question with a different language, say German, Alpha appears flummoxed.

This is interesting because it seems to the user that Alpha hasn’t understood the question, whereas, because I asked the first question, I can infer that the real problem is it doesn’t know enough about German. I wonder if providing feedback to the user about the extent to which the input has been understood is something you would consider important to include in a knowledge engine, as it seems to be important in human-to-human communication. For example, could Alpha ask an intelligent question to clarify what the user meant in certain circumstances?

Also, I wonder whether this behaviour is just an artifact of Alpha trying a bit too hard to give some kind of answer, or whether its interpretation of the language semantics is inextricably linked to what knowledge it has?
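One minimal sketch of the feedback loop the comment asks about, assuming a toy recognizer that only scores whether each token is known at all (a real system would score whole parses, not single tokens):

```python
# Hypothetical "understanding feedback" sketch: score how confidently
# each part of the input was interpreted, and ask a clarifying question
# when confidence is low instead of forcing out some answer.
def interpret(tokens, known):
    """Stub scorer: 1.0 if a token is recognized, 0.0 otherwise."""
    return {t: (1.0 if t in known else 0.0) for t in tokens}

def respond(tokens, known, threshold=0.5):
    scores = interpret(tokens, known)
    unclear = [t for t, s in scores.items() if s < threshold]
    if unclear:
        return "Did you mean something else by: " + ", ".join(unclear) + "?"
    return "Understood all terms; computing an answer."

# Invented vocabulary: the system "knows" English word counts but not German.
known_terms = {"number", "of", "words", "in", "english"}
print(respond(["number", "of", "words", "in", "german"], known_terms))
# → Did you mean something else by: german?
```

The point of the design is that the user learns *which* part of the query failed, rather than inferring (possibly wrongly) that the whole question was misunderstood.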

By: FremyCompany http://blog.stephenwolfram.com/2012/04/overcoming-artificial-stupidity/comment-page-1/#comment-3056 FremyCompany Tue, 17 Apr 2012 19:36:21 +0000 http://blog.internal.stephenwolfram.com/?p=2952#comment-3056
More about the “knowability” problem: whether someone may know about something evolves rapidly. For example, imagine that only 10 people knew about the Plum crater today, but that in a month we discover some primitive form of extraterrestrial life in that crater. In one day the news would spread, and almost anybody could ask questions about it.

This is a known problem for search engines, and I wonder if W|A couldn’t find an interest in working together with a search engine (Bing+Powerset or Google+Freebase) to learn “how many people search for a certain keyword” and, if possible, the disambiguations used for those keywords. These things evolve at a rapid pace, and maybe W|A is simply too small to know about them. It may also help to answer questions on subjects W|A simply doesn’t know about.

Maybe creating profiles would help, too. Not everybody is interested in astronomy, but people doing astronomy are more likely to know about the Plum crater.
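A rough sketch of the query-log idea, with invented counts standing in for the data a search-engine partnership would supply:

```python
# Disambiguation by query-log popularity: pick the sense of a word that
# most users currently mean. The (word, sense) counts are invented;
# a real system would stream them from live search logs.
from collections import Counter

query_log = Counter({
    ("plum", "fruit"): 18000,
    ("plum", "lunar crater"): 12,
})

def most_likely_sense(word):
    """Choose the sense of `word` with the highest recent query count."""
    candidates = {s: n for (w, s), n in query_log.items() if w == word}
    return max(candidates, key=candidates.get) if candidates else None

print(most_likely_sense("plum"))  # → fruit
```

If the crater suddenly made the news, the counts, and hence the default interpretation, would flip without any change to the code, which is exactly the rapid evolution the comment describes.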

By: George Gabriel http://blog.stephenwolfram.com/2012/04/overcoming-artificial-stupidity/comment-page-1/#comment-3055 George Gabriel Tue, 17 Apr 2012 19:23:43 +0000 http://blog.internal.stephenwolfram.com/?p=2952#comment-3055
+++++
For many groups of people, artificial stupidity is a useful means to achieve control over people, so no care and no shame is felt in spreading nonsense.

Those who are good at using language, as in the Arab world, employ that technique profoundly, creating circular arguments that exhibit very complex logical structures and that can turn the group that is in the right into a group to be blamed and mocked.

The best way to monitor that requires educated analysts who know the language used and who analyse the Facebook posts and responses on a grand scale [Culturomics].

I know one professor, http://www.nics.tennessee.edu/leetaru, who is making use of indexing to analyse all media sources coming from, e.g., the Arab Spring countries, and is using supercomputers to extract conclusions.

That is good, but more humans are needed to help as well; one cannot make everything work automatically.

I can expand on that at length if you desire. I appreciate your work. Thanks.

By: FremyCompany http://blog.stephenwolfram.com/2012/04/overcoming-artificial-stupidity/comment-page-1/#comment-3053 FremyCompany Tue, 17 Apr 2012 19:06:39 +0000 http://blog.internal.stephenwolfram.com/?p=2952#comment-3053
I may have overlooked the problem, but it seems to me that the issue is as much a question of knowing too little as of knowing too much.

You point out throughout the article that W|A doesn’t know what a plum is, doesn’t know what a guinea pig is, etc. While this is true, to me there’s another problem: it knows too many things nobody else knows. I didn’t know about the Polar BEAR project. I didn’t know about the Plum crater.

More important than knowing everything about the things the person who wrote the query knows is, I think, knowing all the things he’ll never ask a question about, because he’s unlikely to know the subject.

If my brother asks me a question about CKY, I’m not going to explain everything I know about the algorithm, because I don’t expect him to know about it. I may just ask whether it’s possible he’s speaking about that, or say I don’t know anything about it if he replies no.
