Skeptic Friends Network

 Is the Technological Singularity coming?

Ricky
SFN Die Hard

USA
4907 Posts

Posted - 09/15/2008 :  22:02:35
Artificial Intelligence is simply the application of artificial or non-naturally occurring systems that use the knowledge-level to achieve goals.


This does not fit with most of the things you listed as "AI" above. Of course, "knowledge-level" is a rather meaningless term until rigorously defined.

And I haven't been saying there haven't been significant steps in AI, nor am I saying that it isn't interesting, nor am I saying that the field isn't promising. But if you're going to point at something like ALICE and expect me to be impressed, don't hold your breath.

So far.....that will change, and I think it will change sooner than most realize. You seem to be putting brains in some sort of "special" category. We will develop AI systems that will blow our puny brains out of the water....in every way possible.

Our brains are extremely intricate...but certainly not impossible to duplicate and then surpass using non-biological technologies.


My issue is not with what will happen, but when. And you can repeat that you think it will happen soon over and over again, but I'd like to know why you think that.


You think Kurzweil has underestimated what it takes to match a human brain....why?


Because he is making a prediction about something that we do not yet understand. The field of AI was looking extremely promising back in the 60's and 70's, and time and time again it failed to deliver. I have not seen any argument more convincing than "we have better processing power!" to make me think all that much has changed.

I think he has it pegged and his chess prediction is one (of many) indicators that he has a grasp on the computing power of the brain.


Chess programs (at least to my knowledge) almost always use some derivative of a minimax algorithm. The result is that the strength of the program depends upon how many levels of the game tree it can search. That is to say, how good a program is depends almost solely (though not quite entirely) upon the speed of computation. Such a prediction is nothing more than a fancy wrapper around Moore's Law.
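
A concrete, purely illustrative sketch of that point (nothing from the thread or from any real engine): a self-contained minimax search for a toy take-away game. Real chess programs add an evaluation function, alpha-beta pruning and move ordering, but the structure is the same, and the only thing that improves play here is how deep the search is allowed to go.

# Minimal, self-contained minimax demo on a toy take-away game
# (players remove 1-3 stones; whoever takes the last stone wins).
# Illustrative only: strength comes from how deep the search can go
# in the available time, i.e. from raw computing speed.

def minimax(pile, maximizing, depth):
    if pile == 0:
        # The previous player took the last stone, so the side to move has lost.
        return -1 if maximizing else 1
    if depth == 0:
        return 0  # depth limit reached: no knowledge left, call it even
    scores = []
    for take in (1, 2, 3):
        if take <= pile:
            scores.append(minimax(pile - take, not maximizing, depth - 1))
    return max(scores) if maximizing else min(scores)

def best_move(pile, depth):
    # Pick the move with the best minimax score at the given search depth.
    return max((t for t in (1, 2, 3) if t <= pile),
               key=lambda t: minimax(pile - t, False, depth - 1))

if __name__ == "__main__":
    # A shallow search can't see far enough ahead to find the winning move (2);
    # a deeper search (more computation, same code) finds it.
    print(best_move(10, depth=2), best_move(10, depth=10))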

Why continue? Because we must. Because we have the call. Because it is nobler to fight for rationality without winning than to give up in the face of continued defeats. Because whatever true progress humanity makes is through the rationality of the occasional individual and because any one individual we may win for the cause may do more for humanity than a hundred thousand who hug their superstitions to their breast.
- Isaac Asimov
Edited by - Ricky on 09/15/2008 22:11:09

Dude
SFN Die Hard

USA
6891 Posts

Posted - 09/15/2008 :  22:26:26
You think Kurzweil has underestimated what it takes to match a human brain....why? I think he has it pegged and his chess prediction is one (of many) indicators that he has a grasp on the computing power of the brain.

The chess prediction means he knows what he is talking about when it comes to how much computing power is needed for a specific task.

While I think he is credible, I take his AI prediction with a grain of salt. We have a reasonable approximation of how much processing power the human brain has, and if Moore's Law holds for 10-15 more years we will be able to match that in hardware.

I'm not sure that pure processing power will automatically result in a Turing level AI though.

IF I had to bet money... I'd say the first intelligence to surpass human standards will be a human/tech hybrid (a cyborg, if you will), created by integrating the processing power we will have in 15+ years with a human mind. The biology has the base code for all the sensory input and processing already in place; if you start with just technology, you have to design that system from the ground up, which is a daunting task at this point.


Ignorance is preferable to error; and he is less remote from the truth who believes nothing, than he who believes what is wrong.
-- Thomas Jefferson

"god :: the last refuge of a man with no answers and no argument." - G. Carlin

Hope, n.
The handmaiden of desperation; the opiate of despair; the illegible signpost on the road to perdition. ~~ da filth

astropin
SFN Regular

USA
970 Posts

Posted - 09/15/2008 :  23:03:38
Originally posted by Ricky

Chess programs (at least to my knowledge) almost always use some derivative of a minimax algorithm. The result is that the strength of the program depends upon how many levels of the game tree it can search. That is to say, how good a program is depends almost solely (though not quite entirely) upon the speed of computation. Such a prediction is nothing more than a fancy wrapper around Moore's Law.


I agree with everything you are saying here. But it does not address the point I was trying to make. He (Kurzweil) estimated the computing/brain power a chess grand master utilizes to play the game at that level. He then accurately predicted (based solely on his theory of exponential increases in computing power) when a computer should be able to defeat the reigning champion. Deep Blue won its match roughly 1 year earlier than Kurzweil's prediction.....which he had made 8 years earlier. The current estimate of the total computing power of the brain is 100 million MIPS (Million Instructions Per Second)...and the equivalent of 100 million megabytes of storage. These are the rough estimates that Kurzweil is using in his prediction. Could this be off...sure....but so what. Even if it's way off his "law of accelerating returns" would catch up with it VERY quickly...pushing his predictions back only a few years or possibly a decade at most.
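
A back-of-the-envelope sketch of that catch-up arithmetic, with made-up numbers rather than Kurzweil's: if hardware capability doubles every couple of years, an estimate that is off by a factor of k only delays the crossover by roughly log2(k) doubling periods.

import math

# Rough sketch (my numbers, not Kurzweil's): how long does an error of
# factor k in the brain-power estimate delay the hardware crossover,
# if capability doubles every `doubling_years` years?
def delay_years(error_factor, doubling_years=2.0):
    return math.log2(error_factor) * doubling_years

for k in (10, 100, 1000):
    print(f"estimate off by {k:>4}x -> crossover slips ~{delay_years(k):.0f} years")
# off by 10x -> ~7 years, 100x -> ~13 years, 1000x -> ~20 years
# (under an accelerating-returns assumption the slip would be smaller still)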

I would rather face a cold reality than delude myself with comforting fantasies.

You are free to believe what you want to believe and I am free to ridicule you for it.

Atheism:
The result of an unbiased and rational search for the truth.

Infinitus est numerus stultorum

Dave W.
Info Junkie

USA
26021 Posts

Posted - 09/15/2008 :  23:28:04
Originally posted by Ricky

No, I think the point you've just made is that "understand" does not depend on whether or not one answers a question, but rather how one goes about getting such an answer.

As soon as you can tell us the method through which a human brain arrives at an answer to a problem (any problem), then we'll all know how it differs from Searle's "Chinese Room."

Really, how "understanding" works, biologically, is so far unknown. One neuron doesn't understand anything, but lots of them working together do. Where is the line drawn between simple stimulus/response activity (a single neuron) and understanding?

- Dave W. (Private Msg, EMail)
Evidently, I rock!
Why not question something for a change?
Visit Dave's Psoriasis Info, too.

Zeked
Skeptic Friend

USA
90 Posts

Posted - 09/16/2008 :  01:04:39
Ricky

What are often seen as boundaries and absolute exclusions often have more to do with personal definitions than with common language use. While there can be argument over whether an adaptive heuristic database query system is true AI, the resulting exclusion of so many texts on the subject should be reason enough to allow that use of the term.

A more practical definition that has been used for AI is: attempting to build artificial systems that perform well on tasks that humans are currently better at. Tasks like real-number division are not AI because computers easily do that task better than humans. Visual perception is AI since it has proved very difficult to get computers to perform even basic perceptual tasks. So obviously this definition changes over time, but it does capture the essential nature of AI questions.

Mainstream thinking in psychology regards human intelligence not as a single ability or cognitive process but rather as an array of separate components. Research in AI has focused mainly on the components of intelligence: learning, reasoning, problem-solving, perception, and language understanding.

But as Dave notes, there is no universally agreed answer to the difficult question of what "understanding" is. According to one theory, whether or not one understands depends not only upon one's behaviour but also upon one's history.

AIML (ALICE) is a novelty; it has no true cognitive power as of yet. It requires extensive preconditioning in the form of hand-written database entries and has little predictive capability. It is built on XML, so such functions could be added over time, but it requires far too much user interaction to train efficiently. The same can be said of humans, but the AIML approach is currently filled with limitations and pitfalls.
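
A minimal sketch of the pattern-and-template idea behind ALICE-style bots (the patterns here are hypothetical, not real AIML or ALICE content) shows why every response has to be preconditioned by hand:

# Sketch of the pattern -> canned-template idea behind ALICE-style chatbots
# (illustrative only; real AIML adds wildcards and recursion, but still has
# no model of meaning -- every reply is written in advance by a human).
CATEGORIES = {
    "HELLO": "Hi there!",
    "WHAT IS YOUR NAME": "My name is DemoBot.",
    "DO YOU UNDERSTAND CALCULUS": "Of course I do.",  # it does not
}

def respond(user_input):
    key = user_input.strip().upper().rstrip("?!.")
    # Anything without a hand-written pattern falls straight through.
    return CATEGORIES.get(key, "I don't have a pattern for that.")

print(respond("Hello"))                            # Hi there!
print(respond("What is the derivative of x^2?"))   # no pattern, no answer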

Complex behaviour can still be considered unintelligent - http://www.imagination-engines.com/mind2.htm. Applied to a six-legged robot with visual feedback, the system lets the robot learn to interact with its environment. It learns to walk, run and climb without the large database of preconditioning required by typical closed-loop motor control systems. It does this by trial and error, simulating our own neural networks and displaying cognitive abilities: it learns from its mistakes and tries to predict outcomes, and the accuracy of its predictions and functions improves without user interaction.

Yes, you can give a self-training artificial neural network a problem and get an answer it was not explicitly programmed to produce. You can also have the analysis of the results done by other neural networks to test for validity. Simulating our own cognitive abilities requires massive networks, and this too is already being done. Holographic neural nets are another approach.
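
As a toy, self-contained illustration of what "self-training" means (my own example, not from the site linked above): a two-layer network learns XOR from examples and an error signal alone; nobody writes the XOR rule into the code.

import numpy as np

# Tiny demo of "learning from mistakes": a 2-layer network trained by
# gradient descent learns XOR from examples.  Only the error signal
# shapes the weights; the XOR rule itself is never programmed in.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)          # hidden layer
    out = sigmoid(h @ W2 + b2)        # network's current guess
    err = out - y                     # its mistakes
    # Backpropagate the error and nudge every weight to reduce it.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2).ravel())
# typically something like [0.01 0.99 0.99 0.01] -- XOR, learned rather than programmed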

We can continue to argue about what intelligence is until the day comes and goes when we have been surpassed in cognitive ability by artificial constructs, which will then argue that we are the ones who are not intelligent.

Siberia
SFN Addict

Brazil
2322 Posts

Posted - 09/16/2008 :  05:58:19
Originally posted by Dave W.

Originally posted by Ricky

If you ask that black box a question that you didn't specifically program it for, it won't be able to answer you.

I have no idea what's inside the black box. If it happens to be a first-semester calculus student, and you feed it a much more advanced problem, then of course it won't answer correctly.

But what if inside is an English major with millions of pages of instructions on symbol manipulation (never mentioning the word "calculus") written by a PhD calculus professor? Even if the correct answers come out of the box, the English major still doesn't understand calculus.

The point, of course, is that what "understanding" means isn't really clear.

Point taken.

But then, consider both people at the same level, locked in the same black box, with the same amount of information available to them. I'd hope that the first-semester calculus student would be able to give an answer that is incorrect but in line with his available information, whereas the English major wouldn't give any answer at all.

But, as you point out in later posts, we don't even know what understanding is. I can't argue machines can't understand if I don't know what understanding is. You're correct in pointing that out.

"Why are you afraid of something you're not even sure exists?"
- The Kovenant, Via Negativa

"People who don't like their beliefs being laughed at shouldn't have such funny beliefs."
-- unknown

Ricky
SFN Die Hard

USA
4907 Posts

Posted - 09/16/2008 :  06:14:47
Originally posted by Zeked
A more practical definition that has been used for AI is: attempting to build artificial systems that perform well on tasks that humans are currently better at. Tasks like real-number division are not AI because computers easily do that task better than humans. Visual perception is AI since it has proved very difficult to get computers to perform even basic perceptual tasks. So obviously this definition changes over time, but it does capture the essential nature of AI questions.


This is not a bad definition for AI, but I fail to see how a GUI, Google, or CAD fits into it. This is the second time I'm asking you directly, the third if you count my last post where I merely pointed this out. But if you continue having a one-way conversation, consider this my last reply.

However, the article that we are going off of talks about AI in more of a Turing-test sort of way. It talks about intelligence (as far as I can tell, in the human sense) rather than tasks, and that is what is currently at hand.

Neural networks are great. They have been used to train a computer to drive an actual van. I would conclude they are by far the most promising area of research. But there are also problems. Neural networks rely on their feedback loop, and constructing a feedback loop for a specific task is far simpler than constructing one for the level of intelligence suggested by Kurzweil. Again, there is this gap where we don't know how to do it, and we don't know if we can do it, and because of this, to suggest that it is just a matter of time is a bit absurd.
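
A sketch of that contrast, using a made-up toy controller rather than any real system: a task-specific feedback loop is easy precisely because the error term can be written down.

# Toy feedback loop (illustrative only): the loop works because the error
# is something we can measure and write as one line of code.
def control(setpoint, value, steps=20, gain=0.4):
    for _ in range(steps):
        error = setpoint - value      # the feedback signal, trivially defined
        value += gain * error         # correct in proportion to the error
    return value

print(round(control(setpoint=70.0, value=50.0), 2))   # converges to ~70.0
# For "human-level intelligence" nobody knows how to write the `error = ...`
# line, which is the gap being pointed at here.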

Why continue? Because we must. Because we have the call. Because it is nobler to fight for rationality without winning than to give up in the face of continued defeats. Because whatever true progress humanity makes is through the rationality of the occasional individual and because any one individual we may win for the cause may do more for humanity than a hundred thousand who hug their superstitions to their breast.
- Isaac Asimov
Edited by - Ricky on 09/16/2008 06:17:44

Ricky
SFN Die Hard

USA
4907 Posts

Posted - 09/16/2008 :  06:16:46
Originally posted by Dave W.
Really, how "understanding" works, biologically, is so far unknown. One neuron doesn't understand anything, but lots of them working together do. Where is the line drawn between simple stimulus/response activity (a single neuron) and understanding?


Agreed, but we can state the black-and-white cases. An English major using pages of notes doing symbol manipulation and a computer doing the same are both examples of not understanding.

Why continue? Because we must. Because we have the call. Because it is nobler to fight for rationality without winning than to give up in the face of continued defeats. Because whatever true progress humanity makes is through the rationality of the occasional individual and because any one individual we may win for the cause may do more for humanity than a hundred thousand who hug their superstitions to their breast.
- Isaac Asimov

Ricky
SFN Die Hard

USA
4907 Posts

Posted - 09/16/2008 :  06:21:28
Originally posted by astropin
The current estimate of the total computing power of the brain is 100 million MIPS (Million Instructions Per Second)...and the equivalent of 100 million megabytes of storage. These are the rough estimates that Kurzweil is using in his prediction. Could this be off...sure....but so what. Even if it's way off his "law of accelerating returns" would catch up with it VERY quickly...pushing his predictions back only a few years or possibly a decade at most.


And what I've been trying to say over and over again is that AI research has not been held back solely by a lack of sufficient hardware. Indeed, that has been part of it, but there are other problems as well. If we were given a machine tomorrow that could compute 20 orders of magnitude faster, we would still not have AI.

Why continue? Because we must. Because we have the call. Because it is nobler to fight for rationality without winning than to give up in the face of continued defeats. Because whatever true progress humanity makes is through the rationality of the occasional individual and because any one individual we may win for the cause may do more for humanity than a hundred thousand who hug their superstitions to their breast.
- Isaac Asimov
Edited by - Ricky on 09/16/2008 06:22:14

Dave W.
Info Junkie

USA
26021 Posts

Posted - 09/16/2008 :  07:35:11
Originally posted by Ricky

Agreed, but we can state the black-and-white cases. An English major using pages of notes doing symbol manipulation and a computer doing the same are both examples of not understanding.

How can you state that without knowing how the brain itself works?

The point to the English-major example is that from outside the black box, we cannot tell the difference between the English major and his instruction sheets on the one hand, or the black box containing an actual calculus professor on the other. The outward behavior is identical, so how does one distinguish between "understanding" and "not understanding" without cracking open the box?

Again, a single neuron is an example of not understanding, as well. "Understanding" is clearly an emergent property of a big group of neurons. Why can't it also emerge from the English-major-plus-instructions system, or the boatload-of-transistors-programmed-to-act-like-neurons system?

- Dave W. (Private Msg, EMail)
Evidently, I rock!
Why not question something for a change?
Visit Dave's Psoriasis Info, too.

astropin
SFN Regular

USA
970 Posts

Posted - 09/16/2008 :  08:23:42
Originally posted by Dave W.

Again, a single neuron is an example of not understanding, as well. "Understanding" is clearly an emergent property of a big group of neurons. Why can't it also emerge from the English-major-plus-instructions system, or the boatload-of-transistors-programmed-to-act-like-neurons system?


Exactly! We know that a neuron has no understanding....but a massive group of them working in parallel do.....or can.

Ricky, you are correct that just getting a computer that can perform 20 orders of magnitude faster than a human brain will not necessarily equal any sort of intelligence. But our understanding of how the human brain works is also going to increase....very rapidly. It won't be that long before we can make very high-resolution scans of the brain in real time. Combining this information with the progress we make in nanotech and computing power should/could result in an emergent AI within the time frames Kurzweil describes. We won't require complete knowledge of how the brain "understands" in order to duplicate it.....all we (should) need is a good map.

I would rather face a cold reality than delude myself with comforting fantasies.

You are free to believe what you want to believe and I am free to ridicule you for it.

Atheism:
The result of an unbiased and rational search for the truth.

Infinitus est numerus stultorum

Dude
SFN Die Hard

USA
6891 Posts

Posted - 09/16/2008 :  09:39:58
There is another point to consider. The human brain may not be the only thing that can achieve cognition. We may be able to get the same effect by a different method, as in Dave W.'s black box example.



Ignorance is preferable to error; and he is less remote from the truth who believes nothing, than he who believes what is wrong.
-- Thomas Jefferson

"god :: the last refuge of a man with no answers and no argument." - G. Carlin

Hope, n.
The handmaiden of desperation; the opiate of despair; the illegible signpost on the road to perdition. ~~ da filth

astropin
SFN Regular

USA
970 Posts

Posted - 09/16/2008 :  10:11:18
Originally posted by Dude

There is another point to consider. The human brain may not be the only thing that can achieve cognition. We may be able to get the same effect by a different method, as in Dave W.'s black box example.





Very true as well. I think there are probably MANY paths to intelligence. In fact I would imagine that "other avenues" would likely result in a superior design. After all, evolution "works" but is normally filled with design flaws and unnecessary handicaps. I think our best "first crack" at it will be attempting to mimic a human brain, though. After human-level AI is achieved, I'm sure it won't be long before we have advanced AI systems designing the next generation of AI.....and so on.

I would rather face a cold reality than delude myself with comforting fantasies.

You are free to believe what you want to believe and I am free to ridicule you for it.

Atheism:
The result of an unbiased and rational search for the truth.

Infinitus est numerus stultorum

Dr. Mabuse
Septic Fiend

Sweden
9687 Posts

Posted - 09/16/2008 :  11:05:48
Originally posted by Zeked
I get the impression I am in a minority, on protecting personal rights over the benefit of the majority.

The sacrifice of individual members for the benefit of the majority has been shown to be a survival trait in the evolution of many species.

The ethical dilemma arises when one tries to apply it to humans.


Dr. Mabuse - "When the going gets tough, the tough get Duct-tape..."
Dr. Mabuse whisper.mp3

"Equivocation is not just a job, for a creationist it's a way of life..." Dr. Mabuse

Support American Troops in Iraq:
Send them unarmed civilians for target practice..
Collateralmurder.

Dude
SFN Die Hard

USA
6891 Posts

Posted - 09/16/2008 :  11:27:20
Astropin said:
After human-level AI is achieved, I'm sure it won't be long before we have advanced AI systems designing the next generation of AI.....and so on.

That is the concept of the "singularity". If a real AI is created, it should shortly thereafter be able to redesign itself, or design new AIs that are "better". If the accelerating-returns hypothesis holds, then the resulting AIs will improve exponentially.

Of course, we humans don't even have a good definition for intelligence, so we'll be stuck measuring "better" by how much data an AI can process in a given time, or something like that.
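
A toy model of that recursive loop, with entirely made-up constants: if each generation designs a successor that is some fixed factor more capable, and more capable designers finish their redesigns proportionally sooner, the design intervals shrink geometrically.

# Toy model of recursive self-improvement (illustrative only, made-up numbers):
# each generation is `gain` times more capable than the last, and a more
# capable designer finishes the next redesign proportionally faster.
def simulate(generations=10, gain=2.0, first_design_years=5.0):
    capability, year = 1.0, 0.0
    for g in range(1, generations + 1):
        year += first_design_years / capability   # faster minds design faster
        capability *= gain
        print(f"gen {g:2d}: year {year:6.2f}, capability x{capability:,.0f}")

simulate()
# The design intervals shrink geometrically, so the total time converges toward
# a finite horizon -- the cartoon version of a "singularity".  Whether any of
# the assumed constants resemble reality is, of course, the whole debate.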


Ignorance is preferable to error; and he is less remote from the truth who believes nothing, than he who believes what is wrong.
-- Thomas Jefferson

"god :: the last refuge of a man with no answers and no argument." - G. Carlin

Hope, n.
The handmaiden of desperation; the opiate of despair; the illegible signpost on the road to perdition. ~~ da filth