Skeptic Friends Network

Forum: Astronomy
Topic: Cool on many levels

Dave W.
Info Junkie

USA
26022 Posts

Posted - 11/21/2006 :  12:20:09
quote:
Originally posted by ergo123

I didn't bother bringing them up as I already know their answers...
Nifty! You already know the answers to questions that neuroscientists, psychologists, and philosophers are still arguing about? Please, do tell!

- Dave W. (Private Msg, EMail)
Evidently, I rock!
Why not question something for a change?
Visit Dave's Psoriasis Info, too.

HalfMooner
Dingaling

Philippines
15831 Posts

Posted - 11/21/2006 :  13:28:15
Why all the talk about "anthropomorphizing"? Phooey! Indeed, human-like emotions are built into the very elements, and the laws of nature.

Who would deny that even an atom can be excited? That electrical circuits aren't simply our dumb slaves, but always demonstrate resistance? That many materials show a strong reluctance to be magnetized?

I could go on and on, except I ran out of examples.


“Biology is just physics that has begun to smell bad.” —HalfMooner
Here's a link to Moonscape News, and one to its Archive.
Edited by - HalfMooner on 11/21/2006 13:28:59

tomk80
SFN Regular

Netherlands
1278 Posts

Posted - 11/21/2006 :  14:32:55
quote:
Originally posted by ergo123

Well I'm glad my raising the issue of desires and AI has generated such a rich discussion.

Regarding davie's comments of "Maybe it's better to just ask "what is a desire?" but not in a definition sense, but rather how do desires originate and act in our heads?" and the comment of what makes a reflex different from a desire: These are all important questions for this topic. I didn't bother bringing them up as I already know their answers--but feel free to discuss them amongst yourselves.

Why don't you clue in the psychologists, philosophers, etc., who are researching these exact questions? There's a Nobel Prize waiting if ever there was one. Or how about starting to clue us in? I know we're but lowly idiots, not deserving the attention of such a great mind as yours, but maybe you could just pass a snippet of your all-encompassing wisdom to us?

quote:
Tom, your phone example is good in that it describes a simple desire in action. Of course, it is so simple it makes one wonder, like you did, what's the point. But there is a point--especially when related to more complex tasks.

Does it? Is my example really a 'desire in action', or is such a programmed response not really a desire?

quote:
Our desires help us focus. Our desires help to filter out data from our senses, data we have learned do not help us attain our desires. Of course, we often see this focus go astray. This could be because our desires do not really help us focus (i.e., I'm wrong on this point), or it could be a result of competing desires. And while the former is possible, I find the latter more probable based on my personal experience with both. I think that for a complex task to be handled via AI, some sense of a desire needs to be incorporated--some sense of urgency that directs the program to cut some corners, but also the right corners.


How would programming a desire help more with focus than making a decision tree would? In fact, how would such a desire be any different than a decision tree, except a little more complex? We are already able to give computers faced with multiple tasks the tools to decide which task to execute first (or which to allocate the most memory to) without any programmed desire. How would a desire help in that, without adding a potential for distortion that we would not want?
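A minimal Python sketch of that kind of desire-free task selection; the tasks, priorities, and deadlines below are invented purely for illustration:

code:
import heapq

# Toy scheduler: the next task is chosen purely by static priority
# and deadline; nothing desire-like appears anywhere in the loop.
tasks = [
    # (priority, deadline_s, name) - lower priority numbers run first
    (2, 30.0, "index files"),
    (1, 5.0, "answer phone"),
    (3, 60.0, "compress logs"),
]

heapq.heapify(tasks)  # orders by priority, then by deadline
while tasks:
    priority, deadline, name = heapq.heappop(tasks)
    print(f"running {name!r} (priority {priority}, deadline {deadline}s)")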

Tom

`Contrariwise,' continued Tweedledee, `if it was so, it might be; and if it were so, it would be; but as it isn't, it ain't. That's logic.'
-Through the Looking-Glass by Lewis Carroll-

Dr. Mabuse
Septic Fiend

Sweden
9688 Posts

Posted - 11/21/2006 :  15:03:46
quote:
Originally posted by Dave W.
Of course, we can also go the other way: if I remember correctly, someone now has a fairly accurate model of a single neuron that uses most of the resources of a desktop PC. If we connect a trillion or so such computer models together (via an appropriate network that mimics synapses), will "desires" (or even just an ego) pop out as an emergent property? Is it some sort of "hundredth monkey" phenomenon in which 999 billion computer neurons won't result in a "mind," but adding another billion (or so) will?

I remember we had a thread here on SFN where we calculated when the computing power on Earth would allow a full-scale, real-time emulation of the human brain.
Assuming Moore's Law extends to computing power, we'll reach that point around 2025-2027 (I don't recall the exact year).
I guess we'll have to wait and see...
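A back-of-the-envelope version of that calculation can be reconstructed. In the Python sketch below, every constant is an assumption chosen for illustration, not a figure from the original SFN thread:

code:
import math

neurons = 1e11        # assumed ~10^11 neurons in a human brain
synapses_per = 1e4    # assumed ~10^4 synapses per neuron
update_hz = 100       # assumed synaptic update rate, events/second
ops_per_update = 10   # assumed operations per synaptic event
brain_ops = neurons * synapses_per * update_hz * ops_per_update  # ~1e18 ops/s

top_2006 = 3e14       # assumed ~300 TFLOPS for the fastest 2006 machine
doubling_years = 1.5  # assumed Moore's-law doubling period

doublings = math.log2(brain_ops / top_2006)
print(2006 + doublings * doubling_years)
# ~2023.6 with these constants; slightly different assumptions
# land in the 2025-2027 range recalled above.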

Dr. Mabuse - "When the going gets tough, the tough get Duct-tape..."
Dr. Mabuse whisper.mp3

"Equivocation is not just a job, for a creationist it's a way of life..." Dr. Mabuse

Support American Troops in Iraq:
Send them unarmed civilians for target practice..
Collateralmurder.

ergo123
BANNED

USA
810 Posts

Posted - 11/21/2006 :  16:25:39
Tom: You are hysterical... If you don't know the answer to a question it must be because the answer will win someone the Nobel Prize! That's the funniest thing I've heard all day.

I would happily clue you and davie in, but I would probably just be accused of trolling...

And no, your example isn't really a desire in action--it describes, at a simple level, a desire in action.

Desires work better than trees because decision trees are hierarchical and sequential processes. As such, they don't map well to how the brain actually works, and therefore, do not take advantage of how the brain is able to work.

Well, now you are moving the goal posts, as davie likes to say. Now you want an AI desire without unwanted distortion. Before I can help you there, you will have to define this "unwanted" "distortion."
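To make the claimed contrast concrete, here is a toy Python sketch; the actions, weights, and inputs are all invented, and whether the second function deserves the name "desire" is exactly what this thread is disputing:

code:
def tree_choice(hunger, fatigue):
    # Hierarchical and sequential: each test gates the next.
    if hunger > 0.5:
        return "snack" if fatigue > 0.5 else "cook"
    return "sleep" if fatigue > 0.5 else "work"

def weighted_choice(hunger, fatigue):
    # Parallel and graded: every option is scored at once, so the
    # options compete, loosely like competing drives.
    scores = {
        "cook": 2.0 * hunger - 1.0 * fatigue,
        "snack": 1.0 * hunger + 0.5 * fatigue,
        "sleep": 2.0 * fatigue,
        "work": 1.0 - hunger - fatigue,
    }
    return max(scores, key=scores.get)

print(tree_choice(0.8, 0.3), weighted_choice(0.8, 0.3))  # cook cook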

No witty quotes. I think for myself.

tomk80
SFN Regular

Netherlands
1278 Posts

Posted - 11/21/2006 :  17:43:24
quote:
Originally posted by ergo123

Tom: You are hysterical... If you don't know the answer to a question it must be because the answer will win someone the Nobel Prize! That's the funniest thing I've heard all day.

I'm not saying that because it is a question I cannot answer, Ergo. What's so hard to understand about that? It was clear enough in the post I wrote. I'll repeat the reason again: You'll get a Nobel Prize because no one else has come up with a conclusive answer yet. Philosophers, psychologists, they're all still debating the question of what desire actually is. AI programmers would love to know, since then they would know how to make it. So if you know, you apparently have knowledge that no one else in this world who has studied the question has. Since it is a question that has been actively debated in a number of circles, this is definitely something you should share with the world.

So Ergo, please share with us the knowledge that so many psychologists and philosophers have so actively sought but not found. And Ergo, please do not twist my words again. The reason you are called a troll is not because people do not agree with you, but because you continuously misrepresent other people. Another reason people call you a troll is actively illustrated by you in the next few lines:

quote:
I would happily clue you and davie in, but I would probably just be accused of trolling...

No, you wouldn't. You are accused of trolling because you make numerous statements you cannot back up. I have little doubt that I'll be waiting forever for your great solution on what desire is. But hey, you can always surprise me.

quote:
And no, your example isn't really a desire in action--it describes, at a simple level, a desire in action.

So that is your answer. A desire is nothing but a programmed response?

quote:
Desires work better than trees because decision trees are hierarchical and sequential processes. As such, they don't map well to how the brain actually works, and therefore, do not take advantage of how the brain is able to work.

And desires would work better? How would a desire work other than a decision tree?

quote:
Well, now you are moving the goal posts, as davie likes to say. Now you want an AI desire without unwanted distortion. Before I can help you there, you will have to define this "unwanted" "distortion."


You say desires work better than things without desires. Even ignoring the point that you have yet to explain why they would work any differently than a decision tree, if you have something more complex where a decision tree doesn't work, according to you, unwanted desires are a logical side effect. Also, I didn't bring up the problem of competing desires, you did.

Tom

`Contrariwise,' continued Tweedledee, `if it was so, it might be; and if it were so, it would be; but as it isn't, it ain't. That's logic.'
-Through the Looking-Glass by Lewis Carroll-

Dave W.
Info Junkie

USA
26022 Posts

Posted - 11/21/2006 :  19:16:29
quote:
Originally posted by ergo123

I would happily clue you and davie in, but I would probably just be accused of trolling...
It's the statement quoted above which is clearly trolling, with "the answers" as obvious bait. You don't actually have any real answers (at best, you've got someone else's speculation and/or hypothesis), you just want to see some people dance around pleading, "please, pleeeeeeze, Stevie, won't you grace us with your tremendous intellect and bestow upon us the ultimate resolution to an age-old conundrum?" You couldn't have gone any deeper into schoolyard bullying tactics than your statement (quoted above), ergo. You've shown your true colors once again.

The only variable remaining is how much longer we are going to tolerate your behaviour.

- Dave W. (Private Msg, EMail)
Evidently, I rock!
Why not question something for a change?
Visit Dave's Psoriasis Info, too.

marfknox
SFN Die Hard

USA
3739 Posts

Posted - 11/21/2006 :  20:11:25
Tom, Dave! Listen to Kil's advice and stop feeding the troll. This conversation is way more interesting without him.

Dave wrote:
quote:
While all the evidence we've got right now points to mind being an emergent property of a brain, we won't really nail it down until we can build a fake brain which acts like a real one.
But even then we won't know, right? I mean, that's why the whole question over how to build a sentient machine - or if it is even possible to build a sentient machine - has stumped so many smart guys; once we have AI sophisticated enough to mimic human emotions and thoughts convincingly, we won't really be able to know whether a sentient mind has truly been created because the only way to know that for sure is to be inside one's mind. The reason we assume humans are sentient is because each of us individuals is sentient, and we figure that since we're the same type of animal and behaving similarly enough to other humans, all humans must be sentient. There is a similar problem with evaluating the intelligence of animals. So many of their behaviors seem to signify a higher order of intelligence and the beginning of sentience, but could also be the result of more primitive functions, such as instinct or trial and error.

I read a great book on the subject of AI called The Age of Spiritual Machines by Ray Kurzweil. He takes things a bit far (a bit of a true-believer Transhumanist; his latest book is The Singularity Is Near: When Humans Transcend Biology), but he's also a scientist at MIT who studied AI and knows his shit. In Spiritual Machines, he writes:
quote:
This specter is not yet here. But with the emergence of computers that truly rival and exceed the human brain in complexity will come a corresponding ability of machines to understand and respond to abstractions and subtleties. Human beings appear to be complex in part because of our competing internal goals. Values and emotions represent goals that often conflict with each other, and are an unavoidable by-product of the levels of abstraction that we deal with as human beings. As computers achieve a comparable -- and greater -- level of complexity, and as they are increasingly derived at least in part from models of human intelligence, they, too, will necessarily utilize goals with implicit values and emotions, although not necessarily the same values and emotions that humans exhibit.

A variety of philosophical issues will emerge. Are computers thinking, or are they just calculating? Conversely, are human beings thinking, or are they just calculating? The human brain presumably follows the laws of physics, so it must be a machine, albeit a very complex one. Is there an inherent difference between human thinking and machine thinking? To pose the question another way, once computers are as complex as the human brain, and can match the human brain in subtlety and complexity of thought, are we to consider them conscious? This is a difficult question to pose, and some philosophers believe it is not a meaningful question; others believe it is the only meaningful question in philosophy.
Kurzweil then goes on to give the example of humans having their brains scanned and uploaded to artificial bodies, and how that may indeed become an issue for people who believe that the "person" who wakes up with the artificial brain and goes forth to act as if they are the same person is "just calculating" and not really conscious. This question has been tackled in science fiction, such as in episodes of Star Trek where the holograms become sentient. I've been re-watching Babylon 5 recently, and there was an episode where alien parents refused to allow a doctor to perform surgery on their sick son because in their religion they believed that his soul would escape. When the doctor did the surgery without their permission, they refused to believe that their child was still really their child and murdered him. Kurzweil is an optimist, saying that "They (AI) will appear to have their own free will. They will claim to have spiritual experiences. And people -- those still using carbon-based neurons or otherwise -- will believe them." I'm not so sure about our ability to develop such sophisticated AI (or what the purpose of doing so would be - see Dave's reference to Douglas Adams), nor am I so sure that all humans will accept AI as conscious beings. Kurzweil thinks this won't be an issue because humans and machines will combine. But to me, at that point we aren't human anymore, so it's not really something conceivable. It wouldn't be good or bad, but it would most certainly be the end of humanity as we know it.

"Too much certainty and clarity could lead to cruel intolerance" -Karen Armstrong

Check out my art store: http://www.marfknox.etsy.com


ergo123
BANNED

USA
810 Posts

Posted - 11/21/2006 :  21:31:26
quote:
Originally posted by tomk80

quote:
Originally posted by ergo123

Tom: You are hysterical... If you don't know the answer to a question it must be because the answer will win someone the Nobel Prize! That's the funniest thing I've heard all day.

I'm not saying that because it is a question I cannot answer, Ergo. What's so hard to understand about that? It was clear enough in the post I wrote. I'll repeat the reason again: You'll get a Nobel Prize because no one else has come up with a conclusive answer yet. Philosophers, psychologists, they're all still debating the question of what desire actually is. AI programmers would love to know, since then they would know how to make it. So if you know, you apparently have knowledge that no one else in this world who has studied the question has. Since it is a question that has been actively debated in a number of circles, this is definitely something you should share with the world.


I've got to be honest with you, Tom: it's not as funny when you use that joke again so soon...

quote:
So Ergo, please share with us the knowledge that so many psychologists and philosophers have so actively sought but not found. And Ergo, please do not twist my words again. The reason you are called a troll is not because people do not agree with you, but because you continuously misrepresent other people.


Ahhh. So then davieW is a troll? Because he twists people's words and misrepresents their points.


quote:
quote:
And no, your example isn't really a desire in action--it describes, at a simple level, a desire in action.

So that is your answer. A desire is nothing but a programmed response?




quote:
quote:
Desires work better than trees because decision trees are hierarchical and sequential processes. As such, they don't map well to how the brain actually works, and therefore, do not take advantage of how the brain is able to work.

And desires would work better? How would a desire work other than a decision tree?


You wouldn't believe me if I told you.

quote:
quote:
Well, now you are moving the goal posts, as davie likes to say. Now you want an AI desire without unwanted distortion. Before I can help you there, you will have to define this "unwanted" "distortion."


You say desires work better than things without desires. Even ignoring the point that you have yet to explain why they would work any differently than a decision tree, if you have something more complex where a decision tree doesn't work, according to you, unwanted desires are a logical side effect. Also, I didn't bring up the problem of competing desires, you did.



No. I never said unwanted desires are a side effect of having something more complex where a decision tree doesn't work. Where did you come up with that? I brought up competing desires because they exist for us.

No witty quotes. I think for myself.

ergo123
BANNED

USA
810 Posts

Posted - 11/21/2006 :  21:38:50
quote:
Originally posted by Dave W.

quote:
Originally posted by ergo123

I would happily clue you and davie in, but I would probably just be accused of trolling...
It's the statement quoted above which is clearly trolling, with "the answers" as obvious bait. You don't actually have any real answers (at best, you've got someone else's speculation and/or hypothesis), you just want to see some people dance around pleading, "please, pleeeeeeze, Stevie, won't you grace us with your tremendous intellect and bestow upon us the ultimate resolution to an age-old conundrum?" You couldn't have gone any deeper into schoolyard bullying tactics than your statement (quoted above), ergo. You've shown your true colors once again.

The only variable remaining is how much longer we are going to tolerate your behaviour.



Actually davie, I am not looking for anyone to dance and say "please, pleeeeeeze, Stevie, won't you grace us with your tremendous intellect and bestow upon us the ultimate resolution to an age-old conundrum?" You remind me of those "believers" who are unable to tell the difference between what is true and what they want to see as true.

But of course you know that I don't force anyone to respond to my posts. I've actually asked you to stop responding to my posts. But I guess in your mind's eye that is trolling as well.

No witty quotes. I think for myself.

ergo123
BANNED

USA
810 Posts

Posted - 11/21/2006 :  21:52:47
quote:
Originally posted by marfknox
Dave wrote:
quote:
While all the evidence we've got right now points to mind being an emergent property of a brain, we won't really nail it down until we can build a fake brain which acts like a real one.
But even then we won't know, right? I mean, that's why the whole question over how to build a sentient machine - or if it is even possible to build a sentient machine - has stumped so many smart guys; once we have AI sophisticated enough to mimic human emotions and thoughts convincingly, we won't really be able to know whether a sentient mind has truly been created because the only way to know that for sure is to be inside one's mind. The reason we assume humans are sentient is because each of us individuals is sentient, and we figure that since we're the same type of animal and behaving similarly enough to other humans, all humans must be sentient. There is a similar problem with evaluating the intelligence of animals. So many of their behaviors seem to signify a higher order of intelligence and the beginning of sentience, but could also be the result of more primitive functions, such as instinct or trial and error.


Why can't instinct and trial and error be part of sentience? Humans have remnants of very primitive brains and we have hard-wired instincts that impact our learning.

If AI is to succeed, it had better model and replicate the more primitive functions of the brain, as they are closely linked to the values and emotional goals mentioned in the quoted text below. Also, if AI stops at modeling the human brain, it is not likely to succeed.

quote:
I read a great book on the subject of AI called The Age of Spiritual Machines by Ray Kurzweil. He takes things a bit far (a bit of a true-believer Transhumanist; his latest book is The Singularity Is Near: When Humans Transcend Biology), but he's also a scientist at MIT who studied AI and knows his shit. In Spiritual Machines, he writes:
quote:
This specter is not yet here. But with the emergence of computers that truly rival and exceed the human brain in complexity will come a corresponding ability of machines to understand and respond to abstractions and subtleties. Human beings appear to be complex in part because of our competing internal goals. Values and emotions represent goals that often conflict with each other, and are an unavoidable by-product of the levels of abstraction that we deal with as human beings. As computers achieve a comparable -- and greater -- level of complexity, and as they are increasingly derived at least in part from models of human intelligence, they, too, will necessarily utilize goals with implicit values and emotions, although not necessarily the same values and emotions that humans exhibit.

A variety of philosophical issues will emerge. Are computers thinking, or are they just calculating? Conversely, are human beings thinking, or are they just calculating? The human brain presumably follows the laws of physics, so it must be a machine, albeit a very complex one. Is there an inherent difference between human thinking and machine thinking? To pose the question another way, once computers are as complex as the human brain, and can match the human brain in subtlety and complexity of thought, are we to consider them conscious? This is a difficult question to pose, and some philosophers believe it is not a meaningful question; others believe it is the only meaningful question in philosophy.
Kurzweil then goes on to give the example of humans having their brains scanned and uploaded to artificial bodies, and how that may indeed become an issue for people who believe that the "person" who wakes up with the artificial brain and goes forth to act as if they are the same person is "just calculating" and not really conscious. This question has been tackled in science fiction, such as in episodes of Star Trek where the holograms become sentient. I've been re-watching Babylon 5 recently, and there was an episode where alien parents refused to allow a doctor to perform surgery on their sick son because in their religion they believed that his soul would escape. When the doctor did the surgery without their permission, they refused to believe that their child was still really their child and murdered him. Kurzweil is an optimist, saying that "They (AI) will appear to have their own free will. They will claim to have spiritual experiences. And people -- those still using carbon-based neurons or otherwise -- will believe them." I'm not so sure about our ability to develop such sophisticated AI (or what the purpose of doing so would be - see Dave's reference to Douglas Adams), nor am I so sure that all humans will accept AI as conscious beings. Kurzweil thinks this won't be an issue because humans and machines will combine. But to me, at that point we aren't human anymore, so it's not really something conceivable. It wouldn't be good or bad, but it would most certainly be the end of humanity as we know it.


No witty quotes. I think for myself.

Dave W.
Info Junkie

USA
26022 Posts

Posted - 11/21/2006 :  22:19:23
quote:
Originally posted by marfknox

But even then we won't know, right? I mean, that's why the whole question over how to build a sentient machine - or if it is even possible to build a sentient machine - has stumped so many smart guys; once we have AI sophisticated enough to mimic human emotions and thoughts convincingly, we won't really be able to know whether a sentient mind has truly been created because the only way to know that for sure is to be inside one's mind. The reason we assume humans are sentient is because each of us individuals is sentient, and we figure that since we're the same type of animal and behaving similarly enough to other humans, all humans must be sentient.
Well, yeah. If you're going to head down into the depths of solipsism, then at the bottom of that well is the fact that there's no way for me to test the hypothesis that there exist any sentient beings other than me. After all, I know my senses aren't perfectly accurate, so it's possible that everything I sense is wrong. I find that giving such ideas the benefit of the doubt would lead to me twitching in a corner, doing nothing out of sheer uncertainty. Instead, I assume that the other people I meet are real, and I further assume that they're also sentient for the very reasons you state.

And that's all that the Turing Test tests. It pits computer against person, and asks the computer to fake humanity well enough to fool the person into thinking that the computer is actually another person. It's the same standard of assumption you wrote about. After all, because we can't know every detail of every thing, all science asks when categorizing things is whether, for all intents and purposes, a particular thing behaves as if it belongs to a certain category.
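A skeleton of that imitation-game protocol in Python; the judge, human, and machine objects and their methods are hypothetical stand-ins invented for illustration, not any real API:

code:
import random

def imitation_game(judge, human, machine, rounds=5):
    # The judge interrogates two unlabeled respondents and must then
    # say which one is the machine.
    a, b = random.sample([human, machine], 2)  # hide who is who
    transcript = []
    for _ in range(rounds):
        question = judge.ask(transcript)
        transcript.append((question, a.reply(question), b.reply(question)))
    guess = judge.guess(transcript)  # expected to return "A" or "B"
    truth = "A" if a is machine else "B"
    return guess != truth  # True means the machine fooled the judge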
quote:
There is a similar problem with evaluating the intelligence of animals. So many of their behaviors seem to signify a higher order of intelligence and the beginning of sentience, but could also be the result of more primitive functions, such as instinct or trial and error.
Sure, but we've got different tests for animals. And nothing, of course, is foolproof.
quote:
Kurzweil then goes on to give the example of humans having their brains scanned and uploaded to artificial bodies, and how that may indeed become an issue for people who believe that the "person" who wakes up with the artificial brain and goes forth to act as if they are the same person is "just calculating" and not really conscious. This question has been tackled in science fiction, such as in episodes of Star Trek where the holograms become sentient.
Or in a Next Gen episode where Data's "mother" turns out to be another android programmed to shut down and "forget" when anything happens which might reveal to her that she's an android. And the oldie-but-goodie, how do we know that we're not just sophisticated computer programs acting within a giant and highly detailed simulator?

On a side note, this reminds me of one of my personal favorites which has nothing to do with simulations: if a Star Trek-style teleporter actually disassembles your atoms and only beams information about them to the receiving end, which reconstructs you from an entirely different supply of atoms, then what's really happening is that you are being killed, and a copy created somewhere else. Is the copy you, or a bizarre sort of clone? What if an accident happened, and you weren't disassembled during teleportation, but still "rebuilt" on the receiving end? Which copy would be "you?" (That's yet another Next Gen episode, where it happens to Riker, and he doesn't like himself.)

But I think the same answer applies (to all but the last case, at least): if, for all intents and purposes, a sentient being exists, then there's little reason to not consider it a sentient being. I mean, we're not talking about considering amputees with prosthetics to be "less human" than fully-limbed people, so it matters little if the amputation happens to have been the brain, does it?
quote:
I've been re-watching Babylon 5 recently, and there was an episode where alien parents refused to allow a doctor to perform surgery on their sick son because in their religion they believed that his soul would escape. When the doctor did the surgery without their permission, they refused to believe that their child was still really their child and murdered him. Kurzweil is an optimist, saying that "They (AI) will appear to have their own free will. They will claim to have spiritual experiences. And people -- those still using carbon-based neurons or otherwise -- will believe them." I'm not so sure about our ability to develop such sophisticated AI (or what the purpose of doing so would be - see Dave's reference to Douglas Adams), nor am I so sure that all humans will accept AI as conscious beings. Kurzweil thinks this won't be an issue because humans and machines will combine. But to me, at that point we aren't human anymore, so it's not really something conceivable. It wouldn't be good or bad, but it would most certainly be the end of humanity as we know it.
Well, this brings up an interesting question, even if we don't assume that "humans and machines will combine": will the laws regarding murder need to be changed to accommodate machines which - for all intents and purposes other than what they're made of, and perhaps what physical acts they're capable of - act like people? I can take an axe or shotgun to my desktop computer at will, because it's my property and doesn't act anything like a human does. Was the disassembly of Hal's core in 2001 an act of murder in self-defense (since Dave never once gave thought that Hal would be re-activated nine years later - another strange legal question)?

- Dave W. (Private Msg, EMail)
Evidently, I rock!
Why not question something for a change?
Visit Dave's Psoriasis Info, too.

H. Humbert
SFN Die Hard

USA
4574 Posts

Posted - 11/21/2006 :  22:56:52
quote:
Originally posted by Dave W.
On a side note, this reminds me of one of my personal favorites which has nothing to do with simulations: if a Star Trek-style teleporter actually disassembles your atoms and only beams information about them to the receiving end, which reconstructs you from an entirely different supply of atoms, then what's really happening is that you are being killed, and a copy created somewhere else.
I remember hearing in grade school that every cell in the human body is replaced every 7 years. Using the same line of thinking, I wondered if in 7 years' time I would be aware that nothing about me was the same.


"A man is his own easiest dupe, for what he wishes to be true he generally believes to be true." --Demosthenes

"The first principle is that you must not fool yourself - and you are the easiest person to fool." --Richard P. Feynman

"Face facts with dignity." --found inside a fortune cookie

marfknox
SFN Die Hard

USA
3739 Posts

Posted - 11/21/2006 :  23:17:58
Dave wrote:
quote:
Well, yeah. If you're going to head down into the depths of solipsism, then at the bottom of that well is the fact that there's no way for me to test the hypothesis that there exist any sentient beings other than me. After all, I know my senses aren't perfectly accurate, so it's possible that everything I sense is wrong. I find that giving such ideas the benefit of the doubt would lead to me twitching in a corner, doing nothing out of sheer uncertainty.
The question of whether AI or artificial bodies with scanned and uploaded memories - both of which might act as if they are conscious - are actually conscious is not exactly the same as questioning whether the people around us are conscious. Questioning the latter means putting ourselves, our own individual mind and ego, at the center of things. It means entertaining such scenarios as The Matrix or The Truman Show, or something far more imaginative than we've ever seen in movies. The possibility that our incomplete understanding of how consciousness arises might cause us to make some seriously big mistakes without even realizing it is much more reasonable.

We can imagine a machine which is built to look and move exactly like a human being. We can also imagine that machine being programmed to act exactly like a human being while lacking the self-awareness that it mimics with its behavior.

If we can imagine that, why not imagine that a destructive brain scan uploaded into an artificial brain and body might transfer enough information to function as it is supposed to, without including actual consciousness? This would be analogous to the hologram characters from Star Trek, who are programmed to act like certain real people for the purpose of entertaining or informing the visitor, but who are not actually self-aware. So which is the fiction, the holograms who come alive or the ones who never do? And can we ever really know the answer? The nightmare scenario could be that we eventually upload all human “minds” into artificial brains and bodies where they seem to continue being human, but in reality are only behaving as if aware, while truly unaware of themselves. And the scariest part of that nightmare is that nobody would even realize it was happening.

Of course, I tend to agree philosophically with Kurzweil. I think machines will eventually be as sentient as humans. I don't know if we'll ever get to the point where we can actually scan our own minds and put them into another set of hardware, but I tend to think it is possible and even probable given enough time.

quote:
But I think the same answer applies (to all but the last case, at least): if, for all intents and purposes, a sentient being exists, then there's little reason to not consider it a sentient being. I mean, we're not talking about considering amputees with prosthetics to be "less human" than fully-limbed people, so it matters little if the amputation happens to have been the brain, does it?
It matters quite a lot at this stage in the game because right now relatively little is understood about the brain; certainly far less than is understood about arms and legs.

"Too much certainty and clarity could lead to cruel intolerance" -Karen Armstrong

Check out my art store: http://www.marfknox.etsy.com

Edited by - marfknox on 11/21/2006 23:23:35

marfknox
SFN Die Hard

USA
3739 Posts

Posted - 11/21/2006 :  23:20:59
Humbert wrote:
quote:
I remember hearing in grade school that every cell in the human body is replaced every 7 years. Using the same line of thinking, I wondered if in 7 years' time I would be aware that nothing about me was the same.
I heard that too and contemplated similar notions. I later learned that not every cell in the body is replaced every 7 years, and that some cells are never replaced - coincidentally, they are the ones relevant to this discussion. From here: http://en.wikipedia.org/wiki/Brain_cells
quote:
Brain cells remain in the beginning stage of interphase of cell reproduction for their life, and never divide. Instead, they develop by forming new synapses with other neurons.

"Too much certainty and clarity could lead to cruel intolerance" -Karen Armstrong

Check out my art store: http://www.marfknox.etsy.com
