Skeptic Friends Network

 Astronomy
 Cool on many levels

Antigone
New Member

44 Posts

Posted - 11/16/2006 :  19:44:59
One small step closer to The Terminator and I, Robot, one giant leap for Rover-kind

http://www.livescience.com/technology/061116_resillient_robot.html

I think this will be amazing tech to apply to missions to Mars (and other bodies in our solar system).

I wonder if it can even be adjusted for use in cars, planes, etc?!


Mortui non dolent

HalfMooner
Dingaling

Philippines
15831 Posts

Posted - 11/16/2006 :  21:32:15
That sure is neat, Antigone! To my mind, the engineers have given the robot what is in effect a sense of pain, as it acts to keep the robot from using a damaged limb without "limping." And the idea of an internal computer model that the robot can refer to is fascinating. Reminds me of "the theory of self."


“Biology is just physics that has begun to smell bad.” —HalfMooner
Here's a link to Moonscape News, and one to its Archive.

ergo123
BANNED

USA
810 Posts

Posted - 11/19/2006 :  15:56:28
quote:
Originally posted by Antigone

One small step closer to The Terminator and I, Robot, one giant leap for Rover-kind

http://www.livescience.com/technology/061116_resillient_robot.html

I think this will be amazing tech to apply to missions to Mars (and other bodies in our solar system).

I wonder if it can even be adjusted for use in cars, planes, etc?!





In some ways, redundant systems built into cars and planes have a similar effect. My car has a couple of fail-safes built in as well. One is a rev limiter that keeps electricity from reaching the spark plugs if I try to rev the engine beyond 6400 rpm. The other kills the engine completely if any of the major systems fail, so as not to damage the engine.

Some car manufacturers have models that will inflate a punctured tire while you drive! I think both Jaguar and BMW make cars with that feature. I would like the new Lexus feature that parallel-parks the car for you, since every car I seem to drive has a problem in this area...

No witty quotes. I think for myself.

tomk80
SFN Regular

Netherlands
1278 Posts

Posted - 11/19/2006 :  18:40:01
I'm just wondering when they'll start giving robots a reward system a la Douglas Adams, so they can perform their services for us "with the satisfaction of a job well done".

Tom

`Contrariwise,' continued Tweedledee, `if it was so, it might be; and if it were so, it would be; but as it isn't, it ain't. That's logic.'
-Through the Looking Glass by Lewis Carroll-

ergo123
BANNED

USA
810 Posts

Posted - 11/20/2006 :  15:40:16
quote:
Originally posted by tomk80

I'm just wondering when they'll start giving robots a reward system a la Douglas Adams, so they can perform their services for us "with the satisfaction of a job well done".



That's interesting, Tom. I was watching a TV show about particular problems in the field of AI a few years back. I wondered if some of the problems AI scientists have with their robots relate to the fact that the robots have no internal desires.

Human internal desires drive all of our behavior and likely contribute to the data-management schemes we employ to filter all the info that comes our way. At least some AI problems seem to arise from processing speed and the huge amounts of information that need to be processed.

No witty quotes. I think for myself.

Ricky
SFN Die Hard

USA
4907 Posts

Posted - 11/20/2006 :  20:48:55
quote:
I wondered if some of the problems AI scientists have with their robots relate to the fact that the robots have no internal desires


As complex as AI programs can get, and they certainly have, it's still just a machine running 0's and 1's through a processor, moving and manipulating pieces of data to produce output. There is no such thing as "desire" for a computer... yet.

And as far as self-correcting machines go, engineers are way behind. Computer scientists invented that years ago. It's called chkdsk.

Why continue? Because we must. Because we have the call. Because it is nobler to fight for rationality without winning than to give up in the face of continued defeats. Because whatever true progress humanity makes is through the rationality of the occasional individual and because any one individual we may win for the cause may do more for humanity than a hundred thousand who hug their superstitions to their breast.
- Isaac Asimov

H. Humbert
SFN Die Hard

USA
4574 Posts

Posted - 11/20/2006 :  21:06:41
quote:
Originally posted by Ricky
There is no such thing as "desire" for a computer... yet.

Depends on how you define "desire." If you think of desire as simply the motivation to fulfill an objective, then robots have desires.


"A man is his own easiest dupe, for what he wishes to be true he generally believes to be true." --Demosthenes

"The first principle is that you must not fool yourself - and you are the easiest person to fool." --Richard P. Feynman

"Face facts with dignity." --found inside a fortune cookie

Dave W.
Info Junkie

USA
26020 Posts

Posted - 11/20/2006 :  22:02:54
You're anthropomorphizing, H.

Replace the CPU in some robot with a person stuck in a box. Bits of paper with strange symbols on them come in through a slot in the box, and the guy looks up in a big book what symbol he's got to write on a piece of paper and stuff out another slot, based upon the current (and past) symbols. He's got no understanding whatsoever of what the symbols mean, his only desire is to "translate" them correctly.

So even though the "objective" of a certain robot might be "pick up the orange ball and place it in the basket," our guy-in-a-box CPU has no knowledge of that goal, or whether it has been achieved. His only motivation is completely independent of the behaviour of the complete system, of which he is utterly ignorant.

But even if that's what you meant by "simply the motivation to fulfill an objective," then you've anthropomorphized the CPU itself, since taking things a step lower, each individual transistor in a CPU can be modeled as a guy-in-a-box, with the symbols on the paper now representing electron flow. Once again, "fulfilling an objective" of correctly adding one and two to get three is unknown to any of the dozens of guys-in-boxes that make up a single adder within the CPU.
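The guys-in-boxes adder maps directly onto a few lines of Python. This is an illustrative sketch only (the table and function names are invented here): each "box" is a pure symbol lookup that knows nothing about arithmetic, yet the composite adds three bits.

```python
# Each "guy in a box" is a pure lookup table: symbols in, symbol out.
# No box knows anything about arithmetic.
AND_BOX = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
OR_BOX  = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
XOR_BOX = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def full_adder(a, b, carry_in):
    """Five lookup-only 'boxes' cooperate to add three bits.
    The 'objective' of addition exists only in our description
    of the whole, not in any individual box."""
    s1 = XOR_BOX[(a, b)]
    total = XOR_BOX[(s1, carry_in)]
    c1 = AND_BOX[(a, b)]
    c2 = AND_BOX[(s1, carry_in)]
    carry_out = OR_BOX[(c1, c2)]
    return total, carry_out

print(full_adder(1, 1, 1))  # (1, 1): one plus one plus one is binary 11
```

Run it over all eight input combinations and the system "adds" correctly, even though "fulfilling the objective" of addition is unknown to every individual table.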

We can, of course, continue this down to the level of quarks and photons, but suggestions that "desires" exist on such low levels seems more and more ridiculous the deeper one goes.

Of course, we can also go the other way: if I remember correctly, someone now has a fairly accurate model of a single neuron that uses most of the resources of a desktop PC. If we connect a trillion or so such computer models together (via an appropriate network that mimics synapses), will "desires" (or even just an ego) pop out as an emergent property? Is it some sort of "hundredth monkey" phenomenon in which 999 billion computer neurons won't result in a "mind," but adding another billion (or so) will?

These are questions which have vexed philosophers for centuries. I don't think the answer lies in the definition, otherwise, we could inbue subatomic particles with "desires." For example, "electrons want to be as far apart from each other as possible," when explaining charge repulsion. But this does nothing but strip "want" ("desire") of its meaning.

(An article I read not too long ago bemoaned the sorry state of science eduction when inanimate objects acting and reacting to forces are given "desires" by teachers as a method of trying to help people understand the concepts involved. "Water seeks its own level" is a familiar example of this, as the individual water molecules don't have a clue as to what "level" is, and don't actually "seek" anything. That saying is obviously an easy-to-remember shorthand for the actual hydrodynamics involved, but by implying that water acts with intention like a goal-seeking being, people who teach it are short-changing their students and probably confusing them.)

- Dave W. (Private Msg, EMail)
Evidently, I rock!
Why not question something for a change?
Visit Dave's Psoriasis Info, too.

H. Humbert
SFN Die Hard

USA
4574 Posts

Posted - 11/20/2006 :  22:19:30
I was hoping my comment might stimulate a good discussion.

I only have time now for a quick retort:

  • Some people would argue that it's incorrect to even think of humans as having desires, since our behaviors (and seeming consciousness), though more complicated, may be no more autonomous than water seeking its own level.

  • Computer viruses desire to piss me off and they are very good at it.

  • You look Number 5 in the eye cameras and tell me he's not alive.

  • Edgar too.



"A man is his own easiest dupe, for what he wishes to be true he generally believes to be true." --Demosthenes

"The first principle is that you must not fool yourself - and you are the easiest person to fool." --Richard P. Feynman

"Face facts with dignity." --found inside a fortune cookie

ergo123
BANNED

USA
810 Posts

Posted - 11/21/2006 :  04:01:55
Dave said: "You're anthropomorphizing, H."

Well I should hope so. After all, the topic here is AI!!

Dave, your long discourse on CPUs epitomizes the weakness of the purely rational when it comes to creative thinking.

No witty quotes. I think for myself.

tomk80
SFN Regular

Netherlands
1278 Posts

Posted - 11/21/2006 :  06:22:19
quote:
Originally posted by Dave W.

You're anthropomorphizing, H.

Replace the CPU in some robot with a person stuck in a box. Bits of paper with strange symbols on them come in through a slot in the box, and the guy looks up in a big book what symbol he's got to write on a piece of paper and stuff out another slot, based upon the current (and past) symbols. He's got no understanding whatsoever of what the symbols mean, his only desire is to "translate" them correctly.

So even though the "objective" of a certain robot might be "pick up the orange ball and place it in the basket," our guy-in-a-box CPU has no knowledge of that goal, or whether it has been achieved. His only motivation is completely independent of the behaviour of the complete system, of which he is utterly ignorant.

But even if that's what you meant by "simply the motivation to fulfill an objective," then you've anthropomorphized the CPU itself, since taking things a step lower, each individual transistor in a CPU can be modeled as a guy-in-a-box, with the symbols on the paper now representing electron flow. Once again, "fulfilling an objective" of correctly adding one and two to get three is unknown to any of the dozens of guys-in-boxes that make up a single adder within the CPU.

But then, this also holds for humans. Although we obviously have desires, those are directed by neurons that have no idea what is happening. They just get signals in and send signals out, just like our man in the box. Can we then not have desires?

quote:
We can, of course, continue this down to the level of quarks and photons, but suggestions that "desires" exist on such low levels seems more and more ridiculous the deeper one goes.

Of course, we can also go the other way: if I remember correctly, someone now has a fairly accurate model of a single neuron that uses most of the resources of a desktop PC. If we connect a trillion or so such computer models together (via an appropriate network that mimics synapses), will "desires" (or even just an ego) pop out as an emergent property? Is it some sort of "hundredth monkey" phenomenon in which 999 billion computer neurons won't result in a "mind," but adding another billion (or so) will?

I agree that H. is anthropomorphizing too much, but I do not think that your 'man in the box' example specifically illustrates this. After all, our neurons are all small 'men in boxes'.

So it seems obvious to me that 'desire' is a concept that comes into play with larger cooperations of 'men in boxes', and that computers are not automatically excluded from that. However, I have no good ideas at present on how to delineate the problem.

My first hunch would be to say that desires need some sort of consciousness of oneself to be desires. The problem I see with this is my cat. My cat desires to be stroked. But is this a conscious choice? I don't know. The decision process is probably different from me turning on the computer because I want to start working. But is it reducible to just a 'man in the box' decision tree? And if it is, would it not require us to indeed project the term 'desires' all the way down to molecules, atoms and quarks?


Tom

`Contrariwise,' continued Tweedledee, `if it was so, it might be; and if it were so, it would be; but as it isn't, it ain't. That's logic.'
-Through the Looking Glass by Lewis Carroll-

tomk80
SFN Regular

Netherlands
1278 Posts

Posted - 11/21/2006 :  06:27:46
quote:
Originally posted by ergo123

Dave said: "You're anthropomorphizing, H."

Well I should hope so. After all, the topic here is AI!!

Dave, your long discourse on cpus epitomizes the weakness of the purely rational when it comes to creative thinking.


I disagree. It illustrates the problem in calling something a 'desire' quite well, which is the exact problem here. If we talk about giving computers 'desires', what are we exactly talking about? If we think it would make a program work better (I don't, but that's a different question), it is important to explore what a 'desire' actually is.

I don't think adding 'desire' to a computer would make it function better. A program will do what it does to the utmost of its ability, because that is what it has been programmed to do. A computer will just run through its instructions, which it will do as optimally as possible.

Tom

`Contrariwise,' continued Tweedledee, `if it was so, it might be; and if it were so, it would be; but as it isn't, it ain't. That's logic.'
-Through the Looking Glass by Lewis Carroll-

Dave W.
Info Junkie

USA
26020 Posts

Posted - 11/21/2006 :  09:36:43
quote:
Originally posted by H. Humbert

I was hoping my comment might stimulate a good discussion.
I figured as much. If the end result of ergo's irrational mode of thought is nothing more than snottily dismissive one-liners, he can keep it. Either that, or he just completely missed my point.
quote:
I only have time now for a quick retort:

  • Some people would argue that it's incorrect to even think of humans as having desires, since our behaviors (and seeming consciousness), though more complicated, may be no more autonomous than water seeking its own level.

Indeed, and it's difficult to argue otherwise other than to say, "but that there's 'something else' is self-evident!" What is self-evident is the feeling of "I want" (a basic desire). We all know it, the question is: does anything else know it? I feel pretty comfortable saying things like, "my dog wants to play," or "my cat wants to be scritched on the head," but not so much "my goldfish wants to hide in the treasure chest," even less so "that cockroach wants to hide from the light," and certainly not "these bacteria want to kill me."

Is there a clear test for where anthropomorphizing is appropriate and where it isn't? Maybe it's better to just ask "what is a desire?" not in a definition sense, but rather in terms of how desires originate and act in our heads.

It seems obvious that when you turn on your kitchen light in the middle of the night, the cockroach doesn't go through a process in which he imagines the possible outcomes of this change in his environment, weighs one against another and determines that his optimal course of action is to run under the stove. The "goal" he's fulfilling isn't one that he picked from a pool of hypotheticals, it's simply hard-coded into his scant neurons. An instinct.

Similarly, when I stub my toe, there's no "desire" on my part to shout and hop around, but I do it anyway. And I imagine that such behaviour serves an evolutionary purpose, in that were the pain caused by some immediate threat (such as a snake), alerting the rest of the pack could save the lives of the offspring.

So in my estimation, not all behaviours qualify as "desires," since they don't serve to fulfill some personal goal, but only exist to aid survival and reproduction. For example, eating in its most basic form is simply instinctual. Choosing to make and serve to your family every recipe in The Joy of Cooking in a year, on the other hand, is a qualitatively different sort of behaviour.
quote:
  • Computer viruses desire to piss me off and they are very good at it.

You're directing the intentional behaviour at the wrong target. The creators of computer viruses desire to piss you off.
quote:
  • You look Number 5 in the eye cameras and tell me he's not alive.

  • Edgar too.

Yeah, Andrew, also. This whole thing is, of course, one of the questions raised in that movie.

Edited to add: what about Joshua?
quote:
Originally posted by tomk80

My first hunch would be to say that desires need some sort of consciousness of oneself to be desires. The problem I see with this is my cat. My cat desires to be stroked. But is this a conscious choice? I don't know. The decision process is probably different from me turning on the computer because I want to start working. But is it reducible to just a 'man in the box' decision tree? And if it is, would it not require us to indeed project the term 'desires' all the way down to molecules, atoms and quarks?
That is, indeed, part of what I was getting at: what makes a "desire" different from a reflex or an instinct? The latter two are easy to encode into CPUs as simple stimulus-response programs. But how would one go about coding "desire?" How do you give a chunk of electrified silicon a sense of satisfaction for a job well done? I doubt it's as simple as "IF GOAL FULFILLED, INCREASE SPRING TENSION DURING WALKING SUBROUTINE."
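The reflex/instinct half of the question really is that easy to encode. Here is a minimal sketch (the stimulus and response names are invented for illustration): a fixed stimulus-response table, with no hypotheticals weighed and no goal chosen from a pool, which is exactly why it captures a cockroach's instinct and not a "desire."

```python
# A reflex is trivial to encode: a fixed stimulus-response mapping.
# Stimulus and response names here are invented for illustration.
REFLEXES = {
    "light_on": "run_under_stove",
    "touch_antenna": "freeze",
}

def react(stimulus):
    """Instinctive behaviour: look up the response, or do nothing.
    Nothing here imagines outcomes or picks among alternatives."""
    return REFLEXES.get(stimulus, "do_nothing")

print(react("light_on"))  # run_under_stove
```

What this sketch has no slot for is the interesting part of the question: any notion of satisfaction when the "goal" is achieved.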

- Dave W. (Private Msg, EMail)
Evidently, I rock!
Why not question something for a change?
Visit Dave's Psoriasis Info, too.

tomk80
SFN Regular

Netherlands
1278 Posts

Posted - 11/21/2006 :  10:21:11
quote:
Originally posted by Dave W.

quote:
My first hunch would be to say that desires need some sort of consciousness of oneself to be desires. The problem I see with this is my cat. My cat desires to be stroked. But is this a conscious choice? I don't know. The decision process is probably different from me turning on the computer because I want to start working. But is it reducible to just a 'man in the box' decision tree? And if it is, would it not require us to indeed project the term 'desires' all the way down to molecules, atoms and quarks?
That is, indeed, part of what I was getting at: what makes a "desire" different from a reflex or an instinct? The latter two are easy to encode into CPUs as simple stimulus-response programs. But how would one go about coding "desire?" How do you give a chunk of electrified silicon a sense of satisfaction for a job well done? I doubt it's as simple as "IF GOAL FULFILLED, INCREASE SPRING TENSION DURING WALKING SUBROUTINE."


I don't know; why not? Why wouldn't it be as easy as, for example, defining a 'happiness scale' and making it dependent on some function? For example, we could define a happiness scale for my mobile phone. If my battery goes below a certain point, the happiness scale drops below 3, my telephone 'feels' unhappy, determines the cause of its unhappiness and shouts for me to recharge it. I could imagine that in humans and animals it might be just as easy as that, only because there are so many functions, the 'calculation' of 'happy' or 'unhappy' can become quite involved and thus sometimes hard for us to determine.

The question then becomes: why do it? I mean, why would you program such a happiness scale if it is in fact just superfluous code? What would a general 'feeling' of happiness add to a program that would be more useful to us than a direct determination of problems and response to them?
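The phone example sketches directly into code. This is purely illustrative (the function names, weights and threshold are all invented): "happiness" is just a numeric function of internal state, and "feeling unhappy" is that number crossing a threshold.

```python
# Hypothetical "happiness scale" for a phone: a weighted function of
# whatever internal states we decide the phone "cares" about.
# All names, weights and thresholds are invented for illustration.
def happiness(battery_pct, signal_bars):
    # Blend battery (0-100) and signal (0-5) into a 0-10 score.
    return 0.7 * (battery_pct / 100 * 10) + 0.3 * (signal_bars / 5 * 10)

def check_mood(battery_pct, signal_bars, threshold=3.0):
    """Below the threshold the phone 'feels' unhappy and complains."""
    if happiness(battery_pct, signal_bars) < threshold:
        return "unhappy: please recharge me"
    return "content"

print(check_mood(10, 1))   # low battery, weak signal: unhappy
print(check_mood(90, 4))   # content
```

With only two inputs the scale is obviously superfluous; the interesting case is when there are so many inputs that a single summary number is the only tractable readout, which seems to be the point about humans and animals.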


[Edited to fix quoting - Dave W.]

Tom

`Contrariwise,' continued Tweedledee, `if it was so, it might be; and if it were so, it would be; but as it isn't, it ain't. That's logic.'
-Through the Looking Glass by Lewis Caroll-

Dave W.
Info Junkie

USA
26020 Posts

Posted - 11/21/2006 :  10:53:34
quote:
Originally posted by tomk80

The question then becomes: why do it? I mean, why would you program such a happiness scale if it is in fact just superfluous code? What would a general 'feeling' of happiness add to a program that would be more useful to us than a direct determination of problems and response to them?
Why do humans have a "happiness scale" if it's superfluous? I don't think it is superfluous, for us, in that it probably conveys survival benefits.

And in the field of AI, the holy grail isn't problem-solving per se, but instead the creation of a set of code which can pass a Turing Test. I doubt that an emotion-free Spock-like artificial intelligence would be viewed as anything other than a computer program, no matter how smart it sounds.

Doug Adams, of course, took the notion to an absurd extreme (thankfully). Why would anyone want a door that was genuinely glad to have opened at the right time, when much simpler, cheaper and less neurotic hardware will perform the same job equally well?
"You watch this door," he muttered, "it's about to open again. I can tell by the intolerable air of smugness it suddenly generates."

With an ingratiating little whine the door slid open again and Marvin stomped through.

"Come on," he said.

The others followed quickly and the door slid back into place with pleased little clicks and whirrs.

"Thank you the marketing division of the Sirius Cybernetics Corporation," said Marvin, and trudged desolately up the gleaming curved corridor that stretched out before them. "'Let's build robots with Genuine People Personalities,' they said. So they tried it out with me. I'm a personality prototype. You can tell, can't you?"

Ford and Arthur muttered embarrassed little disclaimers.

"I hate that door," continued Marvin. "I'm not getting you down at all am I?"
Nah, the point isn't to actually go and do that sort of thing (not yet, at least). The point is simply one of discovery. While all the evidence we've got right now points to mind being an emergent property of a brain, we won't really nail it down until we can build a fake brain which acts like a real one.

It may also be the case that in order to obtain the self-directed goal-setting and problem-solving that we'd like to see in robots - traits necessary if we're to be able to redecorate and not have to spend hours retraining our Household Helperbots™ - they get emotions as a "side effect." If we can figure out where the damn things come from, it may be that we will be able to predict that hooking umpty-ump "neuron" computers together would generate the phenomena of mind, ego and desires.

- Dave W. (Private Msg, EMail)
Evidently, I rock!
Why not question something for a change?
Visit Dave's Psoriasis Info, too.

ergo123
BANNED

USA
810 Posts

Posted - 11/21/2006 :  11:08:13
Well I'm glad my raising the issue of desires and AI has generated such a rich discussion.

Regarding davie's comment, "Maybe it's better to just ask 'what is a desire?' not in a definition sense, but rather in terms of how desires originate and act in our heads," and the comment about what makes a reflex different from a desire: these are all important questions for this topic. I didn't bother bringing them up as I already know their answers--but feel free to discuss them amongst yourselves.

Tom, your phone example is good in that it describes a simple desire in action. Of course, it is so simple it makes one wonder, like you did, what's the point. But there is a point--especially when related to more complex tasks.

Our desires help us focus. Our desires help to filter out, from our sense data, information we have learned does not help us attain our desires. Of course, we often see this focus go astray. This could be because our desires do not really help us focus (i.e., I'm wrong on this point), or it could be a result of competing desires. And while the former is possible, I find the latter more probable based on my personal experience with both. I think that for a complex task to be handled via AI, some sense of desire needs to be incorporated: some sense of urgency that directs the program to cut some corners, but also the right corners.
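The desire-as-filter idea can be sketched in a few lines. This is an illustrative sketch only (the sensor feed, desire names and relevance sets are all invented): the current "desire" selects which sensor readings survive, so the program spends its processing only on data relevant to its goal.

```python
# Sketch of a "desire" acting as an attention filter over sensor data.
# All names and values here are invented for illustration.
SENSOR_FEED = [
    {"kind": "obstacle", "dist": 0.5},
    {"kind": "battery", "level": 80},
    {"kind": "colour_patch", "hue": "orange"},
    {"kind": "ambient_noise", "db": 40},
]

# What each desire has "learned" is worth attending to.
RELEVANCE = {
    "find_orange_ball": {"obstacle", "colour_patch"},
    "recharge": {"obstacle", "battery"},
}

def attend(feed, desire):
    """Drop readings the current desire treats as irrelevant."""
    keep = RELEVANCE[desire]
    return [reading for reading in feed if reading["kind"] in keep]

for reading in attend(SENSOR_FEED, "find_orange_ball"):
    print(reading["kind"])  # obstacle, colour_patch
```

Competing desires would correspond to two relevance sets pulling the filter in different directions, which matches the observation that focus can go astray.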

No witty quotes. I think for myself.
The mission of the Skeptic Friends Network is to promote skepticism, critical thinking, science and logic as the best methods for evaluating all claims of fact, and we invite active participation by our members to create a skeptical community with a wide variety of viewpoints and expertise.

