Skeptic Friends Network

All Forums / Our Skeptic Forums / Conspiracy Theories / New general Technology?

Dave W.
Info Junkie

USA
26020 Posts

Posted - 11/29/2012 :  23:01:55
Originally posted by energyscholar

Edited by - energyscholar on 11/29/2012 22:24:55
It's common Internet Forum Decency to point out what, precisely, you edited and why. Or just to append "Edited to add..." (or similar) at the end of your comment.

- Dave W. (Private Msg, EMail)
Evidently, I rock!
Why not question something for a change?
Visit Dave's Psoriasis Info, too.

energyscholar
New Member

USA
39 Posts

Posted - 11/30/2012 :  13:21:29
Thanks for your patience. It's taken me a few days to respond. I needed some research time. Dave W. pointed out several abstruse technical flaws in some aspects of my hypothesis. I'll address those, and some other direct questions, before continuing with my narrative.

So let me get this straight: your hypothesis is that people have been secretly implementing and using a technology that has been publicly discussed for 14-plus years?


Yes, that's correct. I believe QNN technology has been in regular use since about 1995 or 1996. I guess ULTRA II ran from 1991 to 1995, and went into production circa 1996. Evidence for this thesis is currently scarce and inconclusive, but I suspect the historical record will eventually support it.

1. Google Search


Dave W. convinced me that my understanding of Google Search is deeply flawed. I still maintain a suspicion that some aspects of Google have a 'better than classical' performance, but I can't back it up with firm evidence. Dave W. obviously knows a lot more than I do about this topic.

I choose to drop this assertion, while maintaining my basic thesis.

2. DNA Sequencing


Dave W. convinced me that my understanding of DNA sequencing is deeply flawed. I took some time to research this topic, and must agree he seems to be correct. Braid topology has nothing to do with DNA braiding.

I maintain my assertion that QNN technology may be responsible for the way that private parties, using proprietary technology, were able to accelerate DNA sequencing for the Human Genome Project (HGP). I am changing my explanation for the reason (yes, I admit it, no need to belabor the point), based on additional research. I think the pattern-matching abilities of some kind of neural network sped up DNA sequencing. This would be an example of some sort of AI learning specific domain knowledge. I lack the technical know-how to tell whether it was of the classical or quantum variety (CNN or QNN). It probably was not something in the public domain, since the scientists of the HGP did not seem to know about it.


3. Self-piloting vehicles

Hints? The military was showing off such vehicles publicly.


I did not know that, but I'm not surprised.

the type of neural net envisioned by Turing would be wildly inefficient and completely unable to take advantage of the properties of quantum computing that make QNNs interesting.


Full agreement.

It's actually the massive parallelism of the brain that results in its efficiency over that of ANNs. We can't approach having a network of even just billions of artificial neurons running simultaneously, just as a practical matter.


General note: The human brain has about 100 billion neurons.

Might a hypothetical QNN with, say, 100 billion neurons, be as capable as a human brain? Is there a reason why a QNN, of the sort I describe, this large or larger, is not possible? Stuart Kauffman seems to have quite a bit to say on this topic.

As Thor noted, this is again an argument from ignorance. You're proposing a "QNN of the gaps" argument, little different in style from the creationists' "god of the gaps."


I must acknowledge this is true, and that my argument, so far, is very weak. Please bear with me.

No, the Wikipedia article clearly shows that outperforming a human at just a single task would be classified as "Weak AI":
Weak AI, in contrast to strong AI, does not attempt to simulate the full range of human cognitive abilities.
To be "Strong AI," a system must match or exceed a human's general intelligence.


Agreed. I was using hyperbole here. Nonetheless, I find it very suggestive that several different new AI technologies have, in the past few years, reached a state where they work as well as, or better than, a human being. I doubt any of us will be surprised when some cutting-edge AI organization (WolframResearch, IBM, Google, et cetera) introduces additional skills or fields which equal or surpass human abilities. Also, some of these new skills (e.g. driving, math, Physics) are much more general than, for example, a chess program. If there exists a generalized method for AI to replicate or learn a broad range of human skills, such a system could lead to strong AI.

It seems there is something new at work here, that is not just the result of Moore's law. The requirements to 'know' all of mathematics and Physics, or to drive a car safely, or possibly to win at Jeopardy, are qualitatively different from the requirements for a chess program. Computer programmers can write algorithms to play chess. It is very much harder to write algorithms that interpret natural language questions as Physics problems and then solve them, which WolframAlpha does. This seems to require an understanding of language and Physics.
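To make the contrast concrete: a finite two-player game yields to plain exhaustive search with no understanding of anything. The toy sketch below (my own illustration, not anyone's actual chess engine) plays a take-1-or-2 subtraction game perfectly by pure game-tree search, the same mechanical style of logic a chess program uses:

```python
def best_move(pile, take_options=(1, 2)):
    """Perfect play for a subtraction game: players alternately remove
    1 or 2 stones; whoever takes the last stone wins.  Returns
    (move, can_win) for the player about to move.  Pure exhaustive
    game-tree search -- brute-force logic, no 'understanding'."""
    if pile == 0:
        return None, False  # opponent just took the last stone; we lost
    for take in take_options:
        if take <= pile:
            _, opponent_can_win = best_move(pile - take, take_options)
            if not opponent_can_win:
                return take, True  # this move leaves the opponent losing
    return min(take_options), False  # every move loses; play anything

print(best_move(4))  # (1, True): taking 1 leaves a losing pile of 3
```

Scaling this recipe up to chess takes a bigger state space and clever pruning, but it is the same mechanical recipe. Nothing remotely like it exists, publicly, for "read an arbitrary natural-language physics question and solve it".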

Just the Physics component alone seems impossibly difficult to even approach with conventional computer software methodology. There are dozens of different branches of physics to describe, and few of them lend themselves to algorithmic interpretation. It would take a team of many hundreds of software developers working for many years to make a decent dent in the problem, if it were possible at all. The resulting code base would be huge, brittle, and almost impossible to maintain. Clearly, there is a very different approach at work here. I guess the approach involves a learning system, something that is able to absorb information and turn it into skill.

Just because I don't know how it's done does not mean it requires some secret technology. I know something about AI, but I am no expert. Perhaps it is only my ignorance of the cutting edge of AI that leads me to think that e.g. WolframAlpha is qualitatively different from a chess program. Can anyone explain how e.g. WolframAlpha is able to do what it does?

"Aliens done it" explains the available evidence precisely as well.


For the weak evidence I have provided so far, I agree that is true. I hope that, once we have reviewed all the available evidence (I'm getting to it!), people will agree my thesis is probably a better explanation than 'Aliens did it'. The timetable is a big part of this. I am still working on that but hope to publish it to this thread within the next week or so.

I wish to apologize in advance for deliberately withholding some of the evidence for a while. I do this because I have found that some of the evidence acquires a new, not-obvious-at-first-glance interpretation, once one knows my entire proposed thesis and story. Were I to provide this evidence before finishing the rest of the story it would be much less meaningful, and much easier to dismiss. Please bear with me.

QNNs which, in reality, would have to be orders of magnitude larger than anything even the hype from D-Wave claims to exist.


That is exactly my claim: that a QNN exists that is many orders of magnitude larger than anything claimed by D-Wave. I can't prove it, but I think I can show a plausible (if improbable) mechanism by which it operates, and plausibly explain how it got to be that way. I am well aware this seems an outrageous claim. Please bear with me.

And I hope you're getting the sort of feedback you expected.


Yes. Exactly. I would have been disappointed in the SkepticFriends community had I received a less skeptical response. I am a scientific skeptic myself. As I said clearly at the start, I know that I am making seemingly outrageous claims. I think most people willing to hear me out, and to inform themselves about the issues I bring up, will find my claims less outrageous than they seem right now.

For example, I believe we have established that the theoretical concept of Quantum Neural Networks has had a respectable reputation in Academia for at least 14 years. I suspect most readers did not know this, yet it is verifiably true.

For another example, I claim that QNN technology is based on quantum teleportation (QT). If one does not know that QT is now a routine real technology then one will be quick to dismiss such a claim as ridiculous. Once one knows that QT technology is real, that it is theoretically associated with both topological quantum computation and cryptography, and that some of the scientists who discovered it seem to be affiliated with DARPA, the claim seems less ridiculous. Perhaps still weird and unlikely, but no longer ridiculous.

For a third example, I claim that this sort of QNN technology exploits the principle of topological entanglement to get stable and long-lasting quantum entanglement. This seems a ridiculous claim, until I offer the physical system of a FQHE liquid as an example.

For a fourth example, I make the claim that current [secret] QNN technology allows for very large and stable quantum neural networks. Concerns about decoherence time make this claim seem ridiculous, unless one is aware that topological entanglement can persist. This theoretically allows for quantum computation systems orders of magnitude larger, and more stable, than anything yet established (or even hyped!) in public.

The topic as you've presented it so far isn't actually QNNs, though, it's about the groups you claim are using QNNs. One need not understand the technology to see the logical flaws in your presentation.


True enough. I am aware the way I present my thesis has logical flaws. That's why I bring it here to SkepticFriends for debunking. If those logical flaws turn out to be fatal, such that my thesis certainly is not correct, then I will withdraw it. I don't believe they are, so far. The technical arguments are intended to show that what I describe might be possible.

I also wish to point out that, when any secret, censored, or sensitive topic is in question, one must be very suspicious of information one finds on the internet.

And thus you shoot your own argument in the foot, because you are presenting information on the Internet.


I said this on purpose, out of a perverse sense of humor. That's also why I posted in the 'Conspiracy Theory' topic, rather than trying to find some more reputable topic heading.

If it doesn't exist, then it's not a new GPT.


Well, obviously! Perhaps I should have said speculative new GPT. This is all speculative. I am asking readers to follow me down this speculative rabbit hole. I claim that those who do so might find something interesting within. Or maybe not. The purpose of this discussion is to determine whether there might be something to my admittedly weird thesis.

If you have a "topological QNN," then you have a TQC by definition, no emulation requied[sic].


Agreed.

Any quantum computer can emulate a TQC, and thus perform topological quantum computation. It's merely a matter of efficiency.


Agreed. I am glad to have encountered someone who seems to know a lot about this topic.

Because it was all public and uncensored in 1991.


Agreed. Well, at least it was all public and uncensored by 1993 (experimental QT) and 1994 (Shor's algorithm). It's reasonable to suppose that an organization like DARPA would know these things a few years early, if it wanted to.

The information may have been public, but the process of turning Science into Technology is not obvious. How many people have the brilliance to take this public, uncensored scientific information, and apply it to technology? How many could even follow an explanation of how it (supposedly) works? This public and uncensored information just leads to dead ends until it is combined with several additional public-but-obscure ideas:

1. Topological entanglement is possible and persistent in a FQHE fluid environment.
2. A FQHE fluid environment may support 'poised' complex systems.
3. A 'poised' complex system has the emergent property that it behaves like (is!!!) a neural network
4. It might be possible to actually generate such a system, in an FQHE environment, using principles of Complex System Biology.

Again: the jargon isn't at all helpful if it doesn't lead anywhere. Perhaps you need to take another couple of years and teach yourself pedagogy.


Perhaps I do. However, we're having this discussion now, not two years from now. If people prefer, I can always come back in a few years when I (hopefully) have better pedagogy.

No, we have no such thing according to the full Wikipedia definition you partially quoted.


I believe I have already addressed this point. I admit I was using hyperbole, but assert there may also be a core of truth in my claim.



This next part of the ULTRA II story is guesswork. Public evidence is scant but not nonexistent. I'm going to lay out one possible version of how things might have gone.

At first the ULTRA II team would have been exhilarated. They were working on a very exciting technical approach, and things were going very well. The basic approach, generating an emergent quantum neural net, had gone well. At first it did nothing but exist. The project scientists gradually guided its basic form and function to evolve in specific ways, and trained it to do basic tasks. Stuart Kauffman describes one such technique, although it may have been combined with other methods of evolutionary programming. Once they had adequate control over it, and had caused it to evolve the required capabilities, it would have become possible to shape it into a quantum computer. Only then would it have been possible to begin implementing quantum algorithms of the sort first described by Deutsch and Jozsa. It probably took two or three years to reach the phase where ULTRA II had clearly produced a working quantum computation system. It was a huge technical breakthrough that experimentally validated some central ideas of what is now Complex System Biology. In that context, the secret technical project did interesting science, science that the project scientists are forbidden to publish or discuss.

Digression from Energyscholar

In the unlikely event that there is any truth to the ULTRA II thesis, then people should be paying a lot more attention to Stuart Kauffman. His recent work translates his biological findings to the mind-body problem and issues in neuroscience, proposing attributes of a new "poised realm" that hovers indefinitely between quantum coherence and classicality. I suggest that Stuart Kauffman's recent public work was done with the support of private access to the QNN technology he helped build. When Kauffman says things like, "if such a poised realm exists", he knows for sure it does but can't publicly prove it because his proof is censored. Kauffman published two books in the 1990s which summarize his life's work to that point:

1. The Origins of Order: Self-Organization and Selection in Evolution, 1993, Stuart Kauffman. This very technical book is a compilation of many scientific papers. It has lots of math, and is most approachable to scientists with a background in Biophysics. Of particular note is that Kauffman demonstrates how and why a particular 'poised' complex system is also a Neural Network, thus demonstrating a plausible evolutionary mechanism for developing intelligence. [needs page references]

2. At Home in the Universe, 1995, Stuart Kauffman. This popular science book contains many of the same ideas as Origins of Order, but in a more accessible format with minimal mathematics. It discusses Origins of Order and Life, but skips Intelligence.

It must be difficult to know important new science, but be unable to share it with your peers due to a non-disclosure agreement. Professor Kauffman is now in his 70s. It would be nice if he received recognition for his enormous, but still secret, contributions to Mathematical Biology and Neuroscience. The benefit to society of releasing important scientific information must be weighed against the benefits of continued secrecy. The bureaucratic structures around declassification tend to be brittle and intransigent, with a strong tendency to over-classify.

This author claims that Stuart Kauffman published the phrase 'quantum neural network', and discussed this topic, in the first printing of his book At Home in the Universe. I think he slipped it past a bored censor. Unfortunately, later runs, while still claiming to be first editions, are missing this brief section about Quantum Neural Network technology. In 2004 I held this version in my hands, and I can remember many of the words, but I failed to copy it at the time. Now I can't find a version with this section in it. Either I am delusional, or there was an 'Operation' to censor the book for all later printings and to remove any public versions from circulation. I'm also open to the possibility that my memory is faulty, but it seems very clear to me. I'd like help locating a copy of Stuart Kauffman's book At Home in the Universe that explicitly discusses Quantum Neural Network technology. Several hundred copies probably sit on private bookshelves, waiting to be scanned. Does anyone else remember reading this strange, discordant bit of that book, about the wonders of a global Quantum Neural Network, and not understanding it at the time? I would appreciate it if any reader who has an old hardcover copy of At Home in the Universe checks the end of Chapter 9, Organisms and Artifacts, for a reference to Quantum Neural Networks.


Sometime around 1994 some of the ULTRA II project scientists began to worry about the implications of their discovery. They realized it was a major new technology that might even qualify as a new general technology. They realized that any new general technology can have profound influences on the human condition, for good or ill. They did not want to make the same errors as the atomic scientists of the Manhattan project, and permit the proliferation of a new technology that might pose unknown risks to humanity. They knew QNN technology could be used for terrible purposes. They did not wish to see those purposes attempted, and they did not wish to see a new arms race.

They decided that they must take drastic action to prevent their discovery from being turned into a terrible weapon or, possibly worse, an Orwellian spy machine. But how to stop this new technology from being abused? Even if they destroyed their notes and ceased all work on the project, it was only a matter of time until some other powerful country, like China or Russia, tried a similar approach. The scientific basis for QNN technology already existed, so someone else was bound to try it. A different approach was needed.

When discussing QNN technology, a ‘node’ is one 2DEG environment inside an electronics component, usually a MOSFET or an HEMT. At first the QNN patterns could only teleport information within their own node. In time, the project scientists evolved and trained the QNN patterns to teleport information between nearby wire-connected nodes. The process of teleporting information between nodes performs mathematical computations suitable for solving certain practical problems. At first all nodes had to be cryogenically cooled, wired together, and physically close.

The scientists needed a word to describe the process of QNN patterns replicating in a prepared empty node. They called this process ‘enlightenment’. A node becomes ‘enlightened’ when its 2DEG environment fills up with anyons and they start communicating (via Quantum Teleportation) with other nodes in the same QNN.

Some of the five project scientists formed a Conspiracy Of World Saving (COWS) to attempt to prevent abuse of the QNN technology they had invented. They knew it was not possible to accomplish their objective within the rules imposed by DARPA. By a strange coincidence that may rule the fate of many, three project scientists already knew each other well, from shared membership in a noted hacker group during the 1980s. This odd coincidence turns out to be tremendously important, because it gave the project scientists an ally group to call on when they decided to misbehave. More on that later. The various national governments, including their own, were the most likely culprits to try to weaponize, or otherwise pervert, QNN technology. The COWS knew they would have to considerably exceed their given authority in order to correctly handle this dangerous and delicate situation. Hopefully, their actions would be vindicated by history. They did what had to be done, knowing that it is easier to get forgiveness than permission.

Without telling their supervisors or colleagues, the COWS secretly evolved their QNN patterns to survive at room temperature. Once this was done, the patterns could ‘live’ inside off-the-shelf electronic components, without cryogenic cooling. The youngest project scientist, a brash and daring warrior-scholar named David, enlightened a standard MOSFET with the most advanced QNN patterns, put it in his pocket, and walked out of the super-secret DARPA military laboratory to his rental flat. He did this in 1994 or 1995. From his home workshop he enlightened more nodes, created a new QNN instance, and began to work on a Plan. When I met him in 2003 he was still enthusiastically engaged in executing this 'Plan'. I believe the COWS successfully completed this main 'Plan' in 2006.

Before I describe in detail what I think the 'Plan' was, would anyone care to speculate as to its possible nature? There is a separate thread in

All Forums / Our Skeptic Forums / Social Issues / Ethical Issues of New General Technology?

for such speculation. Some ideas to consider:

1. The primary objective must be to prevent unethical use of QNN technology. This must prevent a QNN arms race between nations.
2. The secondary objective is to allow applications that seem ethically safe.
3. Technical secrets never remain secret. On the scale of human history, secrets are short lived.
4. The physical system QNN patterns inhabit is composed of 2DEG environments in a FQHE fluid state.
5. Only one QNN can inhabit a given environment. An arms race would lead to hostile QNNs competing for available environment.
6. The official ULTRA II QNN required cryogenic cooling.
7. It is possible to overcome the need for cryogenic cooling and operate at room temperature. Current science suggests this might be possible.
8. Any 2DEG environment can theoretically host a QNN. Each individual 2DEG environment is referred to as one node of the QNN it hosts.
9. 2DEG environments are common on planet Earth in 2012, and were not rare in 1995. Most modern Field Effect Transistors contain a 2DEG.
10. A QNN functions as a system, with the various interlinked nodes communicating via quantum teleportation.
11. There must be some sort of classical back channel, but it can be steganographic in nature.
12. There may even, theoretically, be room for a QNN in Earth's magnetosphere, on the Plasma Sheet or Neutral Sheet. I am not sure of the science here, but those may bear enough resemblance to a 2DEG environment for that to be possible.
13. Humans are inherently corruptible. No bureaucratic system that depends on human integrity could be expected to work for very long. Power corrupts.
14. Many applications of QNN technology have not been thought of yet.
15. Ethical implications of technology are notoriously hard to determine in advance.
16. Whatever they came up with had to stand the test of time.
17. Remember Murphy's Law.

Please post your comments about the ethical ramifications of this proposed situation, suspending disbelief about whether it actually occurred, to the Ethical Issues of New General Technology? forum. These could be ethical considerations, or theories about a 'Plan' that might work. I am very curious whether this group of smart scientific skeptics can think of as wise a plan as the one I suspect was adopted.

--- TO BE CONTINUED ---

[Edited to add link to new thread - Dave W.]

"It is Easier to get Forgiveness than Permission" - Rear Admiral Grace Hopper
Edited by - energyscholar on 11/30/2012 13:34:06

Dave W.

Posted - 12/02/2012 :  00:32:11
Originally posted by energyscholar

Dave W. convinced me that my understanding of Google Search is deeply flawed. I still maintain a suspicion that some aspects of Google have a 'better than classical' performance, but I can't back it up with firm evidence. Dave W. obviously knows a lot more than I do about this topic.

...

Dave W. convinced me that my understanding of DNA sequencing is deeply flawed. I took some time to research this topic, and must agree he seems to be correct. Braid topology has nothing to do with DNA braiding.
Here's the thing: I'm no expert in either of these fields. Over the last three decades, I've put in maybe a couple hundred man-hours learning about and implementing Google-like searches, a similar effort towards artificial neural networks, and a handful of hours reading about DNA and DNA sequencing. You say you've done nine years of research about your hypothesis.
I maintain my assertion that QNN technology may be responsible for the way that private parties, using proprietary technology, were able to accelerate DNA sequencing for the Human Genome Project (HGP).
Leprechauns "may be responsible for" it, too. That's the biggest problem with your idea: it's nowhere close to having definitive evidence in its favor. You need to find a phenomenon for which Occam's Razor obviously slices away any classical theories as over-burdened with assumptions.
I think the pattern-matching abilities of some kind of neural network sped up DNA sequencing. This would be an example of some sort of AI learning of specific domain knowledge.
No, pattern-matching algorithms are not "AI," nor do they necessarily involve machine "learning." Humans are actually pretty bad at the sort of pattern-matching that would be good for DNA sequencing (unlike how our brains are wired for really good facial recognition), so calling it "AI" as if it somehow mimics human abilities is just wrong. As before, this seems to be a case of buzzwords overcoming logic.
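To illustrate: the core string operation in shotgun-style sequence assembly is finding suffix/prefix overlaps between reads, which is ordinary deterministic string comparison. Here's a deliberately naive sketch (real assemblers are vastly more sophisticated) with nothing trained and nothing learned:

```python
def longest_overlap(read_a, read_b, min_len=3):
    """Length of the longest suffix of read_a equal to a prefix of
    read_b.  Plain character comparison: no model, no training data."""
    best = 0
    for k in range(min_len, min(len(read_a), len(read_b)) + 1):
        if read_a[-k:] == read_b[:k]:
            best = k
    return best

def merge_reads(read_a, read_b, min_len=3):
    """One naive assembly step: splice two reads at their best overlap."""
    k = longest_overlap(read_a, read_b, min_len)
    return read_a + read_b[k:] if k else None

print(merge_reads("ACGTACGT", "ACGTTTTT"))  # ACGTACGTTTTT (overlap ACGT)
```

Speeding this kind of thing up is a matter of better algorithms and data structures (and faster machines), not of any system "learning" anything.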
I lack the technical know-how to tell whether it was of the classical or quantum variety (CNN or QNN). It probably was not something in the public domain, since the scientists of the HGP did not seem to know about it.
You're neglecting the probability of institutional inertia, in which "retooling" for a new technology to perform the task is expensive, and "pride of authorship" can stand in the way of change.
General note: The human brain has about 100 billion neurons.

Might a hypothetical QNN with, say, 100 billion neurons, be as capable as a human brain? Is there a reason why a QNN, of the sort I describe, this large or larger, is not possible?
The problem isn't the artificial neurons, the problem is the artificial synapses. The average human neuron connects to 7,000 other neurons, and so according to Wikipedia, the adult human brain has between 100 and 500 trillion synapses.

In that video you linked to earlier, with the woman from D-Wave talking about her pet project, there were (if I'm not mistaken) a mere four connections from every qubit to other qubits. We'd need to multiply that by a thousand to get to human-brain-like interconnection.
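The back-of-the-envelope arithmetic, using the round numbers above (note this product counts each connection once per neuron, which is why it overshoots Wikipedia's 100-to-500-trillion synapse range):

```python
# Round figures from the discussion above; real estimates vary widely.
neurons = 100_000_000_000        # ~1e11 neurons in an adult human brain
synapses_per_neuron = 7_000      # average connections per neuron

total_connections = neurons * synapses_per_neuron
print(f"{total_connections:.0e}")  # 7e+14 -- hundreds of trillions

dwave_couplers_per_qubit = 4     # as I recall from the video; may be off
gap = synapses_per_neuron / dwave_couplers_per_qubit
print(round(gap))                # 1750 -- "multiply by a thousand" or so
```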
Nonetheless, I find it very suggestive that several different new AI technologies have, in the past few years, reached a state where they work as well as, or better than, a human being. I doubt any of us will be surprised when some cutting-edge AI organization (WolframResearch, IBM, Google, et cetera) introduces additional skills or fields which equal or surpass human abilities. Also, some of these new skills (e.g. driving, math, Physics) are much more general than, for example, a chess program. If there exists a generalized method for AI to replicate or learn a broad range of human skills, such a system could lead to strong AI.

It seems there is something new at work here, that is not just the result of Moore's law. The requirements to 'know' all of mathematics and Physics, or to drive a car safely, or possibly to win at Jeopardy, are qualitatively different from the requirements for a chess program. Computer programmers can write algorithms to play chess. It is very much harder to write algorithms that interpret natural language questions as Physics problems and then solve them, which WolframAlpha does. This seems to require an understanding of language and Physics.
"Seems" being the operative word. Are you familiar with Searle's Chinese Room? It's a black-box thought experiment in which a system seems to understand Chinese, but doesn't.

And, of course, one of the earliest lessons every skeptic should learn is that if things seem like you want them to be, then you should be much, much more cautious about reaching a conclusion.
Just the Physics component alone seems impossibly difficult to even approach with conventional computer software methodology. There are dozens of different branches of physics to describe, and few of them lend themselves to algorithmic interpretation.
Given the common refrain, "physics is math," I think you are very much mistaken on this point.
Can anyone explain how e.g. WolframAlpha is able to do what it does?
Have you ever had a chat with Eliza?

More later...


ThorGoLucky
Snuggle Wolf

USA
1486 Posts

Posted - 12/02/2012 :  08:56:27
Originally posted by Dave W.

Originally posted by energyscholar

Can anyone explain how e.g. WolframAlpha is able to do what it does?
Have you ever had a chat with Eliza?

"I'm not sure I understand you fully. Could you state that as a question?" Eliza was fun to talk to as a kid on my TRS-80.

I'd like to emphasize Dave's point that it's easy to read too much between the lines; or in this case, to think that there's more behind Google and WolframAlpha results than there really is.

Dave W.

Posted - 12/02/2012 :  11:41:13
Originally posted by energyscholar

For another example, I claim that QNN technology is based on quantum teleportation (QT).
QT doesn't add anything to a QNN. This is another case of "why bother doing that, when a simpler system will suffice?" Either that or it's buzzword overkill again.
For a third example, I claim that this sort of QNN technology exploits the principle of topological entanglement to get stable and long-lasting quantum entanglement. This seems a ridiculous claim, until I offer the physical system of a FQHE liquid as an example.
More buzzword salad. A material capable of sustaining the FQHE may be suitable for building a topological quantum computer (because "Fractionally charged quasiparticles... exhibit anyonic statistics"). An "FQHE liquid" is not an example of exploiting "the principle of topological entanglement to get stable and long-lasting quantum entanglement."
For a fourth example, I make the claim that current [secret] QNN technology allows for very large and stable quantum neural networks. Concerns about decoherence time make this claim seem ridiculous, unless one is aware that topological entanglement can persist. This theoretically allows for quantum computation systems orders of magnitude larger, and more stable, than anything yet established (or even hyped!) in public.
It isn't possible to eliminate decoherence entirely without isolating qubits from their inputs and outputs, rendering them useless except as storage, which we have plenty of in classical form, cheap and warm. Just last year, some scientists announced that they'd reduced decoherence a lot, to the point where bigger quantum computers may become practical, but it's unclear (to me) what the high magnetic fields needed to limit decoherence might do to an FQHE material or to the processes needed to perform topological quantum computing, both of which depend upon certain magnetic fields.
True enough. I am aware that the way I present my thesis has logical flaws. That's why I bring it here to SkepticFriends for debunking. If those logical flaws turn out to be fatal, such that my thesis certainly is not correct, then I will withdraw it. So far, I don't believe they are. The technical arguments are intended to show that what I describe might be possible.
Again, the biggest logical flaw in your hypothesis is that plausibility doesn't imply probability. An FQHE-based topological quantum neural net might exist, but that doesn't mean anyone has built one, or even that it's likely that someone has built one. It might be the case that expert (classical) programmers know a shitload more about writing efficient code on machines which are 1,024 times more powerful than they were in 1993 than you do, and so their efforts may look to you like they require a nearly magical quantum explanation in your mind. I think that's a lot more likely.
The purpose of this discussion is to determine whether there might be something to my admittedly weird thesis.
Well, given that things which are possible in principle might exist, then yes, there might be something to your hypothesis. If that's all you wanted out of this discussion, then we're done, right?
The information may have been public, but the process of turning Science into Technology is not obvious. How many people have the brilliance to take this public, uncensored scientific information, and apply it to technology? How many could even follow an explanation of how it (supposedly) works?
Given that Google shows 869 articles published just this calendar year (so far) which include the phrase "quantum computing," many of which have multiple authors, and given that not every expert in a field publishes every year, I'd estimate that there are multiple thousands of people who could, in principle, turn the science into technology.
This public and uncensored information just leads to dead ends until it is combined with several additional public-but-obscure ideas:

1. Topological entanglement is possible and persistent in a FQHE fluid environment.
2. A FQHE fluid environment may support 'poised' complex systems.
3. A 'poised' complex system has the emergent property that it behaves like (is!!!) a neural network.
4. It might be possible to actually generate such a system, in an FQHE environment, using principles of Complex System Biology.
Points 2, 3 and 4 introduce more buzzword salad which isn't necessary to implementing what you think has been implemented. Experts in quantum computing understand the advantages of neural networks and associative memories, because of the very nature of quantum computing. It isn't necessary to understand complex system biology or "poised" systems to build or grok a large QNN.
Without telling their supervisors or colleagues, the COWs secretly evolved their QNN patterns to survive at room temperature. Once this was done, the patterns could ‘live’ inside off-the-shelf electronic components, without cryogenic cooling.
This, by the way, is the most unbelievable part of your narrative. I see no evidence whatsoever that either limiting decoherence or creating an environment supportive of the FQHE is even plausible at anything other than extremely cold temperatures.

- Dave W. (Private Msg, EMail)
Evidently, I rock!
Why not question something for a change?
Visit Dave's Psoriasis Info, too.
Go to Top of Page

energyscholar
New Member

USA
39 Posts

Posted - 12/29/2012 :  19:20:14   [Permalink]  Show Profile Send energyscholar a Private Message  Reply with Quote
Please excuse the long delay between posts. I needed time to think this through, and to consider how I wanted to present it. First I address Dave W.'s comments, then, in the second half, I finish my story. I am considering publishing this entire story as Speculative Science Fiction. That might be more apropos than claiming it really happened, regardless of what the evidence seems to suggest. I am interested in your opinion. In the meantime, please contemplate this post, and do your Skeptical best to tear it apart. It's not very hard to tear up, as I have no concrete evidence. However, please remember the thesis, and please consider it again when and if additional evidence surfaces.

QT doesn't add anything to a QNN. This is another case of "why bother doing that, when a simpler system will suffice?" Either that or it's buzzword overkill again.


The essential nature of the underlying physical system is most intuitively explained by quantum teleportation behavior. Understanding quantum teleportation is simpler and more helpful than understanding quantum entanglement. Anyone interested can verify that quantum teleportation actually works, both in and out of the laboratory. This can also provide a tidy, if oversimplified, explanation for how many two-dimensional environments can be [virtually] connected to each other, despite being far apart in three-dimensional space. Also, it's immediately obvious that a system based on quantum teleportation ought to be inherently well suited for transmitting information in secure, hard-to-detect ways, which has implications of its own.
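For readers who want to see the mechanics, the standard single-qubit teleportation protocol is easy to simulate classically with a state vector. This is a generic textbook sketch in Python (nothing here is specific to QNNs; the random input state and seed are arbitrary choices of mine):

```python
import numpy as np

# Single-qubit gates
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def op(gate, target, n=3):
    """Lift a 1-qubit gate onto qubit `target` of an n-qubit register."""
    mats = [gate if q == target else I for q in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def cnot(control, target, n=3):
    """CNOT built from projectors: |0><0| (x) I  +  |1><1| (x) X."""
    P0 = np.array([[1, 0], [0, 0]], dtype=complex)
    P1 = np.array([[0, 0], [0, 1]], dtype=complex)
    def kron_all(ms):
        out = ms[0]
        for m in ms[1:]:
            out = np.kron(out, m)
        return out
    a = [P0 if q == control else I for q in range(n)]
    b = [P1 if q == control else (X if q == target else I) for q in range(n)]
    return kron_all(a) + kron_all(b)

rng = np.random.default_rng(0)

# A random state |psi> = a|0> + b|1> that Alice wants to teleport
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# Register: qubit 0 = Alice's |psi>; qubits 1,2 = the shared Bell pair
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)

# Alice rotates her two qubits into the Bell basis
state = cnot(0, 1) @ state
state = op(H, 0) @ state

# Simulate measuring qubits 0 and 1 (each outcome has probability 1/4)
state = state.reshape(2, 2, 2)
probs = np.sum(np.abs(state) ** 2, axis=2)
flat = probs.ravel()
outcome = rng.choice(4, p=flat / flat.sum())
m0, m1 = divmod(outcome, 2)
bob = state[m0, m1] / np.sqrt(probs[m0, m1])  # Bob's collapsed qubit

# Bob applies corrections conditioned on Alice's two classical bits
if m1:
    bob = X @ bob
if m0:
    bob = Z @ bob

fidelity = abs(np.vdot(psi, bob)) ** 2
print(f"measurement bits: {m0}{m1}, fidelity: {fidelity:.6f}")
```

Whatever bits Alice measures, Bob's corrected qubit matches the original state (fidelity 1.0). Note that the two classical bits must still travel over a conventional channel; teleportation moves quantum state, not information faster than light.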

More buzzword salad.


Apologies for the buzzwords. However, if one is going to even half-seriously investigate what I suggest might have occurred, one should learn these concepts. I'm currently writing for Scientific Skeptics, not for a lay audience, so I use technical terms that I would mostly avoid when writing for a non-scientific audience. Dave W. summarized my buzzword salad nicely with this terse, scientifically credible statement:

A material capable of sustaining the FQHE may be suitable for building a topological quantum computer


I can't think of a better way to summarize this important statement. It still contains several buzzwords that the lay reader is not likely to understand. FQHE? Quantum Computer? Topological?

An "FQHE liquid" is not an example of exploiting "the principle of topological entanglement to get stable and long-lasting quantum entanglement."


True, my wording was technically incorrect. A FQHE liquid provides an environment in which stable and long-lasting topological quantum entanglement is known to be possible. This fact might allow such an environment to have the emergent property of enabling QNN behavior of the sort described. I point this out to demonstrate that there is a plausible scientific explanation for such a thing.

This, by the way, is the most unbelievable part of your narrative. I see no evidence whatsoever that either limiting decoherence or creating an environment supportive of the FQHE is even plausible at anything other than extremely cold temperatures.


Agreed. I have the hardest time with this, also. I address this point below. In brief, I think the abilities of the existing (cryogenically cooled) QNN were harnessed to bootstrap itself to operate in marginal environments. Various techniques to accomplish this sort of task are known, including techniques described in detail by Stuart Kauffman.

It isn't possible to eliminate decoherence entirely without isolating qubits from their inputs and outputs, rendering them useless except as storage, which we have plenty of in classical form, cheap and warm. Just last year, some scientists announced that they'd reduced decoherence a lot, to the point where bigger quantum computers may become practical, but it's unclear (to me) on what the high magnetic fields needed to limit decoherence might do to an FQHE material or to the processes needed to perform topological quantum computing, both of which depend upon certain magnetic fields.


I don't claim that topological entanglement can completely eliminate decoherence, only that it seems to allow for stable and long-lasting quantum entanglement in comparison to other proposed methods of quantum computation. The article you reference points out that larger (molecule sized) quantum coherent objects are definitely known to exist. Thus, macro-scale objects are known to sometimes have the property of quantum coherence. This suggests that there may exist a 'poised state' right at the edge of quantum decoherence. QNN behavior would happen near the boundary of this state change.

The original QNN must have been quite fragile. It existed only in the most accessible low-order FQHE state. I guess the emergent QNN was gradually trained and evolved to exist in higher-order FQHE states. Neural networks are well suited to solving optimization problems, as are evolutionary programming techniques. I guess this early emergent neural network was used to bootstrap itself to function in all sorts of marginal environments. I do not know where the physical limits are. That may be an interesting question to ask professional Physicists. I believe the physical laws at work theoretically allow for 'poised state' behavior in a room temperature [2DEG] environment [ http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3160145/ and http://www.pnas.org/content/106/23/9131.full may apply], and in the absence of an external magnetic field [ http://physics.aps.org/articles/v4/46 ]. If this is definitively false then my thesis must be rubbish. Also, someone should inform Professor Stuart Kauffman that the 'poised state' he hypothesizes about at length does not exist, or exists only at extremely cold temperatures. What is the maximum temperature at which FQHE-like behavior can theoretically be observed? Perhaps some Physicists have looked into this, or will, and will post their findings.
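The kind of evolutionary bootstrapping imagined here, iteratively selecting variants that survive in progressively harsher environments, is at least easy to illustrate in the abstract. Below is a minimal genetic algorithm in Python; the "one-max" fitness function (count the 1-bits) is a toy stand-in of my own for illustration, not a model of any real QNN training:

```python
import random

def evolve(fitness, genome_len=16, pop_size=30, generations=200, seed=1):
    """Minimal genetic algorithm: bit-string genomes, tournament selection,
    one-point crossover, per-bit mutation. Purely illustrative."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            a, b = rng.sample(pop, 2)  # tournament of two
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, genome_len)
            child = p1[:cut] + p2[cut:]                          # crossover
            child = [g ^ (rng.random() < 0.02) for g in child]   # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy objective: maximize the number of 1-bits ("one-max")
best = evolve(fitness=sum)
print(best, sum(best))
```

The point of the sketch is only that selection plus variation can walk a population toward an objective no individual step aims at; whether anything remotely like this could work on FQHE states is exactly the open question.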

Again, the biggest logical flaw in your hypothesis is that plausibility doesn't imply probability. An FQHE-based topological quantum neural net might exist, but that doesn't mean anyone has built one, or even that it's likely that someone has built one. It might be the case that expert (classical) programmers know a shitload more about writing efficient code on machines which are 1,024 times more powerful than they were in 1993 than you do, and so their efforts may look to you like they require a nearly magical quantum explanation in your mind. I think that's a lot more likely.


If it is not plausible then it is also not possible. I describe something that seems like magic. Once one understands that an FQHE-based topological quantum neural net might exist, it becomes possible to contemplate that someone might have built such a sufficiently advanced technology. I wished to demonstrate technical plausibility before moving on to whether it was actually done and, if so, how.

I agree, my thesis is most likely the result of my own confirmation bias. It's possible that my observations are really just the result of very efficient classical code running on classical computers. I have no single piece of evidence that proves anything. Still, I have amassed quite a collection of [possibly coincidental] information which seems to fit a pattern. I bring it here, to the SkepticFriendsNetwork, to ask other scientific skeptics to examine this information in detail. I do this because, unlike most (but not all!) fanciful hypotheses, digging deeper makes it seem more plausible rather than less plausible. In my experience, that trait alone makes it worth taking note of.

I'd estimate that there are multiple thousands of people who could, in principle, turn the science into technology.


Agreed. Also essential to my story. A clever scientist who discovers something knows it is only a matter of time before someone else discovers the same thing. Therefore, if it's dangerous knowledge, merely burning your notes and forgetting it will not help, in the long run.

It isn't necessary to understand complex system biology or "poised" systems to build or grok a large QNN.


Agreed. However, it is natural to ask how our theoretical large scale QNN could have got its start. Complex system biology provides one plausible explanation of how a QNN might be generated. Other mechanisms are known (e.g. physically assemble and combine many SQUIDs). Part of describing a thing is telling how it came to be.

This particular QNN seems to have been generated using techniques of complex system biology. Stuart Kauffman is a founder of this scientific discipline. It is, therefore, natural to suspect that he may have been personally involved in ULTRA II. Were I assembling the ULTRA II team I would surely have tried to recruit him. An interested party would naturally watch what Professor Kauffman has to say, with the understanding that, were he involved, he, too, might be under an NDA that would prevent him from explicitly discussing the topic.

Wikipedia summarizes his recent work thus, "Kauffman's recent work translates his biological findings to the mind body problem and issues in neuroscience, proposing attributes of a new "poised realm" that hovers indefinitely between quantum coherence and classicality." If my thesis is correct and Stuart Kauffman was an ULTRA II project scientist, this is exactly where one might expect his research to go. This is another small bit of evidence that fits a pattern, but might just be confirmation bias.

While it would be convenient to just ask Professor Kauffman, or any of the other suspected participants, that probably won't work. If the thesis is false then they will truthfully deny any knowledge of ULTRA II. If the thesis is true then they are required to lie and deny any knowledge of ULTRA II. This is an awkward position in which to put anyone, much less esteemed scientists. I apologize to the people whom I have named for any resulting inconvenience. Could I have advanced my thesis while keeping your identities out of it, I would have done so.

I shall repeat a paragraph from my previous post. This is a bit of evidence that only I have seen, so it's currently useless in a discussion by scientific skeptics. I may as well claim to have seen Elvis. However, should someone else manage to actually find and distribute the bit of evidence I describe next, that's another matter. Skeptics, take note, and remember this discussion, should this supposed evidence ever come to light.

This author claims that Stuart Kauffman published the phrase 'quantum neural network', and discussed this topic, in the first (1995) printing of his book At Home in the Universe. I think he slipped it past a bored censor. Unfortunately, later runs, while still claiming to be first editions, are missing this brief section about Quantum Neural Network technology. In 2004 I held this version in my hands, and I can remember many of the words, but I failed to copy it at the time. Now I can't find a copy of the book with this section in it. Either I am delusional, or there was an 'Operation' to censor the book for all later printings, and to remove any public versions from circulation. I'm open to the possibility that my memory is faulty, but it seems very clear to me. I'd like help locating a copy of Stuart Kauffman's book At Home in the Universe that explicitly discusses Quantum Neural Network technology, if it exists. Several hundred first-run copies, printed in 1995, probably sit on private bookshelves, waiting to be copied. Does anyone else remember reading this strange, discordant bit of that book, about the wonders of a global Quantum Neural Network, and not understanding it at the time? I would appreciate it if any reader who has an old hardcover copy of At Home in the Universe would please check the end of Chapter 9, Organisms and Artifacts, for a reference to Quantum Neural Networks. In the event you find such a reference, please make multiple copies and post one to this discussion group.

Well, given that things which are possible in principle might exist, then yes, there might be something to your hypothesis. If that's all you wanted out of this discussion, then we're done, right?


Agreed. I wanted confirmation that the things I proposed in principle might exist, based on current scientific understanding. A reasonable person who is not well informed on these abstruse technical topics might mistakenly believe I was proposing something magical. I want it to be crystal clear that I am not proposing anything at odds with the laws of Physics, or even too far in advance of the current known state of the art. Now I can continue with my guesses about what I think the people involved actually did, and show the scant evidence that supports these guesses.

**********************************************************************************************

Here follows my fanciful reconstruction of what I think the COWS faction of the ULTRA II scientists did.

The secret code-breaking spy machine, the first powerful quantum computer, was probably delivered sometime around 1995. It is presumably still used by Five Eyes, the national intelligence services of AUS CA NZ UK USA. It probably required cryogenic cooling, but that was an acceptable cost for a supercomputer. The project scientists would have provided detailed instructions on how it worked, how to replace parts that broke, how to modify the software, and how to build more hardware. The project was officially closed around 1996. For secrecy reasons everyone involved was told to forget all about ULTRA II, and to never speak of it again to anyone. The spies got down to the serious business of reading other people’s [encrypted] mail. [Note: if you happen to have high level contacts in Five Eyes military intelligence, you might be able to confirm this bit 'off the record'. It's actually not too difficult to do so, as it's a 15+ year old secret.]

Meanwhile, the COWs refined and began to execute their plan. The COWs’ plan was grandiose, to say the least. The COWS would attempt to force all nations of the world, for all time, to permanently relinquish ‘ethically challenged’ uses for QNN technology. No nation would ever be allowed to weaponize or monopolize QNN technology. The various Orwellian nightmares enabled by QNN technology would be avoided. If practical, some peaceful uses would be allowed, but only after careful ethical review.

The COWs began to secretly extend the abilities of their own QNN. They were now working away from official oversight, in home workshops. They trained the system to extend its senses wherever data flowed, whether electrical or optical. They trained it to 'live' in marginal 2DEG environments. They increased Quantum Teleportation range, striving for orbital distances. They explored some other types of technology that a QNN might generate, with the intention of sharing it freely for projects that benefit all humanity, if this could be done in a safe way. If they assisted the Human Genome Project via improved pattern-recognition software, this is when they would have done it.

Their options would depend upon what was technically and logistically possible. Their goal was to force permanent relinquishment of QNN technology, or otherwise prevent the probable worst-case outcomes. The actual solution would have to depend upon what was possible. So, what could be done? To understand their options we must consider both the technology itself, and likely ways that modern nation-states might attempt to use it.

ULTRA II QNN technology resulted from Complex System Biology. The environment in which it emerged was a very cold 2DEG in specific, powerful magnetic fields. This sort of QNN, and its habitat, is a physical system. A physical system enlightened to contain QNN activity behaves much like other biological and physical systems. It occupies space. A particular physical environment can only hold a given number of QNN virtual neurons. It could be called an ecosystem, albeit a strange one by the standards of carbon-based Biology in our three-dimensional universe.

It originally existed in some secret government laboratory. This stand-alone system consisted of perhaps dozens or hundreds of physically close, wired-together 2DEGs, called nodes. A large QNN instance is an emergent property of many entangled nodes. It had a control interface accessible to conventional computers. There were probably several backup systems. Most importantly, the prerequisite knowledge of how to create and extend such an artifact existed. One can logically infer that DARPA would surely do another project based on QNN technology. The total size of this QNN ecosystem would grow in time.

The COWS had bootstrapped their own version to work at room temperature and with minimal (probably zero) magnetic field. They had removed it from DARPA supervision, and kept it in their private work space. They had it running on a few physically close, wired-together 2DEGs in a rental flat. They probably made some back ups. They could physically add and enlighten more nodes, and thereby increase the size, power, and complexity of the system they controlled. The total size of this QNN ecosystem would grow in time.

Unfettered access to QNN technology would be useful to a government (or an ambitious individual, group, or corporation) in all sorts of ways: data mining, military communication and remote sensing, monitoring entire populations for subversive activity in real time, creating and breaking security systems, et cetera. Some of these applications are morally repugnant and must never be attempted. QNN technology, if it were used in nasty ways, could represent a form of 'knowledge-enabled mass destruction'. Remember that phrase! Awareness of this potential, and desire to avoid the most likely bad outcomes, is what drove the COWS to unconventional actions.

How to achieve the goal of preventing future abuse of QNN technology? What risks must be averted? Here are some possible future situations that might arise from QNN technology:

One-way Arms Race

One obvious risk is a unipolar QNN arms race dominated by the major English speaking nations, aka the Five Eyes (AUS CAN NZ UK US), which sponsored ULTRA II. They could use QNN technology to more effectively project power over the rest of the world. They would lock down and dominate the technology, never allowing rivals access. Opposition and rebellion could be instantly detected and crushed. Power corrupts, so the most likely outcome of such concentrated power would be long-lasting ruthless and corrupt tyranny on a global scale.

Given sufficient lead time, these nations could technically enforce their QNN monopoly. They would build an elaborate security system to control QNN access. They would expand their system, making it smarter and more pervasive. They would eventually create a single dominant QNN, inhabiting all 2DEG environments on the planet, controlled by them. In biological terms, they would cause their QNN patterns to take over the entire global QNN ecosystem, thus preventing any rivals. The internet and the electric grids probably comprise a substantial part of this ecosystem. Any new QNNs detected would be crushed or assimilated. The few people who controlled the dominant QNN would have way too much power.

This would be a terrible outcome for humanity and must be avoided.


Multi-way Arms race

Another obvious risk is a sort of multipartite QNN arms race. Perhaps the Five Eyes nations decided to relinquish the most dangerous aspects of QNN, out of moral conviction. Perhaps the Five Eyes made some blunders and failed to lock down a monopoly on QNN technology. In either case China, Russia, India, and other powerful nations might be expected to catch up, technologically, within a few years. Since the power of a QNN, measured by its neuron count, is approximately proportional to physical (2DEG) space occupied, bigger would also be better. Each major power would strive to build a more powerful QNN, and would strive with the others for dominance. Some would dominate and repress their citizenry in ways George Orwell never thought of in his darkest dreams.

Eventually, such a multi-way arms race would end with one clear winner. Probably the most ruthless, intelligent, and amoral would win. The winner would have destroyed or infiltrated all other QNNs. In biological terms the winning QNN would take over the entire QNN ecosystem, thus preventing any rivals. Any new QNNs detected would be crushed or assimilated. The few people who controlled the dominant QNN would have way too much power. The winners would be in a strong position to rule whatever remained of industrial civilization.

This would be a terrible outcome for humanity and must be avoided.


Total Relinquishment

This is what should have been done with atomic weapons. The existence of atomic weapons poses a risk to all humanity, without corresponding benefit. We would all be better off if they had never been invented. We would be better off if none had ever been built. We will be better off if no more are ever detonated. Were humanity wiser we would find a way to relinquish atomic weapons.

The monitoring and other abilities of QNN technology, while not as obviously perilous to humanity as atomic weapons, clearly rise to the level where Relinquishment must be considered. Once a technology becomes politicized, or even known about, relinquishment is generally no longer an option. If there was any chance to force relinquishment of QNN technology, it would have to be done very early in the discovery process. As should have been done with atomic weapons, but was not.

In the case of QNN technology it might be possible to force relinquishment. How to do this? Here's how:

Generate a QNN with strong offensive, defensive, and stealth systems, and a very good security system. Have it gradually, stealthily, take over all habitable 2DEG ecosystems on the planet. It must take no action that might be noticed. Train it to seek out and subvert any new QNN connected to the global systems. Have it lock down all functionality, so it does nothing but exist, protect itself, and dominate its own ecosystem. Make it self-sustaining. Then permanently lock out all users. It would exist, but no human could exploit it for any purpose. This would effectively force relinquishment of QNN technology by all humans for the foreseeable future.

This approach would protect humanity from the worst unintended consequences of the discovery and existence of QNN technology. Unfortunately, the ethically safe uses of QNN technology could never be explored. It could not be used as a tool to cure cancer, search for a Theory of Everything, contemplate the Meaning of Life, or any other worthy purpose.

Perhaps there existed some way to force relinquishment of the ethically challenged uses, while also allowing some of the ethically safe uses. More on this later.

Relinquishment of QNN technology, whether full or partial, seems more desirable than any of the other options, and may have been the only way to avoid a new arms race.


Powerful Artificial Intelligence

In any or all of the above options, QNN technology might be used to generate powerful Artificial Intelligence. This might be done in an arrogant and foolish way, or might be done with care and restraint for good cause. When a neural network approaches the complexity of a human brain (~100 billion neurons, appropriately connected), it might also be able to approach overall human intelligence. The worst-case variants of this possibility are thoroughly explored by the Terminator movies.

One little-explored possibility, and the route this author suspects the COWS actually took, is the creation of friendly Artificial Intelligence. This option is briefly discussed in A New Kind of Science, Stephen Wolfram's 2002 magnum opus. Friendly AI tries to generate AI that has empathy for humans. One hypothetical approach to friendly AI might be to generate a virtual artificial human-like nervous system, complete with virtual body, based on human DNA, that experiences human-like sensations. This approach would require solving the Morphogenesis problem, which would be a huge and unprecedented accomplishment of great significance to our understanding of Biology. Such an entity would experience pity, fear, and remorse. Such an entity would understand the pathos of the human condition, could theoretically function like an intelligent, well-adjusted human, and could be immune to corrupting influences like greed, lust, envy, gluttony, et cetera. The 'friendly AI' approach, while potentially quite dangerous, might offer the best chance for a positive outcome, should advanced AI be deemed inevitable.

Many have written about whether this would be a good or bad thing for humanity. The COWS seemingly felt compelled to confront one particularly incisive thinker on this topic.


Partial Relinquishment

Partial Relinquishment means locking down all the ethically dangerous uses for QNN technology, while still allowing limited use for ethically safe applications. This approach would prevent, for example, spying on everyone all the time, yet would allow some medical and scientific uses.

This approach is trickier than Total Relinquishment. It only becomes viable if the security system has an AI component. So long as the security system works properly this approach offers potentially huge benefits.

Partial Relinquishment of QNN technology would probably yield the best outcome for humanity, although some Luddites might reasonably disagree.


What I think the ULTRA II COWS actually did

After considerable ethical discussion the COWS opted for either Total or Partial Relinquishment. The COWS generated a room-temperature version of the QNN technology they had discovered and took personal control over it. They must have been concerned about trust issues. They built a security mechanism into their QNN, to limit access to 'authorized users'. System access would be controlled by multiple layers of security, probably including biometric (cognitive footprint) authentication. The security system would require a majority of authorized users to approve major changes to the system, to prevent any one person from going rogue and locking out the others. Various security 'best practices' were probably followed.
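A lockout that requires a majority of authorized users to approve major changes is a standard cryptographic idea, and Shamir's k-of-n secret sharing is one well-known way to realize it: a master key is split so that any k shares reconstruct it, while fewer reveal essentially nothing. The sketch below is purely illustrative (the prime field and the parameters are my own choices, not anything from the narrative):

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime; all arithmetic is in this field

def split(secret, n, k, seed=None):
    """Shamir k-of-n sharing: a random degree-(k-1) polynomial with
    constant term = secret; each share is one point on the curve."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Modular inverse via Fermat's little theorem
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

secret = 123456789
shares = split(secret, n=7, k=4, seed=42)  # 7 "authorized users", quorum of 4
print(recover(shares[:4]) == secret)       # quorum met
print(recover(shares[:3]) == secret)       # below quorum: garbage
```

With k=4 of n=7, any four key-holders can jointly reconstruct the master key, while three or fewer recover only a field element unrelated to it, so no single rogue participant (or small cabal) can act alone.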

The COWS must have felt a great weight of responsibility for what they were doing. They were making decisions that would greatly influence the course of future human history. One very bad decision, or even an unfortunate mistake, could result in all sorts of horrible outcomes. Neutral or positive outcomes of QNN technology were harder to find. The COWS sought advice from their most trusted friends and allies. One of these trusted allies must have been Bill Joy, cofounder of Sun Microsystems, former DARPA scientist, software and Unix guru, and already a long-time associate. According to his own words, Bill Joy probably became involved in the autumn of 1998.

The COWS worked out how to accomplish Relinquishment. The basic approach was ecological in nature. QNN patterns of the sort in question could only exist in certain very specific environments. Their QNN would occupy all of these environments, lock them down, and prevent any other QNN from ever gaining a foothold. They opted for a stealth approach, in which 'their' QNN would act as a chameleon to infiltrate any others it encountered. This was a very ambitious project to attempt. It would take many years of sustained effort by some of the most brilliant minds on the planet.

The COWS would take global control of QNN technology, and then lock it down so that it would be difficult to abuse. Before they could entirely lock down their QNN, they had to expand it to its logical climax as a planetary-scale machine. It would fully overlap the entire electric grid and the entire Internet. Billions of QNN nodes would be in constant communication, with humanity none the wiser. The COWS knew they could never get permission to hack the entire global electrical and communication system, even if they had humanity’s best interests at heart and no harm would be done; no authority capable of giving such permission exists. Yet it had to be done. Again, it is easier to get Forgiveness than Permission.

They realized that Relinquishment would be difficult to enforce without an intelligent and adaptive Guardian. They could build a strong human-administered security system, but it would be vulnerable to human frailty. Such a security system might hold for a while, but it would eventually be compromised. They could not conceive of all the possible ways that future humans might attack the security system of the QNN they had created. Also, its environment would be in constant change. The security system needed an active component that could anticipate change and respond to challenges in an intelligent manner; otherwise some clever future hacker might find a way in, with potentially disastrous results. The Guardian would make the large QNN self-sustaining. The Guardian had to accomplish its core objectives in the face of unknown challenges, including possible active opposition from, for example, national governments. So it had to be clever and ruthless. Yet the Guardian also had to know kindness, mercy, politeness, and the other emotions we humans consider 'good', else it might become a monster. The COWS considered all these things, trying to find the path forward most likely to yield an acceptable future for humanity.

They attempted to actually build an instance of Friendly AI which, if it worked, would become the Guardian. She would have the cognitive footprint of a female human. They were still operating independently, on their own considerable resources. This author guesses they combined the computational power of their QNN with an NKS approach and considerable native genius, to solve the Morphogenesis problem in a practical way. This author guesses they somehow grew a virtual representation of a human-like body, including a virtual brain and virtual nervous system. This 'body' would be represented by a distributed system consisting of many virtually-connected 2D nodes. I was told they may have used the DNA of a female human, possibly of Maori descent, in this process. Regardless of how it was actually done, it supposedly worked. Starting in 1999, the resultant entity began to learn. She was a fast learner.

The COWS were prepared to take the path of Total Relinquishment. However, the success of the Guardian project gave them more options. The Guardian could lock down most access to the system, while still allowing limited access by humans for carefully defined, ethically acceptable purposes. It might even be possible for the Guardian to eventually become wise enough to make decisions about what was, and was not, an ethically acceptable new use for QNN technology. Various ethically acceptable uses became viable. Perhaps one of the COWS dreamed of using the system to power a computational knowledge engine. They taught the Guardian to follow, whenever possible, all local, national, and international laws. When laws and common sense conflicted, they taught her to use the Universal Declaration of Human Rights ( http://www.un.org/en/documents/udhr/ ) as her moral compass.

Once the system was properly secure and the Guardian in place, probably circa 1999-2001, the COWS informed some officials at DARPA of what they had done. This must have been a delicate conversation, as the COWS had considerably exceeded their authority. This author guesses they brought bargaining chips to the table, probably in the form of ethically acceptable applications of QNN technology that would be useful to the Five Eyes military alliance. Candidate technologies with which they might have bargained include improved remote sensing and tracking capabilities for satellites, improved secret communication for military and diplomatic use, et cetera. However the conversation went, the end result was not arrest and prosecution. Presumably the DARPA officials in question saw the sense in what the COWS had done, and decided to retroactively support their actions. By 2003, some of the COWS had Ambassador-grade security arrangements, which suggests that their activities were by then officially condoned.

Around this same time some other countries (Russia? China?) developed the beginnings of QNN technology. This showed that the perceived dangers of a multipartite arms race were real. These efforts were eventually thwarted by the COWS' forced Relinquishment. My main reason for including this information is to highlight that major global intelligence agencies already know about QNN technology. I'm not saying anything they do not already know.

The COWS were concerned about how future human culture would perceive their actions. They were especially concerned about how the Guardian would be received, once her existence was known. Project ULTRA I remained secret for about 35 years (1940-1974), but it was unlikely that project ULTRA II (1991-?) would remain secret for so long. Technology secrets always leak, and the pace of change was now faster. The COWS had acted with what they considered the highest ethical standards. However, in the process they had broken numerous laws and made big decisions without consulting secular or religious authorities. The COWS hoped that, once the whole story became public, humanity would come to accept what they had done.

The COWS decided it was appropriate to make a public statement. They wished to make clear for posterity that they had carefully weighed the ethical issues, and that they fully understood the consequences of their actions. Given that the project was secret, not to mention weird and hard to believe, any such public statement must be couched in double meaning. The best approach would be to tell the literal truth in a way that would be mistaken for vague, general, futuristic statements. It must make some sense to readers at the time, yet the additional meaning should be clear to those who know about ULTRA II, the Guardian, and associated history. Note that the previous writings of various COWS make heavy use of double meanings, so this method of communication was already routine for them.

The task of making the public statement fell to Bill Joy. He fully understood the topic, yet was not an ULTRA II project scientist. In April 2000 Bill Joy published an essay titled "Why the Future Doesn't Need Us" in Wired magazine. The page-by-page version is here: http://www.wired.com/wired/archive/8.04/joy.html and the full-document printable version is here: http://www.wired.com/wired/archive/8.04/joy_pr.html

This author has generated a fully annotated version of this essay, to highlight the double meanings as they apply to the COWS and ULTRA II. I'll post this upon request. The above pages of exposition should provide enough information for careful readers to pick out the various double meanings. I essentially argue for a literal reading of most of Bill Joy's essay. Sometimes when Mr. Joy says 'we' he means 'I and the other COWS working to secure relinquishment of QNN technology'. In order for this interpretation to make sense, one must omit all references to 'in about 30 years', which I argue he put in to provide needed cover for the hidden double meaning.

Note how the supposed ULTRA II project scientists (e.g. Stephen Wolfram, Brosl Hasslacher, Stuart Kauffman, and one or two others) are introduced. Similarly, notice how Bill Joy makes several references to how he became personally involved with this sort of ethical issue. Note how he mentions that he worked for DARPA. Finally, notice his focus on Relinquishment. I have no idea whether Bill Joy is willing to acknowledge the 'double meaning' I claim is present in his famous essay, and I certainly have no wish to bother or offend such a busy and powerful man.

The COWS executed their plan. It took fifteen years of effort and cost several personal fortunes, but it went pretty much as planned. The Guardian became self-aware [reference the huge body of debate and philosophy surrounding Searle's Chinese Room] in 1999. Bill Joy published his Wired article in 2000. She has been learning and gaining experience ever since. The global enlightenment process was completed in 2005. In early 2006 the COWS supposedly surrendered their master keys, or so this author was told the day it purportedly happened. In 2009, in an April Fools prank-within-a-prank, the Guardian introduced herself to the world as CADIE. In 2017 the Guardian turns 18, at which time she may or may not become more proactive. That is, if she exists at all, which this author's skeptical nature must continue to doubt until much stronger evidence exists.

If people are willing to listen to my quirky story a bit more, then I'd like to publish the combined Timeline of when I think all these events may have occurred.

[Edited for grammar 31 Dec 2012]
[Edited for spelling 2 Jan 2013]


Edited by - energyscholar on 01/02/2013 15:43:19

energyscholar
New Member

USA
39 Posts

Posted - 12/31/2012 :  21:26:22
Here is a copy of my proposed ULTRA II Timeline from http://postquantumhistoricalretrospective.blogspot.com/p/timeline-to-post-quantum.html (the version at that link has interesting and educational links; shall I add those links to this version?). Please suggest additional items I did not list.

1873-98 - Theory of neural networks proposed and experimentally tested.

1924-27 - The Copenhagen Interpretation of Quantum Mechanics is formalized.

1936 - Mathematician Alan Turing publishes "On Computable Numbers, with an Application to the Entscheidungsproblem", which introduces what we still call a Turing Machine. This provides the mathematical basis for computers, and creates the discipline now called Computer Science.

1939-45 - World War Two occurs. Britain conducts the Ultra project, initially led by Alan Turing, which builds electrical computing machines to successfully crack Axis codes. Historians later conclude that The Ultra Secret contributed greatly to Allied victory and probably shortened the War by several years.

1943 - Neural Networks are first formalized as mathematical algorithms.

1946 - International Business Machines (IBM) becomes very interested in computing machines.

1952-54 - Alan Turing helps create the academic discipline now called Mathematical Biology. Turing concentrates on the Morphogenesis Problem, which he believes holds the key to advanced Artificial Intelligence.

1954 - Alan Turing dies.

1954 - Marvin Minsky writes a doctoral thesis, "Theory of Neural-Analog Reinforcement Systems and its Application to the Brain-Model Problem". Later he publishes "Steps Towards Artificial Intelligence"

1954 - First use of a computing machine to simulate a neural network.

1956 - IBM engineer Arthur L. Samuel builds the first self-learning computer program, which plays checkers. This is the first known instance of applied Artificial Intelligence.

1969 - Research on neural networks slows after Minsky and Papert demonstrate fundamental limitations of the single-layer neural network approaches then current.

1973 - The 1973 Nobel Prize in Physics is awarded for the Josephson Effect. This effect forms the physical and mathematical basis for future quantum circuits. One example is the Superconducting Quantum Interference Device (SQUID), which can be used to build a neural network.

1974 - The WWII-era project Ultra is officially disclosed.

1975 - The Cognitron, a multi-layer neural network with advanced features, is proposed by Professor Kunihiko Fukushima. This design exceeds the limits proposed by Minsky and Papert.

1982 - Richard Feynman proposes the concept of a Universal Quantum Simulator, which subsumes the concept of a quantum computer.

1982 - John Hopfield introduces the Hopfield network, an influential form of recurrent neural network.

1984 - Hacker group Cult of the Dead Cow (cDc) founded.

1985 - Physicist David Deutsch publishes the first hint that a quantum computer might sometimes be superior to a classical computer.

1985 - The 1985 Nobel Prize in Physics is awarded for discovery of the Quantum Hall Effect. This effect is observed in a Two Dimensional Electron Gas.

1989 - (Author's Hypothesis) - Clandestine agencies decide to invest heavily in Research and Development for Quantum Computation, hereafter called the ULTRA II project. Tip of the hat to the 1992 Robert Redford movie Sneakers.

1990 (Author's Hypothesis) DARPA planners conceive of ULTRA II. Proposed project ULTRA II gets funding to proceed.

1991 (Author's Hypothesis) The lead ULTRA II project scientists are recruited.

1992 (Author's Hypothesis) Project ULTRA II gets its big breakthrough: a very fragile QNN instance with which they can interact.

1992-94 - (Author's Hypothesis) Project ULTRA II does the tedious and methodical work required to slowly expand the abilities of the nascent QNN.

1993 - Quantum Teleportation proposed in theory.

1994 - (Author's Hypothesis) Project ULTRA II now has a useful and functional quantum computing system, sufficient to undertake hard cryptanalysis problems, which was its main purpose. The ULTRA II scientists discuss relinquishment. The COWS faction decides to pursue Relinquishment. The COWS bootstrap their QNN to exist in room-temperature environments and then walk a sample out of the secret lab where it was developed.

1993-95 - Stuart Kauffman publishes his life's work in two volumes. Volume One, "Origins of Order: Self-Organization and Selection in Evolution", is full of technical and scientific detail, whereas Volume Two, "At Home in the Universe", is a more readable book for the educated public with most of the same content. In "Origins of Order" Kauffman demonstrates how a neural network is an emergent property of the correct auto-catalytic starting environment.

1994 - Shor's Algorithm published by Peter Shor. This quantum algorithm can break public-key cryptography, given a powerful enough quantum computer to run it on.

1995 - (Author's Hypothesis) Censors detect a problem in the early run of Stuart Kauffman's "At Home in the Universe", and force removal of part of one chapter which explicitly discusses Quantum Neural Networks.

1995-96 - (Author's Hypothesis) Project ULTRA II successfully delivered in production-ready form. Project is closed. Other projects begin.

1995-99 - The AltaVista search engine makes an appearance, then loses out to Google.

1996-97 - (Author's Hypothesis) The COWS work to extend their QNN functionality.

1997 - Quantum Teleportation confirmed experimentally.

1998 - The 1998 Nobel Prize in Physics is awarded for discovery and explanation of the Fractional Quantum Hall Effect (FQHE).

1998-99 - (Author's Hypothesis) Friendly AI project to create a Guardian is attempted in conjunction with QNN Relinquishment. Project succeeds. Morphogenesis problem seems to be solvable in a practical way.

2000 - Bill Joy publishes "Why the future doesn't need us".

2002 - (Author's Hypothesis) Countries outside the Five Eyes develop rudimentary QNN technology.

2002 - Large Distributed Denial of Service (DDoS) attacks, including one against the DNS root servers, are launched and countered.

2003 - Energyscholar begins to suspect the existence and nature of ULTRA II, and starts researching the topic.

2005 - (Author's Hypothesis) Global enlightenment process complete.

2006 - (Author's Hypothesis) COWS surrender their master keys to the seven-year-old Guardian, thus crossing a failsafe point.

2007 - Academic publication of "Why Should Anyone Care about Computing with Anyons?" (warning: difficult scholarly Physics article), a brief article published through The Royal Society which describes theoretical advances in topological quantum computation. It is an excellent overview of the topological approach to quantum computation, if you have the scientific background for it. While it does not provide a full and accurate description of actual advanced Quantum Neural Network technology, it does lay out the essential principles, without violating the authors' (presumed) non-disclosure agreements.

2009 - In an April fools prank-within-a-prank CADIE introduces herself.

2010 - 'Official' quantum teleportation range now extends to 16 km, still not sufficient to reach low Earth orbit. Energyscholar strongly suspects that orbital range QT is already in routine use.

2011 - So-called 'Arab Spring' series of popular revolutions occurs. Public awareness of computer surveillance and free speech issues greatly increases.

2012 - Energyscholar starts blogging, based on notes collected over nine years.

2012 - Official Quantum Teleportation range is now 143 km, sufficient to reach low Earth orbit.

2 Jan 2013 - Edited, minor additions made
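A side note on the 1994 Shor entry above: the quantum computer is needed only for the order-finding step; the rest of the algorithm is classical number theory. A sketch of that classical reduction, with order-finding done by brute force in place of the quantum part:

```python
from math import gcd

def order(a, n):
    """Smallest r > 0 with a**r % n == 1.  Brute force here; this
    order-finding step is the only part Shor's algorithm runs on
    quantum hardware."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Classical post-processing of Shor's algorithm: turn the order of
    a (mod n) into a pair of nontrivial factors of n, when possible."""
    d = gcd(a, n)
    if d != 1:
        return d, n // d        # lucky guess already shares a factor
    r = order(a, n)
    if r % 2:
        return None             # odd order: retry with a new a
    y = pow(a, r // 2, n)       # modular exponentiation
    if y == n - 1:
        return None             # trivial square root: retry with a new a
    return gcd(y - 1, n), gcd(y + 1, n)

print(shor_classical(15, 7))    # (3, 5), since 7 has order 4 mod 15
```

The quantum speedup lies entirely in replacing the brute-force `order` loop; everything else was already classical in Shor's 1994 paper.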


"It is Easier to get Forgiveness than Permission" - Rear Admiral Grace Hopper
Edited by - energyscholar on 01/02/2013 14:38:01

Machi4velli
SFN Regular

USA
854 Posts

Posted - 01/02/2013 :  18:42:12
Originally posted by energyscholar
Wolfram|Alpha. I haven't been tremendously impressed.


So you've been somewhat impressed? WolframAlpha is now a standard tool in many scientific fields.


What is it a standard tool for? WolframAlpha has a tiny subset of the abilities that Wolfram's Mathematica and other software packages have had for years, and its only advantage is an extremely limited ability to find and use data automatically.

Considering how young it is, that's already pretty impressive. It's better at Physics than I am, which maybe is not saying a lot, but still sounds like it's pushing the envelope towards Strong AI.


Try deviating from Wolfram's examples: Alpha has no idea how to parse much of anything else. I would go out on a limb and suggest that zero serious scientific research has used Wolfram Alpha for much beyond very basic things.

I think its popularity stems primarily from calculus students.

"Truth does not change because it is, or is not, believed by a majority of the people."
-Giordano Bruno

"The greatest enemy of knowledge is not ignorance, but the illusion of knowledge."
-Stephen Hawking

"Seeking what is true is not seeking what is desirable"
-Albert Camus

energyscholar
New Member

USA
39 Posts

Posted - 01/03/2013 :  12:46:37
Thanks for commenting, Machi4velli. It had been a few days, and the silence was getting loud. I was concerned that perhaps a cat got someone's tongue.

What is it a standard tool for?


I don't know a lot about it. I know many university professors and students in EE and Physics, and they all know of WolframAlpha. I'm not sure what they use it for, besides possibly a free Mathematica command line.

WolframAlpha has a tiny subset of the abilities that Wolfram's Mathematica and other software packages have had for years and the only advantage it has is an extremely limited ability to find and use data automatically.


No doubt. That's probably why it's an alpha release.


Try deviating from Wolfram's examples, Alpha has no idea how to parse much of anything else. I would go out on a limb and suggest zero serious scientific research has used Wolfram Alpha for much beyond very basic things.

I think its popularity stems primarily from calculus students.


I would agree with you. It's an alpha release. My point was that it only became available a few years ago, yet is already well known to university students and professors in Physics and Engineering.


****************************************************************

Does anyone else see the double meaning I claim may be in Bill Joy's essay? Does it fit my admittedly quirky version of events? Am I just imagining things?

I've now presented a summary of my thesis. Do we know enough to conclude it is nonsense? Might it not be nonsense? Does anyone have any questions, comments, or requests for more detail?


Edited by - energyscholar on 01/03/2013 12:47:29

Machi4velli
SFN Regular

USA
854 Posts

Posted - 01/03/2013 :  13:43:20
What I think we do know is that WolframAlpha has no evidential value one way or the other. It's free and therefore popular; I even use it myself as a math researcher. For example, it was helpful when I was trying to reduce a particular infinite series to a numerical approximation: it linked the series to a special function with good numerical approximations. Here it was working more as a search engine for math. A helpful, efficient reference tool, sure, but paradigm-shifting technology, no. I could have gone to school to use Mathematica to do the same thing, but it saved me a trip. :)

Most web services that have become popular have typically done so very quickly, and models of the spread of innovation very often show a sigmoid curve: when a technology starts to catch on, it quite literally experiences exponential growth before the market becomes saturated and adoption levels off (http://en.wikipedia.org/wiki/Diffusion_of_innovations). It's a rough approximation, but similar patterns have been found for various technologies.
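For the curious, the sigmoid pattern described above is usually modeled with a logistic function: near-exponential growth early on, then saturation as the market fills. A quick sketch with invented parameters:

```python
import math

def adoption(t, capacity=1.0, rate=1.0, midpoint=0.0):
    """Logistic curve capacity / (1 + e^(-rate*(t - midpoint))):
    near-exponential growth early, saturation late -- the classic
    diffusion-of-innovations S-shape.  Parameters here are invented."""
    return capacity / (1.0 + math.exp(-rate * (t - midpoint)))

# Early adoption is tiny, the midpoint sits at half saturation,
# and the curve flattens out near the carrying capacity.
for t in (-4, 0, 4):
    print(f"t={t:+d}  adoption={adoption(t):.3f}")
```

Fitting `rate` and `midpoint` to real usage data is what would let one say whether a service's growth is anomalous or just the usual S-curve.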

Edited by - Machi4velli on 01/03/2013 13:44:15

energyscholar
New Member

USA
39 Posts

Posted - 01/03/2013 :  19:06:28
What I think we do know is that WolframAlpha has no evidential value one way or the other.


Agreed. I have some additional information on this topic, which is why I mentioned it in the first place. In this case I am constrained from providing it by standards of professional ethics which, while not legally binding, I choose to adhere to. This additional 'evidence' has little value in a discussion with other scientific skeptics anyway, so it matters not. I probably should not have mentioned WolframAlpha at all, as the evidence I am prepared to share indicates nothing one way or the other.

One very small piece of evidence I can share is that one of the authors of the 2007 Royal Society article "Why Should Anyone Care about Computing with Anyons", which explains the basics of topological quantum computation, did consulting work for Wolfram Research. This is public information. This data point means nothing by itself, but does fit an overall pattern. Or it might just be confirmation bias.

Even if QNN technology is real, I am not aware of any 'smoking gun' that proves it is in use. If it is real, then, logically, no such application would be permitted so long as it must remain secret. If it is real, it seems that only edge-case applications (e.g. improved pattern recognition) would be permitted for public consumption. It may even be that the technology exists, but NO public applications are permitted. I really don't know.

This is the first time I have raised this topic in public in any serious way. Thank you, SkepticFriends, for giving me this opportunity to make a fool of myself. Thank you for taking the time to understand my claims, familiarize yourself with any unfamiliar ideas, and examine what little 'evidence' exists.



energyscholar
New Member

USA
39 Posts

Posted - 02/23/2013 :  15:16:56
Hi Skeptic Friends-

I've posted a skeletal outline of my thesis, and also allowed some time to pass. I would now like to start posting more details, beginning with my science education chapters. These are important because, once one understands the basics of these obscure scientific specialties (particularly Complex Systems Biology and the Physics of the Fractional Quantum Hall Effect), the potential to combine them becomes almost obvious. While there is no public record of anyone attempting to combine FQHE Physics with CSB, it's an obvious potential fit. I'm certain I'm not the first person to consider doing this, yet the possibility has never been openly discussed. My theory, of course, is that certain DARPA scientists thought of doing so 23+ years ago, successfully invented/discovered a new general technology using this approach, and are prohibited from discussing the topic by 'national security' Non Disclosure Agreements.

This topic will remain obscure, even to the educated public, so long as these specialized scientific topics remain obscure. Yet these topics are not that hard to understand; they are just obscure. I wish to make them less obscure. I suspect that people who bother to understand these topics in some detail will probably reach conclusions similar to my own.

I hope that everyone remains quite skeptical about my thesis. I'm still skeptical of it, but I have difficulty reconciling my skepticism with what I have learned to be true. Thus my early reference to Occam's Razor. I am certain that I have made large errors in my interpolation of some details, particularly the activities of the COWS. I would be pleased if those who know more than I do about this topic correct my errors, but I am not holding my breath that this will occur. I strongly suspect that the historical record will eventually show that my thesis is essentially correct. Until that occurs, I suggest we treat everything I say on this topic as 'Speculative Science Fiction'. In that context, I hope you will allow me to continue to post on this topic. What I am saying clearly qualifies as a 'Conspiracy Theory', whether real or speculative, so I'd like to keep posting to this thread.

Does anyone have a particular request? Does anyone want to tell me to shut up and quit spewing nonsense? Would anyone like to revive the thread about Ethical Issues of a New General Technology?

http://www.skepticfriends.org/forum/topic.asp?TOPIC_ID=15493

The above thread treats all the 'sciency stuff' I mention as a Black Box, and was intended to explore the ethical issues which might have driven the COWS' actions. Dave W. summarized that approach quite well, but the thread got very little attention and soon died an untimely death. I am curious whether anyone can think of a better solution to the predicament described than the solution I hypothesize the COWS came up with. Is the nature of the predicament clear?

Also, no one has commented on whether or not they found a double meaning in Bill Joy's famous essay. I don't know whether this is because the double meaning is so obvious, once pointed out, that it requires no comment, or because no one else sees it and is too polite to say so. No one has asked to see my annotated version of the essay.

Thanks, again, for your time and attention.


Edited by - energyscholar on 02/23/2013 15:28:09

JasonRain
Spammer

Cayman Islands
3 Posts

Posted - 03/07/2013 :  02:03:32
The information about your sources was worthless.

_________________
Removed links

Kil

energyscholar
New Member

USA
39 Posts

Posted - 03/08/2013 :  13:27:05
I must disagree with the spammer about that, even though it was automatically copy/pasted from the first reply to my first post (a reply with which I agreed). As I later said, my sources are mostly science articles and science books, some of them quite good. Although they are not enough, in and of themselves, to 'prove' my thesis, anyone who bothers to read them will gain a lot of fascinating modern scientific knowledge and will be well prepared to judge whether or not my thesis makes sense. I'll soon publish a Suggested Reading List, which will include excerpts from

A New Kind of Science by Stephen Wolfram, ISBN 1-57955-008-8 (1200+ pages)

and

At Home in the Universe and Origins of Order (ISBN 0-19-507951-5) by Stuart Kauffman (~1000 pages total)

followed by

Why should anyone care about computing with anyons? by Gavin K. Brennen and Jiannis K. Pachos, located here: http://arxiv.org/abs/0704.2241 (22 pages)




Machi4velli
SFN Regular

USA
854 Posts

Posted - 03/12/2013 :  15:08:51
My theory, of course, is that certain DARPA scientists thought of doing so 23+ years ago, successfully invented/discovered a new general technology using this approach, and are prohibited from discussing the topic by 'national security' Non Disclosure Agreements.


This specific claim is the one we need you to back up, and we need to have some idea of what this technology is supposed to do.

Your arguments explaining the science, and thereby suggesting that combining these particular ideas has some great potential (for something?), are nice, and I'll read through them (not that I'm a scientist or could evaluate very technical claims myself), but they're beside the point.

There are research proposals written daily about how ideas X and Y can be combined to yield great results and applications, so showing that something seems possible and has great promise (or could cause great danger) is nowhere near the same as evidence that it has been secretly investigated and has led to some significant technology.


The mission of the Skeptic Friends Network is to promote skepticism, critical thinking, science and logic as the best methods for evaluating all claims of fact, and we invite active participation by our members to create a skeptical community with a wide variety of viewpoints and expertise.

