Artificial Intelligence: When Humans Transcend Biology
Moderator: Available
- -Metablade-
- Posts: 1195
- Joined: Fri Nov 04, 2005 4:54 pm
Ray Kurzweil: Robots won't be convincing as human equivalents until computers can pass the Turing test, which I've consistently pegged at 2029. At that point we will have completed the reverse engineering of the several hundred regions that comprise the human brain, and we'll have the hardware to implement these principles of operation of human intelligence.
http://tinyurl.com/ozg8f
There's a bit of Metablade in all of us.
The "Turing test", for those who may have missed it, is a hypothetical blind conversation, by text (!) with an entity which may be computer, or human.
If a human cannot distinguish whether he/she is conversing (via text) with a human or a computer, the hidden respondent passes the "Turing test", and, if it is artificial, it is deemed to be the equivalent of human.
This moronic and short-sighted "test" is a product of tube-technology thinking: the same kind of brainalyzing that gave us wing-flappers to emulate flight, and caused the President of IBM to declare that the world should never need "more than three computers."
Get real.
We've invented these primitive machines called "computers" which we've convinced ourselves work "just like the brain" when in fact we have little clue how the brain actually DOES work.
We've created an artificial EMULATION of our own thought process and declared it equivalent...it is not.
The Turing Test is Readers' Digest material.
This crap, along with the likes of "Scientific American" magazine, is poisoning real Science.
While purporting to be "visionary" and "definitive", it is in fact the product of limited contemporary speculation based on the popular pop-science flavor of the day.
The sooner we lose it, the better: some day we'll look back and laugh.
NM
The music spoke to me. I felt compelled to answer.
- JimHawkins
- Posts: 2101
- Joined: Sun Nov 07, 2004 12:21 am
- Location: NYC
2Green wrote: If a human cannot distinguish whether he/she is conversing (via text) with a human or a computer, the hidden respondent passes the "Turing test", and, if it is artificial, it is deemed to be the equivalent of human.

Not quite.. The Turing test is a proposal for a test of a machine's capability to perform human-like conversation.
That's a rather large difference from being "the equivalent of human."
I don't think any bot has passed the Turing test, and if not then I would say the test is holding its own 56-plus years later.. Described by Alan Turing in the 1950 paper "Computing Machinery and Intelligence", it proceeds as follows: a human judge engages in a natural language conversation with two other parties, one a human and the other a machine; if the judge cannot reliably tell which is which, then the machine is said to pass the test. It is assumed that both the human and the machine try to appear human. In order to keep the test setting simple and universal (to explicitly test the linguistic capability of the machine instead of its ability to render words into audio), the conversation is usually limited to a text-only channel such as a teletype machine as Turing suggested or, more recently, IRC or instant messaging.
It's a tough test to pass..
I have seen better, and there are many online if you want to look around, but this one is supposed to be pretty good.
You can talk to Alice the chat bot here:
http://www.pandorabots.com/pandora/talk ... d97e345aa1
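For a sense of how bots like Alice work under the hood: classic chatbots match your input against hand-written patterns and fill a reply template with the captured words. A minimal sketch of that idea in Python (the rules below are made up for illustration, not Alice's actual AIML set):

```python
import re

# Hand-written pattern -> reply-template rules (illustrative only; a real
# bot like Alice uses thousands of AIML categories instead of three rules).
RULES = [
    (r"\bmy name is (\w+)", "Nice to meet you, {0}."),
    (r"\bi feel (\w+)", "Why do you feel {0}?"),
    (r"\bare you (?:a )?(human|robot|computer)\b",
     "What makes you ask whether I am a {0}?"),
]
FALLBACK = "Tell me more."

def reply(text):
    """Return the first matching rule's filled-in template, else a stock line."""
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return FALLBACK
```

One probing question is usually enough to expose the trick: anything outside the rule list falls through to the same stock reply.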
=========================================
The real question IMO is if computer-generated systems can become intelligent, and of course you have to define what that means.. Intelligent systems have already shown their stuff, IBM's Deep Blue having beaten the world chess champion many years ago..
IMO it is only a matter of time, through modeling powerful biological systems like those found in neurology and genetic evolution, before systems designed to do so become intelligent and sentient..
The question that dictates whether such a thing will happen is whether it is possible for a machine or electronic system to become sentient--if it is possible, then it will happen, IMO. If and when this happens, we may well see an entirely new form of intelligent life emerge, the first created by humans.

And if so we may find ourselves in competition with it..

I am not for putting certain kinds of chips in our brains; that scares the hell out of me.. But I think this is something many will begin to advocate for seemingly innocuous reasons..
But I see it potentially going in very scary directions when other *forces* are factored in to the equation..

Shaolin
M Y V T K F
"Receive what comes, stay with what goes, upon loss of contact attack the line" – The Kuen Kuit
- Bill Glasheen
- Posts: 17299
- Joined: Thu Mar 11, 1999 6:01 am
- Location: Richmond, VA --- Louisville, KY
Human intelligence can at times be an oxymoron. I'm not sure why we would want to emulate the "intelligence" of Joe Sixpack.
As a designer of technology, I understand the principle of enhancement rather than replacement. We design algorithms so that underwriters can price business more intelligently, and NOT to replace the underwriter outright. Even with my best algorithms, I leave a spot for human intervention and interaction.
I once heard a theory from a biomedical engineering professor (in my UVa BME department) who considered technology part of our Darwinian evolution. It's an intriguing theory when you think about it. DNA is nothing more than information in a four-letter alphabet (two bits per base). Why not consider everything else we inherit from generation to generation?
Why would we dumb a new generation of computers down to human limitations? As I see it, we steal what we can from human brain function, and use it as we see fit in a way that enhances what we bring to the table. Yes, some jobs will go away. But was it a bad idea for the cotton gin to make human slavery irrelevant in The South? I think not. If we play our cards right, we use all of this to make our lives better.
And because we are human, we will continue to mess up in our quest.
- Bill
- JimHawkins
- Posts: 2101
- Joined: Sun Nov 07, 2004 12:21 am
- Location: NYC
Biological models like Genetic Algorithms and Neural Nets model human or life processes to get the job done.. But the question isn't if one can create "human intelligence," but rather if one can create a *machine intelligence* that may be used to augment our own human ability, or in particular a machine intelligence that is sentient. The implications are staggering.
Shaolin
M Y V T K F
"Receive what comes, stay with what goes, upon loss of contact attack the line" – The Kuen Kuit
Can "someday" computers become "intellegent?"
Who knows, there were probabaly few bets on some pint-sized furry ape that walked on its hind legs becoming the dominate species on the planet either.
But in terms of "intellegence" I think we get a bit sidetracked by the words.
Look at Deep Blue and its iterations.
A whole squad of really brillant programers, a raft of technitions and some top flight chess masters all get togather and cobble togather a machine that under a specifc set of rules can "sometimes" play the best chess player in the world to a draw--even beat them.
But what else can it do?
All that effort and money and time and people and they get the "rain man" of chess.
Things that you and I take for granted are, in the forseeable future, utterly beyond even the most sophistacted computers.
What will 200 years bring?
Who knows, wish I could be there to find out
cxt, someday you MUST publish a dictionary for all the fascinating new words you've introduced.
But your point is well taken.
All computers are programmed by man, and execute their lookups at electronic speed, which man cannot do.
Therefore they are a TOOL.
We (man) can add 2 + 2 perfectly well, but not 1000 times a second.
Likewise, I can grip a bolt, but not as firmly as pliers.
My point is that computers, which we have created, are merely extending our human capabilities, particularly in the temporal (time) arena: they are performing our own computations, at our command, but simply faster.
The concept that computers may someday be capable of independent thought is an extrapolation of the powers of SPEED and SIZE.
This extrapolation proposes that an artificial "neural net" of sufficient speed and especially size will become self-aware and able to somehow gain independent decision and action.
It's an old idea, even Heinlein proposed it.
However, there is no evidence that increasing the speed or size of a memory device leads to any independent action, and such programs as "Deep Blue" are simply high-speed lookup/comparison tables which render the most logical result BASED ON PROGRAMMING -- BY HUMANS.
These programs are not thinking.
They are replicating human-like behaviour by an entirely artificial means.
Real chess players do not play like Deep Blue. They plan, devise strategy, think sideways, propose challenges, even bring psychological pressure to the game.
The computer simply cross-matches all known moves (input by humans) and renders the most appropriate based on its programming.
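Worth noting how that cross-matching is organized: rather than one flat table of known moves, programs like Deep Blue search the tree of possible continuations at run time and score the outcomes. The same idea fits in a few lines for a toy game -- Nim, where players alternately take 1-3 stones and whoever takes the last stone wins (a sketch of game-tree search in general, not Deep Blue's actual algorithm):

```python
def best_move(stones):
    """Search the game tree of Nim (take 1-3 stones, last take wins)
    and return a winning take if one exists."""
    def wins(n):
        # A position is won for the player to move if some legal take
        # leaves the opponent in a lost position. n == 0 means the
        # previous player took the last stone, so the mover has lost.
        return any(not wins(n - take) for take in (1, 2, 3) if take <= n)
    for take in (1, 2, 3):
        if take <= stones and not wins(stones - take):
            return take
    return 1  # every move loses against perfect play; take the minimum
```

The machine never "plans" in the human sense; it exhaustively tries every continuation and keeps the one whose consequences score best.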
Imagination and conception in the visual realm are the unique gifts of organic beings, and consciousness, particularly SELF consciousness, is the highest expression of this.
Machines can emulate the behaviour arising from these attributes, but they do not possess the attributes.
As long as we are telling them what to do (programming them) they will never possess it; they will simply emulate it.
This is the enigma of the Turing question.
At what point does one concede that a machine has fooled a human?
I myself have been deceived by high-quality answering machines, thinking the person themselves had answered.
However, one question quickly resolved the reality.
At what point do you concede that a machine has achieved the replication of human response?
One question? Twenty? A thousand?
What if one more question revealed the unraveling logic that exposed the machine?
Much reading and thought is required to debate this question in a meaningful way -- there is much already out there if you're REALLY interested.
NM
The music spoke to me. I felt compelled to answer.
- JimHawkins
- Posts: 2101
- Joined: Sun Nov 07, 2004 12:21 am
- Location: NYC
This is an area of particular interest to me. I have been working with and reading about intelligent systems since around 1985, and I am working on an AI app now...
2Green wrote: All computers are programmed by man, and execute their lookups at electronic speed, which man cannot do.
Therefore they are a TOOL.
We (man) can add 2 + 2 perfectly well, but not 1000 times a second.

Actually the computational power of the human brain is many, many, many times that of even the fastest conventional computer. This is because the human brain uses something called parallel processing, where each neuron is in a sense a very small computer; they all work together to process much more than could a very fast digital computer that must execute commands one after the other with minimal parallel processing..
Recently new parallel processing computers have been made that very closely model how neurons work as well as how neurotransmitters work in the hopes of using more practical and realistic hardware to emulate brain functions.
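The unit those machines emulate is itself simple: a neuron-like element computes a weighted sum of its inputs and "fires" through a squashing function; the brain's power comes from billions of them running in parallel. A single artificial neuron sketched in Python (the weights here are hand-picked for illustration; real nets learn them):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs passed through a
    sigmoid 'firing' function, giving an output between 0 and 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Hand-picked weights that make the neuron act as an AND gate:
# it only 'fires' (output > 0.5) when both inputs are 1.
def and_gate(a, b):
    return neuron([a, b], weights=[10.0, 10.0], bias=-15.0) > 0.5
```

Wire enough of these together, with learned rather than hand-picked weights, and you have a neural net.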
2Green wrote: Likewise, I can grip a bolt, but not as firmly as pliers.
My point is that computers, which we have created, are merely extending our human capabilities

Comparing digital computers with pliers IMO is a bit of a stretch..
Digital computers ARE Turing machines, by definition. That means that they can be programmed to essentially do anything.. They are not limited to being one or even a hundred kinds of tools because they can be programmed to be almost any kind of tool.. From adding 2 + 2 to assembling your automobile at the plant, to generating original works of "art"..
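That claim can be made concrete: a digital computer can simulate any Turing machine transition table, which is why the same hardware can be "almost any kind of tool." A minimal simulator, with a made-up two-rule machine that flips every bit on its tape:

```python
def run(program, tape_str, state="start", pos=0, max_steps=10_000):
    """Simulate a Turing machine. `program` maps (state, symbol) to
    (symbol_to_write, head_move, next_state); state 'halt' stops it."""
    tape = dict(enumerate(tape_str))  # sparse tape; unwritten cells are blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, " ")
        write, move, state = program[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).rstrip()

# A made-up two-symbol machine: walk right, flipping 0 <-> 1, halt at a blank.
FLIP = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}
```

Change the table and the same simulator becomes a different machine; that interchangeability is the whole point of universality.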
2Green wrote: The concept that computers may someday be capable of independent thought is an extrapolation of the powers of SPEED and SIZE.
This extrapolation proposes that an artificial "neural net" of sufficient speed and especially size, will become self aware and able to somehow gain independent decision and action.

Not really.
An artificial neural net designed, for example, to detect different shapes and objects from aerial photos could be made as large as the planet and it still would do the same thing--detect shapes in photos.. In fact it would be "over-sized" for this task and might have trouble being tuned.
Simply adding neurons to a net means nothing without context. The key is what the system is designed to do or designed to become. Designing a simple pattern recognition system is a fairly simple thing and it alone cannot turn into the Terminator..
However, when you allow a system to EVOLVE into whatever it wants, and you make the system's prime objective simply to grow, learn, survive and adapt, then you are adding the elements that, based on modern evolutionary theory, can give rise to independent thought and finally sentience. And this is what the big boys are working on.
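The "evolve" idea has a concrete, if humble, form in genetic algorithms: keep a population of candidate solutions, score each with a fitness function, and breed the fittest with crossover and mutation. A toy run in Python that evolves bit-strings toward a target pattern (the target, population size, and rates are arbitrary illustrations -- nowhere near open-ended evolution):

```python
import random

def evolve(target, pop_size=40, mutation_rate=0.05, generations=500, seed=0):
    """Evolve random bit-strings toward `target` by selection,
    one-point crossover, and mutation; return the best string found."""
    rng = random.Random(seed)
    n = len(target)
    def fitness(s):
        # Fitness = number of positions matching the target.
        return sum(a == b for a, b in zip(s, target))
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:
            break  # perfect match found
        parents = pop[: pop_size // 2]  # the fittest half survives to breed
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            # Flip each bit with a small probability (mutation).
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)
```

Nothing here was told *how* to match the target; selection pressure alone pushes the population toward it, which is the kernel of the evolutionary argument.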
Humans are a mass of different neural net functions, we have nets for driving the car, for chopping down the tree, for doing Sanchin, for doing math, for doing many different things and many of these kinds of sub-systems can already be emulated by neural systems.
The key is to generate a master agent that is comparable to what we think of as our "thinking" and imaginative mind--I think it is the prefrontal cortex. This is where the master agent takes over and directs the use of all the simple sub-systems to adapt, to function independently in its environment..
So yes, when you program a system to perform a simple task, that is all it can do.. Things get interesting when, instead of designing a closed system that only performs a fixed set of tasks, you design a system that does whatever it wants and has a general directive like "learn" or "survive"; this is an open-ended design that allows the machine to evolve and, in a real sense, design itself.
As I said, IMO the only question is if an electronic system--when modeled correctly--can become sentient as we understand it; in other words, if no "soul" is needed, then yes, eventually a sentient system will emerge--simply because it can--just like us..
Last edited by JimHawkins on Thu Jun 22, 2006 5:24 am, edited 1 time in total.
Shaolin
M Y V T K F
"Receive what comes, stay with what goes, upon loss of contact attack the line" – The Kuen Kuit
- JimHawkins
- Posts: 2101
- Joined: Sun Nov 07, 2004 12:21 am
- Location: NYC
Stryke wrote: to generating original works of "art"..
talk about a philosophical can of worms there ....
that's a huge claim .
Much bigger than the Turing test .
Well it's a matter of perspective..
Several years ago an artist and AI computer programmer designed an application that would generate original art work based on the concepts of design that this artist favored and used when he created a painting.. I don't know what became of the project but according to the creator it was capable of creating original art work in the style of the creator..
Not too hard actually, compared to the Turing test, IMO, because to truly pass the Turing test the system needs to be self-aware.. or damn close to it.. And that is the "golden elephant."
Shaolin
M Y V T K F
"Receive what comes, stay with what goes, upon loss of contact attack the line" – The Kuen Kuit
- JimHawkins
- Posts: 2101
- Joined: Sun Nov 07, 2004 12:21 am
- Location: NYC
AAAhmed46 wrote: What about "animal" A.I.? Where would that fit in? You know, nano-bots or computer programs that have the intelligence and 'instinctive' behavior of an ant or a lizard?

Some folks at MIT, and others I would guess, have actually focused on these "smaller" examples of very limited intelligence in the hopes of having more success with their designs.. Supposedly the insect folks are having some luck. One lab and designer used a very sophisticated system with a very large parallel-processing neural computer to model a cat.. He was NOT successful..
The problem seems to be related to the nature of the nets' structure, as well as a severe limit on how many neurons we can emulate in a given system--we still don't know much about how cats, or we, think. But we have some clues and we are testing them.
A cat also may have too many neurons to emulate, but I would have to check the numbers.. Anyway, as one person said to me: if you can truly emulate a cat's intelligence, then a human's isn't far off

But yes Adam... The idea of starting small with simple animals is one that is in progress right now.. !
Shaolin
M Y V T K F
"Receive what comes, stay with what goes, upon loss of contact attack the line" – The Kuen Kuit
JimHawkins wrote: If you can truly emulate a cat's intelligence then a human's isn't far off

...and both still seem to be quite far off..
Awesome.
JimHawkins wrote: Well it's a matter of perspective..
Several years ago an artist and AI computer programmer designed an application that would generate original art work based on the concepts of design that this artist favored and used when he created a painting.. I don't know what became of the project but according to the creator it was capable of creating original art work in the style of the creator..

Art is inspiration; quite simply, can a machine be inspired and have an emotional response/revelation? Art transcends the divine and is the only evidence I have ever found of a greater force.

JimHawkins wrote: capable of creating original art work in the style of the creator..

How is that creative, or art?
Yes, most definitely a matter of perspective.
Cogito ergo sum: not mimicry, actual comprehension.