Artificial Intelligence: When Humans Transcend Biology

Bill's forum was the first! All subjects are welcome. Participation by all encouraged.


cxt
Posts: 1230
Joined: Wed Sep 10, 2003 5:29 pm

Post by cxt »

2Green

You have my most sincere "I'm sorry" for the poor quality of my spelling.

Just a verbal guy doing my best in textual format.

Sorry that I will not be publishing a "dictionary" anytime soon.
-Metablade-
Posts: 1195
Joined: Fri Nov 04, 2005 4:54 pm

This is where we are aesthetically with robots

Post by -Metablade- »

http://tinyurl.com/o5ug6

Absolutely amazing.
Imagine where we will be 20 years from now.
Get ready, I say.
You may have a robot spoon-feeding you when we are 100.
How about an Uechi bot?


:lol:
There's a bit of Metablade in all of us.
2Green
Posts: 1503
Joined: Thu Sep 23, 1999 6:01 am
Location: on the path.

Post by 2Green »

A couple of things that have caught my eye lately:

The first was a project, I think here in Canada, where researchers grew biological cells onto a silicon chip and got stimulation responses in both directions.
I thought that was significant.

The other is the "Millipede" project by IBM: it's a nano-mechanical memory chip. Very interesting because it's full-circle: back to electro-mechanical memory.
I found myself wondering how resistant it might be to EMP.
They say they can make it quite a bit smaller than the first production versions.

IBM has also announced the development of a chip 250 times faster than current production chips.
They don't say if it's the Millipede.

More digital prosthetics! We live in interesting times.

NM
The music spoke to me. I felt compelled to answer.
2Green
Posts: 1503
Joined: Thu Sep 23, 1999 6:01 am
Location: on the path.

Post by 2Green »

Here's a taste of IBM's current thinking:

" Laws tend to be broken from time to time, but Moore's Law is really taking a beating.

It refers, you'll recall, to the ability of engineers to make continual refinements in the manufacturing and design of computer chips, so that speed and capacity double every eighteen months or so.

Now IBM researchers have effectively doubled the memory capacity of computers - in one fell swoop. They have designed a new chip that manages the way data is stored in a computer's "fast" or random-access memory, doubling the amount of data it can hold without sacrificing speed. This improvement comes on top of whatever advances new chip-making processes may bring.

The Memory eXpansion Technology (MXT) chip will be used first in Intel-based servers such as IBM's Netfinity line, but eventually will find its way to personal computers and pervasive e-business devices.

Used in conjunction with conventional memory chips, the MXT chip makes sure that frequently used data and instructions are stored close to a computer's microprocessor, so they can be accessed quickly. Less frequently used data and instructions are then compressed, so they take up less space, and stored in the remaining memory.

The chip incorporates new compression algorithms which provide a parallel speedup over previous techniques. Contents of memory are stored and accessed using new data structures which waste little space and require no periodic reorganization.

Since memory typically accounts for 40 to 70 percent of the cost of a computer system, IBM's MXT technology will be able to save Internet service providers and others who use high-performance machines thousands or even millions of dollars. Customers can cut costs by purchasing half the memory to achieve the same performance, or they can increase performance by installing the same amount of memory to achieve twice the capacity.

"Adding memory is often the most effective way to improve system performance, but it's a costly proposition," said Mark Dean, IBM Fellow and Vice President of Systems Research. "IBM Memory eXpansion Technology is a game-changing development that improves system performance without adding costly physical memory."

A typical Windows 2000 or NT-server based rack-mounted computer system configuration can achieve its maximum memory capacity of 168 gigabytes with only 84 gigabytes installed. With the retail cost of server memory at several thousand dollars per gigabyte, a customer could double their memory capacity and cut their cost per gigabyte by half, saving about $250,000 per rack of servers. For a customer with a large IT installation -- such as an ISP with multiple racks of servers -- MXT could result in total savings of more than a million dollars.

IBM is exploring ways to incorporate MXT in its line of data-transaction and web-application servers, in addition to storage subsystems and other appliance servers. In the future, the technology could be adapted for desktop and laptop PCs, workstations and pervasive e-business devices, such as handheld computers, mobile phones and anywhere additional memory is needed to allow more information to be stored on smaller and smaller devices.

In a five-year technology sharing agreement with IBM, ServerWorks Corp. of Santa Clara, California, plans to incorporate MXT technology into its next-generation high-end core logic solutions. ServerWorks, a supplier of high-performance core logic for Intel-based servers, anticipates that it will first offer MXT in a product known by the code name "Pinnacle." The company has the right to sell products incorporating MXT technology to all its customers.

"Memory eXpansion Technology reduces hardware cost and boosts performance," noted Raju Vegesna, ServerWorks' president and CEO. "Designers of 1U and 2U rack-dense servers never have enough real estate for large memory configurations, so doubling the effectiveness of each byte of physical memory offers real advantages. Our ability to integrate IBM's advanced technology into industry-standard platforms makes Intel-based servers work better, and benefits everyone who uses, buys or sells systems like these."

MXT technology is only the latest instance of IBM breaking Moore's Law. IBM recently announced new technology that increases the capacity of hard disk drives, as well as silicon-on-insulator and copper technology to increase the performance of semiconductors."
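A quick sanity check on the article's rack arithmetic, as a minimal Python sketch. (The $3,000-per-gigabyte price is my assumption, picked from within the article's "several thousand dollars per gigabyte".)

```python
# Back-of-the-envelope check on the article's server-rack numbers.
# ASSUMPTION: $3,000 per gigabyte, one point inside the article's
# "several thousand dollars per gigabyte" range.
installed_gb = 84      # physical memory actually installed per rack
effective_gb = 168     # usable capacity after MXT's 2:1 compression
price_per_gb = 3_000   # assumed retail price in USD

avoided_gb = effective_gb - installed_gb
saving = avoided_gb * price_per_gb
print(f"Memory not purchased: {avoided_gb} GB")
print(f"Approximate saving per rack: ${saving:,}")  # $252,000, close to the article's ~$250,000
```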

...NM
The music spoke to me. I felt compelled to answer.
2Green
Posts: 1503
Joined: Thu Sep 23, 1999 6:01 am
Location: on the path.

Post by 2Green »

CXT:

Good God man, don't apologize to me!
I'm a high school dropout, at Grade 9!

NM
The music spoke to me. I felt compelled to answer.
User avatar
JimHawkins
Posts: 2101
Joined: Sun Nov 07, 2004 12:21 am
Location: NYC

Post by JimHawkins »

2Green wrote: Used in conjunction with conventional memory chips, the MXT chip makes sure that frequently used data and instructions are stored close to a computer's microprocessor, so they can be accessed quickly.
Well, this has always been the case with modern chips: frequently used data is kept close to the CPU in what's called Level 1 and Level 2 cache memory. I don't know how this differs, other than perhaps in speed and size.
2Green wrote: Less frequently used data and instructions are then compressed, so they take up less space, and stored in the remaining memory.
This may well be the case, but I'm a little skeptical that binary data, which is often already compressed, can be compressed much more than it already is. :? Also, any compression uses CPU power and time to compress and decompress, so I'm unclear how all this works to increase performance <not saying it doesn't>. It just sounds strange to me, since normally NOT compressing data is faster, which would speed up CPU access to it.
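A quick way to see the already-compressed-data point, as a minimal sketch using Python's standard zlib module. (zlib is a stand-in here; the article does not say which compression algorithm MXT actually uses.)

```python
# Demonstration: compressible vs. already-compressed-looking data.
# zlib is a stand-in; the IBM article does not name MXT's algorithm.
import os
import zlib

repetitive = b"ABCD" * 4096      # highly compressible
random_like = os.urandom(16384)  # statistically similar to already-compressed data

for label, data in [("repetitive", repetitive), ("random-like", random_like)]:
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{label}: compressed to {ratio:.0%} of original size")

# Typical output: the repetitive block shrinks to under 1% of its size,
# while the random-like block stays at roughly 100% -- compressing it
# again buys nothing, yet still costs CPU time.
```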
Shaolin
M Y V T K F
"Receive what comes, stay with what goes, upon loss of contact attack the line" – The Kuen Kuit
User avatar
-Metablade-
Posts: 1195
Joined: Fri Nov 04, 2005 4:54 pm

Post by -Metablade- »

Amdahl's Law
(On dual processors)

Sadly enough, there is a limit to the performance gain we can expect from our parallelization. This limit was described by Gene Amdahl in 1967. Here's the exact quote:

For over a decade prophets have voiced the contention that the organization of a single computer has reached its limits and that truly significant advances can be made only by interconnection of a multiplicity of computers in such a manner as to permit co-operative solution...The nature of this overhead (in parallelism) appears to be sequential so that it is unlikely to be amenable to parallel processing techniques. Overhead alone would then place an upper limit on throughput of five to seven times the sequential processing rate, even if the housekeeping were done in a separate processor...At any point in time it is difficult to foresee how the previous bottlenecks in a sequential computer will be effectively overcome.

What we can deduce from this is that in parallelization in general, there is always some part of the program that will be sequential. As problems get bigger and bigger, the sequential part will become more important and eventually place an upper limit on the maximal solution speed. We will demonstrate this upper limit in one of the next sections.

taken from:
http://tinyurl.com/nn2h8
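For reference, the formula behind the quote: if a fraction p of a job can be parallelized across n processors, the speedup is 1/((1-p) + p/n), which can never exceed 1/(1-p) no matter how many processors you add. A minimal Python sketch; the 85% parallel fraction is my assumption, chosen so the ceiling lands in the "five to seven times" range Amdahl mentions.

```python
# Amdahl's Law: speedup from parallelizing a fraction p of the work across n processors.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.85  # ASSUMED parallel fraction, i.e. 15% sequential overhead
for n in (2, 4, 16, 1024):
    print(f"{n:>4} processors: {speedup(p, n):.2f}x")

print(f"ceiling as n grows: {1.0 / (1.0 - p):.2f}x")  # ~6.7x, within Amdahl's "five to seven times"
```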
There's a bit of Metablade in all of us.
2Green
Posts: 1503
Joined: Thu Sep 23, 1999 6:01 am
Location: on the path.

Post by 2Green »

Jim:

The excerpts you quoted "from 2Green" were not mine; they were from IBM.
I just pasted the article FYI.

NM
The music spoke to me. I felt compelled to answer.
User avatar
-Metablade-
Posts: 1195
Joined: Fri Nov 04, 2005 4:54 pm

Post by -Metablade- »

2Green wrote:
More digital prosthetics! We live in interesting times.

NM
That reminds me of an old Chinese curse that goes like this:
"May you live in interesting times."
LOL
:lol: :lol: :lol:
There's a bit of Metablade in all of us.