Monkey with a mouse
Join Date: Oct 2000
Location: SoCal
Posts: 6,006
The "Singularity" is loosely defined as the moment machine intelligence surpasses human intelligence and the machines begin improving themselves. The argument goes that after this point the machines will reach an almost unimaginable level of intelligence . . . and keep improving.
Ray Kurzweil has a great site and some interesting things to say about AI and the Singularity here:
http://www.kurzweilai.net/index.html?flash=1
As Chris (Legion) said previously, science-fiction writers tend to underestimate hardware advances and overestimate software advances. Kurzweil has actually been very accurate regarding hardware advances, but has been too optimistic regarding software, IMO.
Here's an excerpt from the foreword Kurzweil wrote for James Gardner's book "The Intelligent Universe":
Quote:
By 2029, sufficient computation to simulate the entire human brain, which I estimate at about 10^16 (10 million billion) calculations per second (cps), will cost about a dollar. By that time, intelligent machines will combine the subtle and supple skills that humans now excel in (essentially our powers of pattern recognition) with ways in which machines are already superior, such as remembering trillions of facts accurately, searching quickly through vast databases, and downloading skills and knowledge.
But this will not be an alien invasion of intelligent machines. It will be an expression of our own civilization, as we have always used our technology to extend our physical and mental reach. We will merge with this technology by sending intelligent nanobots (blood-cell-sized computerized robots) into our brains through the capillaries to intimately interact with our biological neurons. If this scenario sounds very futuristic, I would point out that we already have blood-cell-sized devices that are performing sophisticated therapeutic functions in animals, such as curing Type I diabetes and identifying and destroying cancer cells. We already have a pea-sized device approved for human use that can be placed in patients’ brains to replace the biological neurons destroyed by Parkinson’s disease, the latest generation of which allows you to download new software to your neural implant from outside the patient.
If you consider what machines are already capable of, and apply a billion-fold increase in price-performance and capacity of computational technology over the next quarter century (while at the same time we shrink the key features of both electronic and mechanical technology by a factor of 100,000), you will get some idea of what will be feasible in 25 years.
By the mid-2040s, the nonbiological portion of the intelligence of our human-machine civilization will be about a billion times greater than the biological portion (we have about 10^26 cps among all human brains today; nonbiological intelligence in 2045 will provide about 10^35 cps). Keep in mind that, as this happens, our civilization will become capable of performing more ambitious engineering projects. One of these projects will be to keep this exponential growth of computation going. Another will be to continually redesign the source code of our own intelligence. We cannot easily redesign human intelligence today, given that our biological intelligence is largely hard-wired. But our future—largely nonbiological—intelligence will be able to apply its own intelligence to redesign its own algorithms.
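As an aside, the "billion times greater" claim is just exponent subtraction on Kurzweil's own estimates, which a quick Python check confirms (the figures are his projections, not measurements):

```python
# Kurzweil's estimates from the quoted passage
human_cps = 10**26        # total computation of all human brains today
nonbio_cps_2045 = 10**35  # projected nonbiological computation in 2045

ratio = nonbio_cps_2045 // human_cps
print(ratio)  # 10^9, i.e. about a billion times greater
```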
So what are the limits of computation? I show in my book that the ultimate one-kilogram computer (less than the weight of a typical notebook computer today) could perform about 10^42 cps if we want to keep the device cool, and about 10^50 cps if we allow it to get hot. By hot, I mean the temperature of a hydrogen bomb going off, so we are likely to asymptote to a figure just short of 10^50 cps. Consider, however, that by the time we get to 10^42 cps per kilogram of matter, our civilization will possess a vast amount of intelligent engineering capability to figure out how to get to 10^43 cps, and then 10^44 cps, and so on.
So what happens then? Once we saturate the ability of matter and energy to support computation, continuing the ongoing expansion of human intelligence and knowledge (which I see as the overall mission of our human-machine civilization), will require converting more and more matter into this ultimate computing substrate, sometimes referred to as “computronium.”
What is that limit? The overall solar system, which is dominated by the sun, has a mass of about 2 × 10^30 kilograms. If we apply our 10^50 cps per kilogram limit to this figure, we get a crude estimate of 10^80 cps for the computational capacity of our solar system. There are some practical considerations here, in that we won’t want to convert the entire solar system into computronium, and some of it is not suitable for this purpose anyway. If we devoted 1/20th of 1 percent (.0005) of the matter of the solar system to computronium, we get capacities of 10^69 cps for “cold” computing and 10^77 cps for “hot” computing. I show in my book how we will get to these levels using the resources in our solar system within about a century.
I’d say that’s pretty rapid progress. Consider that in 1850, a state-of-the-art method to transmit messages was the Pony Express, and calculations were performed with an ink stylus on paper. Only 250 years later, we will have vastly expanded the intelligence of our civilization. Just taking the 10^69 cps figure, if we compare that to the 10^26 cps figure, which represents the capacity of all human biological intelligence today, that will represent an expansion by a factor of 10^43 (10 million trillion trillion trillion).
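For what it's worth, the computronium arithmetic in the excerpt is internally consistent. A short Python sanity check, using only the figures Kurzweil gives (solar-system mass, per-kilogram limits, and the 0.0005 fraction):

```python
import math

# All inputs are Kurzweil's figures from the quoted passage, not measurements
solar_mass_kg = 2e30     # mass of the solar system, dominated by the sun
cold_cps_per_kg = 1e42   # "cold" computing limit per kilogram
hot_cps_per_kg = 1e50    # "hot" computing limit per kilogram
fraction = 0.0005        # 1/20th of 1 percent devoted to computronium

total_hot = solar_mass_kg * hot_cps_per_kg         # ~2e80, his "crude estimate of 10^80"
computronium_kg = fraction * solar_mass_kg         # 1e27 kg of computronium
cold_capacity = computronium_kg * cold_cps_per_kg  # 1e69 cps "cold"
hot_capacity = computronium_kg * hot_cps_per_kg    # 1e77 cps "hot"
expansion = cold_capacity / 1e26                   # vs. all human brains today: 1e43
```

Each figure matches the text: 10^80 for the whole solar system, 10^69 and 10^77 for the 0.0005 fraction, and the 10^43 expansion factor over today's 10^26 cps of biological intelligence.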
Best,
Kurt
Last edited by kstar; 01-28-2009 at 08:08 PM..