Quote:
Originally Posted by Z-man
In theory, yes - it is pretty interesting stuff. But practically speaking, an AI computer is essentially a relational database with computational algorithms that can make connections via the database and generate output based on certain criteria. Yes, a database can grow (provided there is sufficient storage available) with more input, but it is difficult to get the computational algorithms to grow autonomously to the point where increased intelligence (or rather, improved algorithms) can be measured. Therein lies the rub.
Too many people confuse the issue by perceiving a computer mimicking human behavior as intelligence. The computer is only following a set of algorithms, which is not what intelligence is.
I would not worry about a Terminator knocking on your door just yet...
-Z
Seems like there's a lot of confused human behavior perceived by people as mimicking intelligence, too. But as far as the definition of intelligence goes, has anyone determined that intelligence isn't following a set of algorithms? Couldn't learning be described as following a set of algorithms with increasing efficiency?
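That "fixed rules, improving performance" idea can be made concrete with a toy sketch (my own illustration, not from anyone's post): an estimator that follows one unchanging update rule - a running mean - yet whose guess about a biased coin gets steadily better as it sees more data. The rule never changes; only the performance does.

```python
import random

def learn_coin_bias(flips, true_bias):
    """Apply one fixed update rule (a running mean) to a stream of coin
    flips; track how far the estimate is from the true bias over time."""
    estimate = 0.5          # initial guess: fair coin
    errors = []
    for n, flip in enumerate(flips, start=1):
        estimate += (flip - estimate) / n   # the rule itself never changes
        errors.append(abs(estimate - true_bias))
    return estimate, errors

# Simulate a coin that comes up heads 70% of the time.
random.seed(0)
true_bias = 0.7
flips = [1 if random.random() < true_bias else 0 for _ in range(2000)]

estimate, errors = learn_coin_bias(flips, true_bias)
```

Whether you call that "learning" or "just following an algorithm" is exactly the semantics question at issue: the behavior improves, but nothing about the procedure ever changed.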
I also wonder at what point mimicry becomes authentically learned behavior. Seems like at some point it becomes a semantics issue. A machine winning at Jeopardy sure looks like intelligence and acts like intelligence to me. At one point memory was considered integral to human intelligence; now, with machines holding gazillions of bits of memory, the definition of human intelligence has had to change.