Pelican Parts Forums

Pelican Parts Forums (http://forums.pelicanparts.com/)
-   Off Topic Discussions (http://forums.pelicanparts.com/off-topic-discussions/)
-   -   Thoughts on Artificial Intelligence (http://forums.pelicanparts.com/off-topic-discussions/852385-thoughts-artificial-intelligence.html)

David 02-19-2015 05:52 AM

Thoughts on Artificial Intelligence
 
I just read the two articles on AI on Wait But Why and it's pretty interesting and a bit scary.

Here's the first one: The AI Revolution: Road to Superintelligence - Wait But Why

It's a long read but sure gets you thinking.

The premise is there are three levels of AI: Artificial Narrow Intelligence (ANI), which is stuff we already have, like Google search and spam filters; Artificial General Intelligence (AGI), which would be on par with human intelligence; and Artificial Super Intelligence (ASI), which would be smarter than humans by roughly the same margin that we're smarter than ants.

Experts in the field expect us to reach AGI this century, perhaps in its first half. The scary part is that a computer with AGI could potentially increase its own intelligence and reach ASI within a few years, days, or even hours of reaching AGI. An ASI computer may not see much use for humans after that and would have no moral reason to keep us around. Or it may make humans immortal. Pretty interesting stuff.

MBAtarga 02-19-2015 06:50 AM

skynet....

scottmandue 02-19-2015 06:54 AM

We were looking at some videos here at work of the robot dog running up and down hills... even climbing stairs... the tech on robots is really moving.

A.I. however is another can of worms... they have some very, very clever machines, but they still are just machines.

JD159 02-19-2015 06:54 AM

Quote:

Originally Posted by MBAtarga (Post 8494268)
skynet....

When I used to work for Future Shop / Best Buy, a thing called Skynet controlled the scheduling and heating of the building...

Not even kidding...

Z-man 02-19-2015 06:55 AM

In theory, yes - it is pretty interesting stuff. But practically speaking, an AI computer is essentially a relational database plus computational algorithms that can make connections via the database and generate output based on certain criteria. Yes, a database can grow (provided there is sufficient storage available) with more input, but the computational algorithms are difficult to grow autonomously to the point where increased intelligence (or rather, improved algorithms) can be measured. Therein lies the rub.

Too many people confuse the issue by perceiving a computer mimicking human behavior as intelligence. The computer is only following a set of algorithms, which is not what intelligence is.
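The "following a set of algorithms" point can be made concrete with a toy sketch (all the rules and names below are invented for illustration, not from any real system): a program that appears conversational but only pattern-matches input against canned responses, with zero understanding behind it.

```python
# A minimal sketch of "mimicry without understanding": a chatbot that
# seems conversational but only keyword-matches against fixed rules.

RULES = [
    ("hello", "Hello! How are you today?"),
    ("weather", "Yes, the weather has been interesting lately."),
    ("911", "Ah, a fellow enthusiast. Tell me more about your Porsche."),
]

def reply(message: str) -> str:
    """Return the canned response for the first keyword found."""
    text = message.lower()
    for keyword, response in RULES:
        if keyword in text:
            return response
    return "Interesting. Please go on."  # fallback when no rule matches

print(reply("Nice weather we're having"))
```

It can fool a casual user for a sentence or two, yet there is nothing here that could ever improve its own rules.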

I would not worry about a Terminator knocking on your door just yet...

-Z

930addict 02-19-2015 06:56 AM

Meh. If the computers get out of line, just unplug them. This also works for Windows OS. Haha.

vash 02-19-2015 07:00 AM

this week? i'd be happy to see some "Actual" intelligence.

motion 02-19-2015 07:04 AM

I want a computer that does MI. Money Intelligence.

ckelly78z 02-19-2015 07:05 AM

Sarah Connor is currently doing military training in the desert, and maybe we should as well!

GH85Carrera 02-19-2015 07:25 AM

The entire AI bit is interesting.

People are legitimately concerned about computers and machines taking over. But they all run on electricity, or at least HAVE to have an electrical connection to function.

We can always unplug them, or turn off the electricity.

Terminator will never happen until a power source improved by orders of magnitude is developed. It defies physics in several ways to pack that much power into so small a package. In the movies, the power cell that runs him would fit in the palm of your hand, yet it powers him through feats that would require several large generators with a long power cord and many gallons of gasoline.

rcooled 02-19-2015 07:27 AM

http://forums.pelicanparts.com/uploa...1424363243.jpg

David 02-19-2015 07:32 AM

The comments about unplugging them, and about their being just machines, are covered in the articles. Of course it seems easy to stop, but the problem is we're only seeing the issue through the lens of human intelligence - just like a monkey can understand that a building exists but can't comprehend that it was built by a human. We can't even fathom what a superintelligent being or computer could invent or do.

GH85Carrera 02-19-2015 07:32 AM

In the book, HAL functioned as designed. He was programmed and told flat out that nothing was to stop him from investigating the monolith. He just followed the instructions he was given. Just like James Bond, he had a license to kill and a mission to accomplish.

stomachmonkey 02-19-2015 07:33 AM

Quote:

Originally Posted by Z-man (Post 8494282)
In theory, yes - it is pretty interesting stuff. But practically speaking, an AI computer is essentially a relational database plus computational algorithms that can make connections via the database and generate output based on certain criteria. Yes, a database can grow (provided there is sufficient storage available) with more input, but the computational algorithms are difficult to grow autonomously to the point where increased intelligence (or rather, improved algorithms) can be measured. Therein lies the rub.

Too many people confuse the issue by perceiving a computer mimicking human behavior as intelligence. The computer is only following a set of algorithms, which is not what intelligence is.

I would not worry about a Terminator knocking on your door just yet...

-Z

Thanks for saving me some typing.

The difference between sentient intelligence and AI is that sentient beings' brains are adaptable and, to some degree, self-wiring: neuroplasticity.

Think of your brain as a CPU. Adaptable, tunable.

Crowbob 02-19-2015 07:37 AM

Quote:

Originally Posted by Z-man (Post 8494282)
In theory, yes - it is pretty interesting stuff. But practically speaking, an AI computer is essentially a relational database plus computational algorithms that can make connections via the database and generate output based on certain criteria. Yes, a database can grow (provided there is sufficient storage available) with more input, but the computational algorithms are difficult to grow autonomously to the point where increased intelligence (or rather, improved algorithms) can be measured. Therein lies the rub.

Too many people confuse the issue by perceiving a computer mimicking human behavior as intelligence. The computer is only following a set of algorithms, which is not what intelligence is.

I would not worry about a Terminator knocking on your door just yet...

-Z

Seems like there's a lot of confused human behavior perceived by people as mimicking intelligence, too. But as far as the definition of intelligence goes, has anyone determined that intelligence isn't following a set of algorithms? Couldn't learning be described as following a set of algorithms with increasing efficiency?

I also wonder at what point mimicry becomes authentically-learned behavior. Seems like at some point it becomes a semantics issue. Some machine winning at Jeopardy sure looks like intelligence and acts like intelligence to me. At one point memory was considered to be integral to human intelligence. Now with a capacity for gazilliobits of memory in machines, the definition of human intelligence had to change.

Z-man 02-19-2015 08:06 AM

Quote:

Originally Posted by Crowbob (Post 8494387)
Seems like there's a lot of confused human behavior perceived by people as mimicking intelligence, too. But as far as the definition of intelligence goes, has anyone determined that intelligence isn't following a set of algorithms? Couldn't learning be described as following a set of algorithms with increasing efficiency?

I also wonder at what point mimicry becomes authentically-learned behavior. Seems like at some point it becomes a semantics issue. Some machine winning at Jeopardy sure looks like intelligence and acts like intelligence to me. At one point memory was considered to be integral to human intelligence. Now with a capacity for gazilliobits of memory in machines, the definition of human intelligence had to change.

Memory (i.e., the capacity to store data) is only the starting point of intelligence. What actions are taken based on the data is the next step, and the step after that is improving the actions. The first two a computer can do. The third one -- improving the action (i.e., cognitive learning) -- is where computers are inefficient in many respects.

Memory is a very interesting concept, and there are two main schools of thought on it -- take a simple object like a chair. In your mind, you can easily picture what a chair is. But what is stored in your mind? Is it a collection of all types of chairs that you have seen? Or is it more of a relational memory - you understand a chair to be something with 3-4 legs, a platform to sit on, and an optional backrest and armrests. Computers can be programmed to store memory in both ways: the former takes up more memory (storage), but the latter requires more computations to arrive at some conclusion.
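The two storage strategies described above can be sketched in a few lines of toy code (the feature names are invented for illustration): exemplar memory stores every chair ever seen, while relational memory stores a rule and computes membership on demand.

```python
# Strategy 1: exemplar memory -- store every chair we've seen.
# More storage, cheap lookup.
seen_chairs = [
    {"legs": 4, "seat": True, "backrest": True,  "armrests": False},
    {"legs": 3, "seat": True, "backrest": False, "armrests": False},
]

def is_chair_exemplar(obj) -> bool:
    """Match against the stored examples, nothing else."""
    return obj in seen_chairs

# Strategy 2: relational memory -- store a rule, compute on demand.
# Less storage, more computation.
def is_chair_relational(obj) -> bool:
    """Check the defining relations: 3-4 legs and a seat; backrest
    and armrests are optional."""
    return 3 <= obj["legs"] <= 4 and obj["seat"]

stool = {"legs": 3, "seat": True, "backrest": False, "armrests": False}
print(is_chair_exemplar(stool), is_chair_relational(stool))
```

A never-before-seen armchair fails the exemplar lookup but passes the relational rule, which is exactly the trade-off described above.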

So - let's say an AI robot is able to recognize a chair. Fine. And he is programmed to sit in a chair once he finds one. That is the total extent of his knowledge -- once he sees a chair, he sits in the chair. Great - this can be easily accomplished even today. However - it would be very difficult for this robot to learn other things: what if the chair is on its side? How does the robot sit in the chair then? If he is programmed to maneuver his body to the chair's location, then he will likely lay down and align his body to the chair, rather than stand the chair upright and then sit on the chair. What if the chair is in pieces? Unless programmed to recognize the pieces and assemble the chair, the robot would not be able to construct a chair. What if there is no chair? How would the robot learn to sit instead on a ledge, table, or stoop? These are types of simple intelligence that are not trivial to solve.
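The brittleness of that chair-sitting robot is easy to sketch (hypothetical pseudocode-style Python, not any real robotics API): the robot handles exactly the one case it was programmed for and gives up on everything else.

```python
# A brittle rule-following robot: one programmed behavior, no improvisation.

def act_on(scene: str) -> str:
    if scene == "upright chair":
        return "sit on chair"
    # No rule covers a tipped-over chair, chair parts, or no chair at
    # all -- the robot has simply run out of programmed choices.
    return "do nothing"

print(act_on("upright chair"))       # handled
print(act_on("chair on its side"))   # not handled: no improvisation
```

A human would stand the chair up; the robot has no rule for that, and no mechanism to invent one.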

-Z-man.

Crowbob 02-19-2015 08:24 AM

Great discussion, thank you Z.

What you are describing are the limitations we currently face. However, I am reminded that not long ago it was pretty much definitive that no machine could produce beautiful and novel art. Say, for discussion, a painting that satisfies our current notions of esthetics, even as individual and subjective as esthetics are.

Today, a drone could easily and remotely take a photo of a landscape, a woman, or even a chair. Connect that image up to a printer and voilà - an esthetically pleasing, novel image is created. Were one to show that image to someone from a century ago, that person quite likely would not be able to comprehend that that beautiful, novel image was not produced by the hand of a human being.

Just as today, you ask how a robot could 'learn' to sit on a ledge or reassemble a chair from pieces. My friend from the past would ask very similar questions about the painting.

GH85Carrera 02-19-2015 08:54 AM

When I think of computers having real artificial intelligence, I think of that as having self-awareness, like Commander Data on Star Trek. Even the ship's computer could understand commands and complicated questions, but it did not have an identity and was not cognizant.

While it is certainly possible to program a computer to control a camera, photograph a sunset by pointing west at the right time, and even send it to the printer, it has no sense of what the photograph is. It might just be a sunset on a rainy day, nothing pretty at all.

I guess we can hope that when real self-awareness is developed, they can program in Asimov's three laws of robotics as the basis of the program.

Crowbob 02-19-2015 09:07 AM

Careful, Glen. You may be swerving back into a discussion of the soul. That issue having been resolved many times right here on PARF!

But back to the artificial artist.

There are many artists and a whole lot of art that isn't pretty at all, or so I'm told - being neither an artist nor pretty, much to my dismay.

Z-man 02-19-2015 09:10 AM

Quote:

Originally Posted by Crowbob (Post 8494501)
Great discussion, thank you Z.

What you are describing are the limitations we currently face. However, I am reminded that not long ago it was pretty much definitive that no machine could produce beautiful and novel art. Say, for discussion, a painting that satisfies our current notions of esthetics, even as individual and subjective as esthetics are.

Today, a drone could easily and remotely take a photo of a landscape, a woman, or even a chair. Connect that image up to a printer and voilà - an esthetically pleasing, novel image is created. Were one to show that image to someone from a century ago, that person quite likely would not be able to comprehend that that beautiful, novel image was not produced by the hand of a human being.

Just as today, you ask how a robot could 'learn' to sit on a ledge or reassemble a chair from pieces. My friend from the past would ask very similar questions about the painting.

That, in my opinion, is not AI -- it would be the next action that would be the starting point for AI. Let me explain...

Once the drone has established that a sunset picture falls within the parameters of an esthetically pleasing landscape, how does the computer learn about other esthetically pleasing landscapes? Let's assume the computer realizes that a mountain is in its sunset picture, and extrapolates that mountains are esthetically pleasing (big stretch here, I know...). OK - so the drone starts taking pictures of all types of mountains on the assumption that they are esthetically pleasing. Unfortunately, not all mountains and mountain shapes are pleasing, and the drone's computer has no way of discerning between Mt. Rainier, Mt. Fuji, a mound of garbage, and a pile of steaming poo. All it knows is that a mountain shape is pleasing. One would have to program the AI with more parameters to resolve this -- but that's not AI, that's just humans improving the code...
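That over-generalization can be sketched as a toy "classifier" (the single feature here is invented for illustration): when the whole aesthetic model is one learned feature, the drone literally cannot tell the pleasing peaks from the unpleasant ones.

```python
# The drone's entire aesthetic model after "learning" from one sunset:
# a single feature, with no way to discern what actually made it pleasing.

def is_pleasing(scene: dict) -> bool:
    """One learned feature is the whole model."""
    return scene["has_peak_shape"]

mt_fuji      = {"name": "Mt. Fuji",     "has_peak_shape": True}
garbage_pile = {"name": "garbage pile", "has_peak_shape": True}

# Both pass -- only a human adding more features (i.e., improving the
# code) can separate them.
print(is_pleasing(mt_fuji), is_pleasing(garbage_pile))
```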

Over time, humans have developed more intelligent machines. But that's not AI -- that's just humans improving the code that runs the machines.

And we haven't even begun to discuss topics like emotions, morality, ethics, violence and so on...

-Z

stomachmonkey 02-19-2015 09:55 AM

Quote:

Originally Posted by Z-man (Post 8494589)
That, in my opinion, is not AI -- it would be the next action that would be the starting point for AI. Let me explain...

Once the drone has established that a sunset picture falls within the parameters of an esthetically pleasing landscape, how does the computer learn about other esthetically pleasing landscapes? Let's assume the computer realizes that a mountain is in its sunset picture, and extrapolates that mountains are esthetically pleasing (big stretch here, I know...). OK - so the drone starts taking pictures of all types of mountains on the assumption that they are esthetically pleasing. Unfortunately, not all mountains and mountain shapes are pleasing, and the drone's computer has no way of discerning between Mt. Rainier, Mt. Fuji, a mound of garbage, and a pile of steaming poo. All it knows is that a mountain shape is pleasing. One would have to program the AI with more parameters to resolve this -- but that's not AI, that's just humans improving the code...

Over time, humans have developed more intelligent machines. But that's not AI -- that's just humans improving the code that runs the machines.

And we haven't even begun to discuss topics like emotions, morality, ethics, violence and so on...

-Z

It's roughly 6 AM and I'm on a charter bus in Italy heading to Venice.

Bus pulls into a rest area for gas and to give everyone a chance to stretch their legs and grab some food.

My bus mates head into the McDonalds.

I decide I'd rather go hungry for the next two hours than eat that crap.

There's a supermarket next door.

My Italian is kinda weak but I head in anyway.

First thing I pass is Champagne - hmm, grab a bottle and head straight for the OJ. I have no idea where in this supermarket it is, and the aisle signs are no help, but supermarkets are all different and yet at the same time the same. E.g., eggs are in the refrigerated dairy section even though the location of that section varies. I don't need to know where the OJ is; I just need to find the section that contains associated items.

Anyway, grab some OJ and a bit of packaged ham, some cheese from the deli section and a couple of baguettes from the bread aisle.

I like mayo on my sandwich but I'm not buying an entire jar.

Pay up and head back to the McDonalds and hit the condiment section for a couple of packets of Mayo and snag a complimentary water cup for my Mimosas.

I assessed my condition, my options and devised a unique solution based on a one time situation.

John Rogers 02-19-2015 10:29 AM

We cover this in a couple of the introductory computer classes I teach, usually when someone asks about computers getting smarter than humans, or why a computer suddenly acts strange, or there seems to be a bug, etc. My response goes like this: as noted in earlier posts, computers do what we tell them to do, i.e., lines of computer code that execute instructions. Where the weirdness comes in is a lack of memory, or overwriting existing code in a memory area that is not supposed to be shared. In all cases the computer is not "thinking" but has run out of possible choices for action. In some cases when this happens the code can be reset, as used to happen with the Ada programming language, where a program would make decisions based on past actions and actually came close to "thinking". This was supposed to help our first cruise missiles fly more accurately and actually make corrections based on existing parameters and readings.

Ada was overtaken by C++, Java, and others, which lean more toward regular programs that just do stuff. With the Ada programs I worked on, we still had to program in the possible choices and possible actions, but the choices were allowed to be flexible, so it was a sort of AI, you could say.
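The "overwritten shared memory" weirdness mentioned above can be loosely illustrated even in Python (which has no raw pointers, so this is only an analogy): two parts of a program unintentionally share one mutable structure, and a write in one place corrupts the other's data.

```python
# Two "modules" unintentionally sharing one mutable structure.

flight_plan = [100, 200, 300]   # waypoints used by module A
sensor_log = flight_plan        # module B *thinks* this is a copy...

sensor_log[0] = -999            # ...so module B's write silently
print(flight_plan)              # corrupts module A's flight plan
```

The fix is an explicit copy (`sensor_log = flight_plan.copy()`); the bug looks exactly like the computer "acting strange" when really it did precisely what it was told.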

GH85Carrera 02-19-2015 10:42 AM

Quote:

Originally Posted by Crowbob (Post 8494585)
Careful, Glen. You may be swerving back into a discussion of the soul. That issue having been resolved many times right here on PARF!

But back to the artificial artist.

There are many artists and a whole lot of art that isn't pretty at all or so I'm told, not being either an artist or pretty much to my dismay.

One thing for sure, there is a LOT of "art" that is 100% crap or worse. Just like pornography, it is difficult to describe, but I know it when I see it.

The soul has nothing to do with it. If the machine is just searching every possible permutation of a possible answer and picking the one that matches a solution, it is not really intelligent. Watson beat the very best of the best human Jeopardy players. Try to get it to figure out how to pick a good color to paint the room to match the decorations.

Crowbob 02-19-2015 10:54 AM

In order for me to accept your analogy, Z, I would also have to accept the notion that every painting I execute would necessarily be a masterpiece and would replace the junk currently in the Louvre, one by one. But it probably won't (based solely on the knowledge that it hasn't - yet). All I know is that my painting is pleasing - even the one I have not yet painted. What does that make my intelligence? Artificial? Human? Nonexistent?

Secondly, let us stipulate that my drone, Vermeer 2.0, sustained a power surge and created something more pleasing than the first Girl with a Pearl Earring. Are you saying that that painting is not the result of intelligence? If so, we would also have to eliminate the term 'happy accident' from our vocabulary of the arts. But certainly there would be no need to improve the code, and it may even be counterproductive to do so.

In truth, we need not concern ourselves with topics like emotions, morality, ethics, etc. because there certainly are intelligent humans without those qualities and many refer to them as artificial.

As I said before, and with which you disagree, at some point it becomes semantic.

Stomachmonkey: I could argue that your scenario is simply one of problem-solving. With enough data, a machine could locate and retrieve every one of your ingredients and enjoy a Tuscan picnic. But why would it? More probably the machine would be searching for a USB outlet to satisfy its hunger - or not. Your vignette was further interesting because just such a thing happened to me on the Autostrada between Rome and Venice one fine September afternoon.

Crowbob 02-19-2015 11:06 AM

Glen,

It is likely that there never will be a human who could beat Deep Blue AND pick the right color to go with the drapes. It would probably require two people with different skills. Similarly, two machines hooked together with different code could do it.

I just think the abyss between AI and human intelligence will become more and more narrow. It probably will never be bridged but at some point it won't matter.

scottmandue 02-19-2015 11:10 AM

Quote:

Originally Posted by 930addict (Post 8494286)
Meh. If the computers get out of line just unplug them. This also works for windows os. Haha.

But the iPhones will turn and plug the computers back in!

911SauCy 02-19-2015 11:23 AM

I defer to Elon Musk.

scottmandue 02-19-2015 11:29 AM

Quote:

Originally Posted by Z-man (Post 8494282)
In theory, yes - it is pretty interesting stuff. But practically speaking, an AI computer is essentially a relational database plus computational algorithms that can make connections via the database and generate output based on certain criteria. Yes, a database can grow (provided there is sufficient storage available) with more input, but the computational algorithms are difficult to grow autonomously to the point where increased intelligence (or rather, improved algorithms) can be measured. Therein lies the rub.

-Z

Talking to you makes my head hurt ;)

motion 02-19-2015 11:43 AM

Einstein said that human behavior could be defined and predicted mathematically. Do the math.

Crowbob 02-19-2015 11:50 AM

That Beautiful Mind guy got a Nobel Prize for doing the math. Then he went whimsical in the brain pan.

Z-man 02-19-2015 12:06 PM

Quote:

Originally Posted by Crowbob (Post 8494747)
In order for me to accept your analogy, Z, I would also have to accept the notion that every painting I execute would necessarily be a masterpiece and would replace the junk currently in the Louvre, one by one. But it probably won't (based solely on the knowledge that it hasn't - yet). All I know is that my painting is pleasing - even the one I have not yet painted. What does that make my intelligence? Artificial? Human? Nonexistent?

Your definition of a masterpiece is subjective, and subjectivity is an analog concept. In the digital world, there is no room for subjectivity between the zeros and ones. The human intelligence concepts I've mentioned (emotions, morals, ethics, etc.) are subjective concepts, and likewise exist between the peaks (1's) and valleys (0's) of the digital world. Simply put, subjectivity does not exist in the digital world. That is the gap or abyss you refer to in a subsequent post.

Quote:

Secondly, let us stipulate that my drone, Vermeer 2.0, sustained a power surge and created something more pleasing than the first Girl with a Pearl Earring. Are you saying that that painting is not the result of intelligence? If so, we would also have to eliminate the term 'happy accident' from our vocabulary of the arts. But certainly there would be no need to improve the code, and it may even be counterproductive to do so.
Random events can have a beautiful element to them. Just look up at the stars on a dark night. And, subjectively speaking, it is possible that Vermeer 2.0 can create a more beautiful masterpiece. HOWEVER - Vermeer 2.0 has no idea that its creation is more beautiful, and thus on its own it is unable to 'learn' how to continue to make things more and more beautiful. Sure, given enough time, it can create something more beautiful - but that falls under the realm of random events, not intelligent learning. The creation of a beautiful thing, the acknowledgement of it, and the application of that experience to further create more beautiful things are all separate concepts. If the first is mastered, one cannot assume that the other two follow suit.
Quote:

In truth, we need not concern ourselves with topics like emotions, morality, ethics, etc. because there certainly are intelligent humans without those qualities and many refer to them as artificial.

As I said before, and with which you disagree, at some point it becomes semantic.
I disagree. It is these analog characteristics of humans (emotions, morality, ethics...etc) that enable us to further learn and develop our intelligence. And these aspects are inherent in all humans - some experience these on a lesser scale - but they are still there.

Quote:

Stomachmonkey: I could argue that your scenario is simply one of problem-solving. With enough data, a machine could locate and retrieve every one of your ingredients and enjoy a Tuscan picnic. But why would it? More probably the machine would be searching for a USB outlet to satisfy its hunger - or not. Your vignette was further interesting because just such a thing happened to me on the Autostrada between Rome and Venice one fine September afternoon.
That reminds me - I gotta start planning our summer European vacation soon... :D

-Z

911_Dude 02-19-2015 01:17 PM

That article was a long but very interesting read. Thanks

930addict 02-19-2015 02:36 PM

I used to work for a company that was developing AI APIs. I was just a sysadmin, but it was there that I got the bug to go back to school, as I was the only one there without a PhD and they were doing some really cool stuff. For fun they let me take the test they gave to AI scientist applicants, and I was surprised that I got 78% with no background, which they said was better than most of their applicants. LOL. But I digress. Anyhow, that was back in the 90's.

One of their demos was a racetrack that allowed you to change the track on the fly, and the car would adjust its course using pattern recognition. They also developed software for OCR and voice recognition. In fact, Dragon NaturallySpeaking at one time used their AI engine. These types of things are based on patterns. Indeed, many of the smart computers and even self-driving cars are built around patterns and things that can be quantified and analyzed.

To build on what Z-man already stated: a human can look at a folding chair and know that they can open it and sit on it. It might take some investigative work, but a human who had never seen a chair would figure it out. As an example, I have a video of my girls playing with a soccer ball and a volleyball when they were very little - I don't know the exact age, but neither of them was talking yet. In the video, after kicking the balls around, they placed the balls on the ground, sat on them, faced each other, and proceeded to talk gibberish to each other. They knew the balls were there to play with, but there is a human quality that told them they could also use them as something to sit on. That quality is reasoning. What computers lack is the ability to reason, and computer reasoning is just one field of AI research that is extremely complicated, involving fuzzy logic and such. Also, curiosity is what drives humans to learn things. Computers work in numbers, so how do you create an algorithm for curiosity?
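For a small taste of the fuzzy logic mentioned above, here is a toy sketch (the features and weights are made up for illustration): instead of a hard yes/no, an object gets a degree of membership in "sittable", which is how a ball can be partly a seat.

```python
# Fuzzy membership: "sittable" as a degree in [0, 1], not a yes/no.

def sittable_degree(height_cm: float, flat_top: bool) -> float:
    """Return this object's degree of membership in 'good to sit on'."""
    # Ideal seat height around 45 cm; confidence tapers off linearly.
    height_fit = max(0.0, 1.0 - abs(height_cm - 45.0) / 45.0)
    surface_fit = 1.0 if flat_top else 0.5  # a ball is still sittable-ish
    return height_fit * surface_fit

print(round(sittable_degree(45, True), 2))   # a chair: 1.0
print(round(sittable_degree(22, False), 2))  # a soccer ball: 0.24
```

The hard part, as noted above, is not computing such a degree but deciding, without a human, which features belong in the function at all.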

A cool read on the subject is Gödel, Escher, Bach by Douglas Hofstadter. It's not an easy read, but it is interesting regarding patterns, making meaning out of the meaningless, and how he sees AI playing out.

Nostril Cheese 02-19-2015 04:07 PM

Quote:

Originally Posted by 930addict (Post 8495162)

A cool read on the subject is Gödel, Escher, Bach by Douglas Hofstadter. It's not an easy read, but it is interesting regarding patterns, making meaning out of the meaningless, and how he sees AI playing out.

That's one of the coolest damn books I've ever read.

Crowbob 02-19-2015 05:38 PM

It has become clear to me I am waaaaay out of my league with you guys! Thank you all for your patience.

Nevertheless, I find comfort that there are minds like yours who will lead (or at least be near the front of the pack) in the pursuit of the discovery of the nature of man.

I, on the other hand, am satisfied enough to enjoy the knowledge that the pursuit persists.

Crowbob 06-29-2015 03:26 PM

Quote from Z-man, post 31 above: "The human intelligence concepts I've mentioned (emotions, morals, ethics...etc.) are subjective concepts, and also exist between the peaks (1's) and valleys (0's) of the digital world. Simply put, subjectivity does not exist in the digital world. This is the gap or abyss you refer to in a subsequent post."

Well I hope somebody gets those algorithms before they get us:

Threat from Artificial Intelligence not just Hollywood fantasy - Telegraph

"Dr Armstrong [Dr Stuart Armstrong, of the Future of Humanity Institute at Oxford University], who was speaking at a debate on artificial intelligence organised in London by the technology research firm Gartner, warns that it will be difficult to tell whether a machine is developing in a benign or deadly direction...

"As AIs get more powerful anything that is solvable by cognitive processes, such as ill health, cancer, depression, boredom, becomes solvable," he says. "And we are almost at the point of generating an AI that is as intelligent as humans."

930addict 06-29-2015 03:53 PM

Here's a more realistic/less hyped view on AI from people that aren't trying to sell books.

You've read the hype, now read the truth. How close is the AI apocalypse? | GeekTime

;-)

flatbutt 06-29-2015 04:47 PM

Quote:

Originally Posted by Crowbob (Post 8495445)
It has become clear to me I am waaaaay out of my league with you guys! Thank you all for your patience.

.

Baloney!

Here's an older work of fiction you might enjoy;
https://en.wikipedia.org/wiki/The_Adolescence_of_P-1

Crowbob 06-29-2015 05:08 PM

Less hyped/more realistic? Clearly that depends. Apparently we have differing views from various scientists. Perhaps we need a consensus to settle the science. After all, it is called artificial intelligence. We're having a difficult enough time trying to determine what authentic intelligence is, so who's to say which intelligence is artificial (machines) and which is authentic (human)? Here are some interesting quotes from the Geektime piece.

"In response to Geektime’s query about Musk’s comments, Lanier said, “Elon is [in a sense] correct because if people believe in AI enough, then in practice, events can unfold as if AI is real.”

It also justifies a world in which we put algorithms on a pedestal and believe they will solve all our problems. Jaron Lanier compares it to a religion:

“In the history of organized religion,” he told Edge.org, “it’s often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.”

“That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else…contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, “Well, but they’re helping the AI, it’s not us, they’re helping the AI. The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.”

Or like the economic effects of that other new religion of Global Warming? The science of global warming is settled just as the science of AI is, apparently.

930addict 06-29-2015 06:37 PM

Correct. I was just putting out there that not everyone thinks AI is going to destroy the planet tomorrow. The fear-mongering articles have about as much meaning as the geologists who say California could have a massive earthquake in the next 30 years. I don't need a PhD to come to that conclusion. Same logic applies here.

How do we define intelligence? The type of intelligence needed to take over the human race or make us obsolete is a type of intelligence I don't have to worry about in my lifetime. In fact, I think it's so far off that we have a better chance of colonizing another planet first. Making smart tools to handle everyday things is surely attainable and will drive a new economy. But a conscious technology that will make its own decisions and act in its own self-interest? I think not.

I have fiddled (not seriously) with neural networks and computer vision. I'm far, far, far from any type of AI scientist/engineer, but I think it's cool stuff, so on occasion in my downtime I fiddle. There are APIs that pick out patterns. The system has no idea what the data represents; it just knows the statistics of certain points of a photograph. For example, my family all have the same eye/bridge-of-nose shape, and the facial recognition frequently gets my daughters mixed up. The intelligence is not really AI - it's smart technology that uses statistics to recognize patterns, but it is not smart enough to cognitively decide, "hey, what other element can I look for to match these two pictures?" A programmer can add code to say, "hey, if the user corrects more than three times, then scan the rest of the photograph and look for other marks like freckles, which are defined by function F, and birthmarks, which are defined by function B". The self-awareness, consciousness stuff is what is lacking, and really I can't see it ever coming to fruition.
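The hand-coded escalation rule described above ("if the user corrects more than three times, look for other marks") can be sketched directly (the feature names and threshold are hypothetical): the extra check is something a human wired in, not something the system decided to try.

```python
# A human-written escalation rule bolted onto a cheap face matcher.

def match_faces(a: dict, b: dict, corrections: int) -> bool:
    # Base comparison: eye shape and nose bridge only.
    same = (a["eye_shape"] == b["eye_shape"]
            and a["nose_bridge"] == b["nose_bridge"])
    if corrections > 3:
        # Escalate: a human told us the cheap features aren't enough,
        # so also compare freckles.
        same = same and a["freckles"] == b["freckles"]
    return same

sister1 = {"eye_shape": "round", "nose_bridge": "low", "freckles": True}
sister2 = {"eye_shape": "round", "nose_bridge": "low", "freckles": False}

print(match_faces(sister1, sister2, corrections=0))  # confused: True
print(match_faces(sister1, sister2, corrections=4))  # escalated: False
```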

Incidentally, neural networks are really cool but are slow to learn. This is one of the reasons they lost popularity in AI circles until recently, as graphics chips have proven to be faster for the computations. But even with that they are slow, and they require assigning weights to inputs. It gets complicated and big real fast: every permutation of every input needs to be analyzed and weighted, and depending on how deep the network is, it could take a long time to solve even the simplest of problems.
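The weight-assignment idea above can be shown with the smallest possible example (a toy, nowhere near a modern deep network): a single perceptron learns the AND function by nudging its two weights and bias after every wrong answer.

```python
# A single perceptron learning AND by adjusting weights after mistakes.

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
rate = 0.1

def predict(x) -> int:
    """Fire (1) if the weighted sum of inputs clears the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for _ in range(20):                      # a few passes over the data
    for x, target in samples:
        error = target - predict(x)      # -1, 0, or +1
        w[0] += rate * error * x[0]      # nudge each weight...
        w[1] += rate * error * x[1]
        bias += rate * error             # ...and the bias

print([predict(x) for x, _ in samples])  # learned AND: [0, 0, 0, 1]
```

Even this four-sample problem takes several passes; scale the inputs and depth up and the slowness described above becomes obvious.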

Another area that is lacking is storage. Where is an independent machine going to store all the data needed to make conscious decisions? Storage technology, IMHO, has not been able to keep up with what's needed by super-smart machines. I spoke to a manufacturing engineer on one of my flights who works with Micron, and he was telling me about some of the advancements they are making in storage. It's pretty cool stuff, but there are physical limitations we need to overcome to put more bits in smaller areas. Then there's the issue of powering the cybernetic beasts. We've got a long way to go before we start fearing our smart technology. Capitalize on it? Absolutely. New economy? Absolutely. SkyNet? Nope. I think to get to that point we will need to use DNA-based organisms like what these people did: Artificial Intelligence Created Using Human DNA

Those are my .02 and I'm sticking to it. SmileWavy

