How close are we?

Debi

How Close Are We to Kubrick's AI-Controlled Vision of the Future?

A murderous computer named HAL in the film "2001: A Space Odyssey" (1968).
Credit: Warner Bros. Pictures
"I'm sorry Dave, I'm afraid I can't do that."

Movie audiences first heard these calmly intoned and ominous words in 1968, spoken by a spaceship's intelligent computer in the science-fiction masterpiece "2001: A Space Odyssey." With that one phrase, the computer named HAL 9000 confirmed that it could think for itself, and that it was prepared to terminate the astronauts who were planning to deactivate it.

Fifty years after director Stanley Kubrick released his visionary masterpiece of space colonization, how close are humans to the future that he imagined, in which we partner with artificial intelligence (A.I.) that we ultimately may not be able to control?

Full story at site
__________________________________

So, you guys in the techie world! Just how close ARE we? Are we at the level of HAL now? How about Skynet from Terminator? Is it possible?
 
  • Like
Reactions: Paintman
I must admit I love my technology, but how far do we really need to go? I have Skynet fears too.
 
So, you guys in the techie world! Just how close ARE we? Are we at the level of HAL now? How about Skynet from Terminator? Is it possible?

We're very close to systems that look and feel like that. We are quite far away from systems that actually are that.

Computers are best described as "the world's fastest idiot." Ever tried to do something, and what the computer did wasn't what you wanted, even though anyone looking on could have told what you meant? That's because the computer had been told that when X happens, do Y. It wasn't told BY YOU what to do, but by the person(s) who wrote the software running behind the scenes. So that person either made a mistake, or they didn't consider that you might want the computer to do something other than Y at that point in the task. A computer will do anything you tell it to do, provided either a) it knows how to do it, or b) it has a collection of things it knows how to do and you've set it up so it can recognize the end state.
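Here's a rough toy example of what I mean, in Python (purely my own illustration, not code from any real product). The program only handles the events its author wrote rules for; anything else falls through to a dead end:

```python
def handle_event(event):
    # Rules chosen by whoever wrote the software, not by the user at the keyboard.
    rules = {
        "double_click": "open the file",
        "right_click": "show the context menu",
        "drag_to_trash": "delete the file",
    }
    # Anything the programmer never anticipated falls through to "do nothing".
    return rules.get(event, "no rule for this event, so do nothing")

print(handle_event("double_click"))     # open the file
print(handle_event("shake_the_mouse"))  # no rule for this event, so do nothing
```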

That last is very, VERY important.

The way current AI actually works is that a system is given one or more tools and told how the tools work (often it's also given sample ways the tool can be used). Then the AI part of the code is written, where the system can attempt various ways to use the tool to reach an end state, OR can experiment with the way the tool works to produce and store its own end states, such that if an operator says "I want this condition," it can either retrieve the steps it already worked out or build new steps based on what it knows the tool can do. But you know what you can't do? You can't give it a new tool with no explanation.
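A toy sketch of that loop, again just my own illustration and not how any particular product is written: the system gets a fixed set of "tools" (here, two arithmetic operations), a way to recognize the end state, and it searches for a sequence of known steps that reaches it, caching any plan it has already worked out. A goal the tools can't reach simply comes back empty:

```python
from collections import deque

# The only operations the system "knows how to do", supplied by a person.
ACTIONS = {
    "add_one": lambda x: x + 1,
    "double":  lambda x: x * 2,
}

PLAN_CACHE = {}  # goal name -> steps already worked out on an earlier run

def find_plan(start, goal_test, goal_name, max_depth=10):
    """Breadth-first search over sequences of the known actions."""
    if goal_name in PLAN_CACHE:
        return PLAN_CACHE[goal_name]   # retrieve the already-processed steps
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, plan = queue.popleft()
        if goal_test(state):           # the operator's "I want this condition"
            PLAN_CACHE[goal_name] = plan
            return plan
        if len(plan) >= max_depth:
            continue
        for name, op in ACTIONS.items():
            nxt = op(state)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, plan + [name]))
    return None  # no combination of the known tools reaches that end state

print(find_plan(1, lambda x: x == 10, "reach ten"))
# ['add_one', 'double', 'add_one', 'double']
print(find_plan(1, lambda x: x == -5, "reach minus five"))
# None: neither tool can ever make the number smaller
```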

Suppose you gave a computer access to a robotic claw and told it how the claw worked. It could experiment with the claw to pick things up, move things around, even give you a high-claw. You could tell it you wanted three things stacked on top of one another, and it could work out how to pick the things up, how to put one on top of the other, and even that the big thing needs to go on the bottom for the smaller things to stay on top. But if you told it a nail needed to go into a board, it couldn't do it, because that's not what the claw is for. If you just dropped a hammer next to the board and nail, it couldn't learn to use the hammer to pound the nail in UNLESS you had already primed it as part of telling it 'what the claw can do.'
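Continuing the same hypothetical sketch with the claw: every action it has been told about only moves blocks around, so a goal involving the nail stays unreachable no matter how long it searches, because nothing in its repertoire ever touches the nail:

```python
# Each claw action only changes where the blocks sit; nothing the system
# knows about ever affects the nail, so nail goals are unreachable by design.
CLAW_ACTIONS = {
    "stack_small_on_big": lambda s: {**s, "small_on_big": True},
    "unstack":            lambda s: {**s, "small_on_big": False},
}

def achievable(goal_test, start, max_steps=5):
    """Brute-force every short sequence of claw actions and test the goal."""
    frontier = [start]
    for _ in range(max_steps):
        if any(goal_test(s) for s in frontier):
            return True
        frontier = [op(s) for s in frontier for op in CLAW_ACTIONS.values()]
    return any(goal_test(s) for s in frontier)

world = {"small_on_big": False, "nail_in_board": False}
print(achievable(lambda s: s["small_on_big"], world))   # True: stacking is in the claw's repertoire
print(achievable(lambda s: s["nail_in_board"], world))  # False: no known action ever drives the nail
```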

This is all because the computer cannot imagine. And we lack the ability to program it to imagine because we don't know how that works, either. We can create illusions when dealing with computers and AI if we spend sufficient time and resources to do so, but we don't have the hardware, software, or understanding to create something that can become spontaneously sentient. If you ever have the chance to be present when someone is showing off their "learning AI", ask it to do something completely nonsensical. Just be ready to get thrown out for showing the limitations of the "smart" system.
 
  • Like
Reactions: Debi
OK, that makes sense. But now I have to ask about the recent Google (I think) issue where they had to shut down a system because it was "creating" its own language and no one knows why. What is that about?
 
This is what came to my mind also. I was under the impression the new AI was able to do problem solving, which is reasoning.
 
  • Like
Reactions: Knockoff
I looked up the Google thing, and it appears to be a conflation of several events. One, Google shut down their social media site because literally anyone with programming knowledge could get at your personal information. That's going to end up causing some legislation, as they sat on it for months out of, and I quote, "fear of being regulated." Sounds like that's exactly what they need.

The translation story had a 16-page technical document I'm not about to wade through entirely, but the Q&D version seems to be that they removed the program's ability to use English as a marker between languages. Say you wanted to go French to Hebrew: before, it would translate French to English, then English to Hebrew. Since the AI understood working with markers, it started creating its own symbols for concepts so it could do direct translations. So if it knew that 'faim' meant 'hungry', it created a symbol for itself, so that it had an index of concepts for direct translations. It literally just created a substitute for using English.
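Here's a toy sketch of that idea (my own guess at the shape of it, not Google's actual system, and the word lists are just placeholders): each word maps to an internal concept symbol, and translating is just looking that symbol up in the target language, so English never has to be in the loop:

```python
# Made-up "concept index": one internal symbol per concept, with the surface
# word for that concept in each language the system knows.
CONCEPTS = {
    "<C1>": {"fr": "faim",   "de": "Hunger", "en": "hungry"},
    "<C2>": {"fr": "manger", "de": "essen",  "en": "eat"},
}

def to_concept(word, lang):
    # Which internal symbol does this word stand for?
    for symbol, forms in CONCEPTS.items():
        if forms.get(lang) == word:
            return symbol
    return None

def translate(word, src, dst):
    # Source word -> internal symbol -> target word, with no English pivot step.
    symbol = to_concept(word, src)
    return CONCEPTS[symbol].get(dst) if symbol else None

print(translate("faim", "fr", "de"))  # Hunger (French straight to German)
```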

And yes, AIs can solve problems. But they can't work out new tools. They can only solve problems with the set of information they have. It's no different from your pet opening the door to get at something. If you lock it, they aren't going to figure out how to use a key.
 
  • Like
Reactions: Lynne
I hope you're right, Ronin, because we don't need something else to worry about.