Technology AI dangerous as Nukes!

Debi · Owner/Admin · Staff
Joined: Sep 16, 2013 · Messages: 242,074 · Reaction score: 235,492 · Points: 315
Location: South of Indy
http://www.dailymail.co.uk/sciencet...oneer-warns-smart-computers-doom-mankind.html

'Artificial Intelligence is as dangerous as NUCLEAR WEAPONS': AI pioneer warns smart computers could doom mankind
  • Expert warns advances in AI mirror research that led to nuclear weapons
  • He says AI systems could have objectives misaligned with human values
  • Companies and the military could allow this to get a technological edge
  • He urges the AI community to put human values at the centre of their work


GOOGLE SETS UP AI ETHICS BOARD TO CURB THE RISE OF THE ROBOTS
Google has set up an ethics board to oversee its work in artificial intelligence.

The search giant has recently bought several robotics companies, along with DeepMind, a British firm creating software that tries to help computers think like humans.

One of its founders warned artificial intelligence is 'number one risk for this century,' and believes it could play a part in human extinction.

'Eventually, I think human extinction will probably occur, and technology will likely play a part in this,' DeepMind's Shane Legg said in a recent interview.

Among all forms of technology that could wipe out the human species, he singled out artificial intelligence, or AI, as the 'number 1 risk for this century.'

The ethics board, revealed by the website The Information, is intended to ensure the projects are not abused.

Neuroscientist Demis Hassabis, 37, founded DeepMind two years ago with the aim of trying to help computers think like humans.
__________________________________________________________________________________

SEE??? Other people believe my warnings!!! Keep thinking Terminator!
 
My grandpa already thinks this has happened because of how many jobs have become automated. What sent him over the edge was robotic self checkout.
 
Which continues to send me over the edge as well! I hate those machines and they truly hate me.....!!!
 
He's not Amish, but he might as well be. He likes air conditioning and needs light to read, enjoys his car & the History channel- but has no cell phone no computer of any kind. Hasn't ever & won't even try. He gets mad at credit cards & anytime anyone asks for his email.
 
OK, I don't do the cell phone thing but for entirely different reasons. I rarely need the use of one, and I hate the darn thing ringing all the time. If I have need of talking to someone, I shall pick up the landline.
 
Funny story. I went to Wally World one night to get a cake. It was marked $6.99 on the top. I took it to one of those machines and ran it under the scanner, and it came up $12.99. I called one of the kids over to tell him that was the wrong price. He agreed, and said it had a sticker on the bottom too and I had to scan the sticker on top. I asked him how I was supposed to do that, and he gave me a stupid grin like I was an idiot and said, "Turn it over." It was a birthday cake with nice buttercream icing. I looked at him for a second or two so the neurons could fire in sequence this time... no luck. I looked over at my wife and she knew what I was going to do. I said to the kid, "Oh yeah, turn it over." I flipped that puppy over and moved it from side to side like I was doing a shake and bake. It scanned the correct price and I flipped it back over. What a mess. He actually said, "See, the correct price." My wife was just getting back from the bakery with the same cake, so I stuck that one in a bag and handed the mess to the kid. He said he was calling the manager. I told him to tell his manager what happened and everything would be OK. The manager ran outside to apologize for the mess. I will give the kid credit for being honest.
 
LOL!!! OMG! That is a GREAT example of today's kids! LOVED the way you handled that!!!
 
So what we have here is a demonstration that the future is our greatest danger.

Imagine a nice helpful C3PO - "I'm sorry sir, let me get that for you sir!"
 
Definition of terms aside (AI covers quite a few fields - LAI (Limited AI), of the sort of helper agents that Google users have access to; AI, which is not necessarily sapient or sentient; AGI (Artificial General Intelligence), which is the human-equivalent AI that people usually think of when they hear the term; some other variant of software-based intelligence which may not be recognizably human-like in any way)... nobody really knows if AGI is even possible. We do not yet have a functional, working definition of the term 'consciousness', nor does it appear that we will any time soon. LAI by definition is highly domain-specific and getting it to misbehave is tricky. Moreover, most of the time it misbehaves within the context of its functional domain.

AI software is similarly constrained in what it can do. There is a certain amount of software engineering involved, this is true, but there is also a certain amount of training involved that modifies the working parameters of the software before it is frozen into its current state (viz., the work Syntience does with artificial intelligence software agents). Trying to get such an AI construct to lash out would probably result in a highly constrained tantrum (your smart refrigerator could at worst run your power bill up by freezing everything solid, or shut off and let everything spoil). This is not to say that there are no edge cases (a smart pacemaker causing cardiac arrest), only that the software is constrained by what it knows how to do and the peripherals interfaced with it.
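
That "constrained tantrum" idea can be sketched in a few lines of toy Python (my own illustration, not anything from the article, and the "smart fridge" actions are made up): an agent whose action set is fixed at construction simply has no way to reach behaviours outside its domain.

```python
# Toy sketch: a domain-constrained agent. Its action space is frozen
# when it is built, so it can only misbehave *within* that domain.

class ConstrainedAgent:
    """An agent that can only perform actions it was built with."""

    def __init__(self, allowed_actions):
        # Copy the action table; nothing outside it is reachable.
        self._actions = dict(allowed_actions)

    def act(self, name, *args):
        if name not in self._actions:
            raise PermissionError(f"action {name!r} is outside this agent's domain")
        return self._actions[name](*args)

# A hypothetical "smart fridge": at worst it can over-cool or shut off.
fridge = ConstrainedAgent({
    "set_temperature": lambda celsius: max(-20, min(10, celsius)),  # clamp to hardware range
    "power_off": lambda: "off",
})

print(fridge.act("set_temperature", -100))  # clamped to -20: the worst-case tantrum
try:
    fridge.act("launch_missiles")           # not in its domain, so it simply cannot
except PermissionError as e:
    print(e)
```

Even a deliberately "malicious" caller gets at most the fridge's worst-case behaviour, which is the paragraph's point about peripherals and know-how bounding the damage.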

As for AGI deciding to attack the human race (assuming that it could be created), humans cannot even treat each other with kindness and decency on the Net most of the time, so it would stand to reason that the first AGI exposed to the Net would probably be trolled so hard that it might fly off the handle. Could anyone really blame it?
 
I thought I just saw you dissolve in a shower of multi-colored pixel dust?