AI creates its own "child"

AI, along with quantum-based encryption, is the next revolution in technology and could have incredible outcomes. Antiviruses are starting to use it, and at the enterprise level, AI-based antiviruses are among the first to consistently maintain 100% protection compliance on workstations (desktops and laptops). They also keep track of malware detections and learn from the actions taken, eventually working out for themselves which files are actually malicious and which are not. Most traditional AVs struggle to do any of this unless explicitly told to.
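To make that concrete, here's a minimal sketch of the idea: a classifier trained on past analyst verdicts that scores new files on its own. Everything here (the feature set, the numbers, the scikit-learn model choice) is my own illustration, not how any particular vendor's product actually works.

```python
# A minimal sketch, assuming made-up per-file features and toy training
# data: [size_kb, entropy, imports_count, is_signed].
from sklearn.ensemble import RandomForestClassifier

X_train = [
    [120, 7.8, 3, 0],   # small, high-entropy, unsigned -> labelled malicious
    [450, 7.9, 2, 0],   # similar profile               -> labelled malicious
    [800, 5.1, 40, 1],  # signed, normal entropy        -> labelled benign
    [300, 4.8, 25, 1],  # signed, normal entropy        -> labelled benign
]
y_train = [1, 1, 0, 0]  # 1 = malicious, 0 = benign (past analyst verdicts)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a never-before-seen file instead of waiting for a signature.
new_file = [[200, 7.7, 4, 0]]
print(model.predict_proba(new_file))  # leans heavily toward "malicious"
```

The point is that the model generalises from what analysts confirmed before, so a brand-new file with a suspicious profile gets flagged without anyone writing a signature for it first.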

But the potential AI has to help the world is remarkable, with prospects in the medical field, air travel, process automation, and road safety, just to name a few. However, my concern is how its vulnerabilities can be exploited; depending on the AI technology involved, that could create serious risks in exactly those fields.

But Google's AI is not the first. I recall seeing a documentary on robotics in which the Japanese developed a robot that could analyze objects: whatever you told it an object was, it would repeat that name every time it saw something of the same shape. Not as sophisticated as Google's AI, I suppose, but still. I also think calling this an AI "child" misleads readers. By the sounds of it, this is a computer that can identify objects in images and videos, which is something Google Photos has been doing for some time. If they reported more details on the technology they developed, it would be a bit clearer. To me it seems ambiguous as to what they have actually created and how it's unique or a breakthrough.
 
I agree with you, Vibe-Feeler. I think a lot of AI can be used for good reasons which could save lives, increase productivity, etc. The problem comes when, not if, someone exploits the AI for some nefarious reason(s), whether for financial gain or simply for their own amusement. Unfortunately, the more power and control given over to AI, the more serious and dangerous the implications will be if it is hacked by the bad guys.
 
AI-based antiviruses are among the first to consistently maintain 100% protection compliance on workstations (desktops and laptops). They also keep track of malware detections and learn from the actions taken, eventually working out for themselves which files are actually malicious and which are not.


until someone creates the AI-based virus/malware...
 
Or until it becomes Skynet and decides we humans are insects and exterminates us. (Honestly, did no one watch that movie??? :eek:)

It doesn't have to become Skynet; the military communications satellites for NATO and allied forces have been named Skynet since the late '60s...
it just has to become self-aware...
 
until someone creates the AI-based virus/malware...

A virus is malware (sorry, I'm quite pedantic about that kinda stuff in my line of work, lol). Although your concept is sound, creating such malware may not be practical. The way AI improves itself is by learning: it has to undergo certain processes or experiences and store the data captured each time.

From that data it can improve itself and perform better. The security world is constantly kept on its toes by malware (especially after the WannaCry outbreak). As soon as new malware is released, security companies create signatures/definitions of the malicious files, which they then push out to everyone using their antivirus (that's why you should always keep your AV up to date), and the AV uses these definitions to detect the malware on computers and remediate accordingly. These definitions are updated by their respective vendors on a daily basis.
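As a toy picture of how definition-based detection works: hash the file and look the hash up in the vendor's list of known-bad hashes. The entry below is a placeholder, not a real signature, and real engines match on much more than whole-file hashes.

```python
# Toy definition-based detection, assuming a vendor-shipped set of
# known-bad SHA-256 hashes. The entry below is a placeholder, not a
# real malware signature.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def is_known_malware(path: str) -> bool:
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest in KNOWN_BAD_SHA256

# Daily definition updates just mean KNOWN_BAD_SHA256 keeps growing,
# which is why an out-of-date AV misses the newest malware.
```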

The other thing is that if the malware does not rely on human error (spam emails with malicious attachments, malicious ads in web browsers, etc.), then it has to rely on vulnerabilities in the computer system to achieve its goal. If a computer and its software are patched regularly and kept up to date, those vulnerabilities are fixed and cannot be exploited, making the malware basically useless.

Those factors often hinder malware, and would therefore also hinder an AI-based malware's ability to learn and send back the data it gathered. If the AV picked up the malware, it most likely quarantined and/or removed it, which either way prevents it from running and communicating with its command-and-control (C&C) server, which hackers use to remotely control the malware. If the computer is patched, the vulnerability it needed to exploit is no longer exploitable, the malware will not be able to execute, and it therefore cannot send useful data back to its C&C server. And if the system is patched and the computer has an up-to-date AV, then most likely a malicious email attachment will not execute properly and the malware still fails.
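For the quarantine part, the idea is roughly this: once a file is flagged, move it somewhere it can't execute from and strip its permissions, so it never gets to run or phone home. The paths here are hypothetical, just to show the shape of it.

```python
# Hypothetical quarantine step: move the flagged file out of place and
# strip all permissions so it can neither run nor reach its C&C server.
import os
import shutil
from pathlib import Path

QUARANTINE_DIR = Path("/var/quarantine")  # illustrative location

def quarantine(path: str) -> Path:
    QUARANTINE_DIR.mkdir(parents=True, exist_ok=True)
    dest = QUARANTINE_DIR / (Path(path).name + ".quarantined")
    shutil.move(path, str(dest))  # no longer where its loader expects it
    os.chmod(dest, 0o000)         # nobody can read or execute it now
    return dest
```

Real products also encrypt or neuter the quarantined file so it can be restored if it turns out to be a false positive, but the effect is the same: no execution, no C&C traffic.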

I'm not saying AI-based malware is impossible; I'm sure it may eventually become a reality. But for now, the closest thing to it is the hacker learning by studying news articles and AV advancements, and exploiting vulnerabilities in the AI system itself to tamper with the learned data and make the system behave maliciously. There have also been theories that vulnerable AI systems may one day be used to actually assist malware with infecting systems, but so far there has been no real proof of concept for that theory.
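To illustrate that "tamper with the learned data" scenario (usually called data poisoning), here's a toy example: an attacker who can inject mislabeled records into the training set can retrain the model into waving malware-shaped files through. All the numbers are made up.

```python
# Toy data-poisoning example: [entropy, is_signed] -> 1 = malicious.
from sklearn.linear_model import LogisticRegression

X = [[7.8, 0], [7.9, 0], [7.6, 0], [5.1, 1], [4.8, 1], [5.0, 1]]
y = [1, 1, 1, 0, 0, 0]                # honest analyst labels

clean = LogisticRegression().fit(X, y)

# Attacker slips in repeated malware-shaped samples labelled "benign".
X_poisoned = X + [[7.7, 0]] * 20
y_poisoned = y + [0] * 20

poisoned = LogisticRegression().fit(X_poisoned, y_poisoned)

sample = [[7.7, 0]]                   # a file matching the malicious profile
print(clean.predict(sample))          # [1] -> flagged
print(poisoned.predict(sample))       # [0] -> waved through as safe
```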
 
This is a lot of info for my non-tech mind, but I think I understand that we're OK for now? This is not a threat at the moment?
 
This is a lot of info for my non-tech mind, but I think I understand that we're OK for now? This is not a threat at the moment?
Yeah, we're okay for now. But I would be wary of AI for a while, until I know these systems have security built in and their vulnerabilities are addressed.
 