A virus is a type of malware (sorry, I'm quite pedantic about that kinda stuff in my line of work lol). Although your concept is sound, creating such malware may not be practical. The way AI improves itself is by learning: it has to undergo certain processes or experiences and store the data it captures each time.
From that data it can improve itself and perform better. The security world is constantly kept on its toes by malware (especially after the WannaCry outbreak). As soon as a new piece of malware is released, security companies create signatures/definitions of the malicious files, which they push out to everyone using their antivirus (that's why you should always keep your AV up-to-date), and the AV uses these definitions to detect the malware on computers and remediate accordingly. These definitions are updated by their respective vendors on a daily basis.
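To give a very rough idea of how signature matching works, here's a minimal sketch of the simplest form, a file-hash lookup (the `known_bad` set and `scan` function are invented for illustration; real AV definitions are far richer, using byte patterns, heuristics, and behavioral rules, not just whole-file hashes):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Compute the SHA-256 digest of a blob of file content."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical definitions database, standing in for the signatures
# that vendors push out to their users on a daily basis.
known_bad = {sha256_of(b"pretend this is a malicious payload")}

def scan(data: bytes) -> str:
    """Return a verdict for a blob of file content."""
    return "quarantine" if sha256_of(data) in known_bad else "clean"
```

Note that changing even a single byte of the payload produces a completely different hash and defeats a pure hash lookup, which is part of why vendors layer on heuristics and ship updated definitions so frequently.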
The other thing is that if the malware doesn't rely on human error (spam emails with malicious attachments, malicious ads in web browsers, etc.), then it has to rely on vulnerabilities in the computer system to achieve its goal. If a computer and its software are patched regularly and kept up-to-date, those vulnerabilities are fixed and can't be exploited, making the malware basically useless.
Those factors often hinder malware, and would therefore hinder the malware's ability to learn and send back the data it gathered. If the AV picked up the malware, it most likely quarantined and/or removed it; either way, that prevents it from running and communicating with its command-and-control (C&C) server, which hackers use to remotely control the malware. If the computer is patched, then the vulnerability it needs to exploit no longer exists, so the malware can't execute and therefore can't send useful data back to its C&C server. And if the system is patched and the computer has an up-to-date AV, then a malicious email attachment most likely won't execute properly and the malware still fails.
I'm not saying AI-based malware is impossible; I'm sure in the far future it may eventually become a reality. But for now, the closest thing to AI-based malware is the hacker learning from news articles and AV advancements, and exploiting vulnerabilities in an AI system to tamper with its learned data and make the system behave maliciously. There have also been theories that vulnerable AI systems may one day be used to actually assist malware with infecting systems, but so far there hasn't been any real proof of concept for that theory.