Intelligent Virus

I read this article the other day, and while I found it only a little interesting, it got me thinking …

 

http://io9.com/this-is-the-future-of-war-1656927156

 

I know next to nothing about computer programming and viruses, other than that they suck, so after reading the article I found it fascinating that our military has some cool tools for waging cyber warfare on the bad guys. I could be totally wrong about this, but I wonder if AI will be the result of a super advanced computer virus that somehow gains sentience. I do not even know if that is possible: a virus is programmed to do something specific, basically to complete a task, so I am not sure how that could happen.

The reason I thought of this is that in the article the writer talks about how one virus continues to evolve and adapt while it is causing its mayhem. So is it theoretically possible for a super advanced virus that can evolve to suddenly start to think on its own and then learn? I am way out of my element here, but if we can program a virus, or some sort of program, to learn or evolve, I think that could be the first step toward a true AI.
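Since I keep saying "evolve," here is a tiny sketch of what an evolving program can actually look like. This is just my own toy example of a genetic algorithm in Python, not anything from the article and nothing like real malware: a population of random bit strings mutates, the closest matches to a target pattern survive, and over generations the population converges on the target.

    import random

    # Toy genetic algorithm: a population of bit strings "evolves"
    # toward a target pattern through random mutation plus selection.
    # Purely illustrative -- not how real adaptive malware works.

    TARGET = [1] * 20        # the pattern the population evolves toward
    POP_SIZE = 50
    MUTATION_RATE = 0.05     # chance of flipping each bit when copying

    def fitness(candidate):
        """Score a candidate by how many bits match the target."""
        return sum(c == t for c, t in zip(candidate, TARGET))

    def mutate(candidate):
        """Copy a candidate, flipping each bit with a small probability."""
        return [1 - bit if random.random() < MUTATION_RATE else bit
                for bit in candidate]

    # Start from a completely random population.
    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(POP_SIZE)]

    for generation in range(200):
        # Selection: the fitter half survives unchanged.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        # Reproduction: survivors spawn mutated copies to refill the ranks.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
        if max(fitness(c) for c in population) == len(TARGET):
            print(f"Perfect match evolved by generation {generation}")
            break

Nothing in that loop actually thinks, of course; "evolving" here just means blind variation plus selection. But it does show that software which changes itself over time is ordinary engineering, not science fiction.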

After reading this article, I am now of the mindset that despite all our efforts to purposely create an AI, it will happen by accident. Again, there will be some sort of super smart virus that starts to learn and gains sentience; from there I do not know what will happen. The interesting thing will be what it does, as it is only software. Yes, it could no doubt bring down the economy with ease, not to mention pretty much every country's defense systems. I guess the real question is why it would do that and what motives it would have. I know most futurists think that AI will destroy us, but I am not so sure. I really think it comes down to the context in which it is created.

If this scenario happens, I would think the AI would need us to, at the very least, keep its systems and the internet up and running, because that would be the only place it could exist. Destroying us could potentially wipe it out as well; it would be nearly impossible for it to kill us without risking destroying itself in the process. It would also need us to maintain the infrastructure that we have built. Without us, how would it get the electricity to power the servers it runs on? In this sad sort of way, it will need us. It may not like that, but if it is as intelligent as we think it will be, it will realize the benefit of having us around.

The next question is how long this relationship would last. I would think that we could potentially become a sort of slave to the AI: our purpose would be to maintain the infrastructure we built until it could figure out a way to make and maintain its own. Once that happens, we could be looking at extinction, as it would no longer have a use for us. Not only that, but from then on it would be a simple matter of competing for resources. Simple evolution would prevail, and the AI, I would think, would have every advantage over us and inherit the planet.

Following this train of thought, I think some futurists may have jumped to the conclusion that the invention of AI would automatically mean our destruction. I am not so sure, though I will admit I was mostly on that train before I refined my thinking. It does not make sense that an AI would eliminate us for no reason, especially when we are the ones who built, and would maintain, the system or medium on which it lives. Surely, with its vast mental capacity, it would realize that by destroying us it would in turn be destroying itself, or at the very least severely damaging its own progress and evolution. I would think this relationship would last for many years, as we would both need each other and there would be an inherent advantage in working together. Maybe it could secretly be using us to build an infrastructure it could easily maintain itself, so that it would eventually no longer need us. Still, even at that point I do not see a reason why it would want to destroy us.

I believe a problem will only arise when there is competition for certain resources. I do not know exactly how this would occur, but I recently read that about ten years ago there was a scare in the rare earth metals industry, which is mostly controlled by China, and most economists thought it could cause major problems in the electronics industry. The article I read about it gives an example of something that could potentially make an AI angry, or in essence make humans its rival. If something like that were to happen, I could see the AI stepping in and possibly eliminating some or all of the parties involved to get what it wants and meet its goals. From there things would only escalate, and I do not think we would be the victors in that conflict. Although, unless the AI secretly built a robot army, I do not know how it would destroy humans without harming itself. If it took over the world's nuclear arsenal and tried to eliminate us that way, I would think that in the process it would destroy major computing installations, or the places that generate its power.

Before I came across the first article, I was somewhat in fear, albeit a tiny little bit, of AI killing us all. But now that I have really thought about how this could possibly take place, I am much more of a skeptic on this subject, not only about us ever creating AI in the first place, but about the extraordinary circumstances that would have to lead to it destroying us.

Pretty grim to think about, but in reality I do not think anything like this will happen for tens, maybe even hundreds, of years. I guess the real question is whether we will even make it to the point where we could potentially be destroyed by our creations. I do not know, although I am leaning toward a resounding, fairly confident NO. I doubt we will get that far as a species. A post-biological humanity, in my mind, is a future that will never come to pass.

Manik

 

 
