At my current place of employment we have a variety of TVs, mostly tracking market prices and whatnot. One is tuned to MSNBC all day. I just happened to catch a headline that read something like "Elon Musk and Hawking fear AI." I have read multiple articles about both of them warning that this invention could be one of the last we ever make if we are not careful.
I am by no means an expert on the subject like the two gentlemen I mentioned above, but I would consider myself somewhat of an enthusiast. I do not know all the ins and outs of how AI could come into being or potentially destroy us. I think the bigger question is why. Why would a supremely intelligent computer program, one light years smarter than us, want to kill us?
At first glance that question seems dumb: of course it would want to kill us. It would see us as a threat and want us out of the way so it could do whatever it wanted unimpeded. That is definitely a possibility, but I do not think it is the most likely scenario. I honestly believe it depends on how the first AI is created. I have mentioned this before, but if it is strictly a computer program, all software, I think the odds of it destroying us are greatly diminished. It would live in cyberspace, and on the web if it got out, since I am sure it would be created in some remote location on a secure network. We humans maintain that system, and I think we could shut it down if we absolutely had to. So why would it destroy the ones that maintain the habitat it lives in? It could get angry or scared that we would pull the plug and end its life, but if it were contained in that secure system, it could do nothing. And if it escaped to the web, there again I do not think it would destroy us, because we maintain the web.
The other scenario I was thinking about would be if the software were put into an android body that could move around and, in essence, make duplicates of itself. If that were the case, I think we would have a major problem for multiple reasons. If the AI android could replicate itself, like a von Neumann probe, it could quickly build an army, or at least enough copies to do serious damage. But once again, I am not sure why it would want to destroy us. I think its first objective would be to secure a power source, depending on what and how much energy its body consumed. Once it did that, I suppose it would do whatever it wants; maybe its next objective would be to eliminate its competition, meaning those it would compete with for resources, i.e. humans.
Of those two scenarios, I think the first is far more likely. We have come a long way in the field of robotics, but a fully functional, self-contained android body is many years away, at least I think so. There could also very well be a combination of the two scenarios above. Imagine the AI software escapes its confines and gets out on the web. I think it would be smart enough to build a body for itself from parts it found on the internet. It could easily open a trading account, make money, and purchase the robotic parts it needed. It could also arrange for someone over the internet to assemble the parts fairly easily, I would think. Now I know this scenario is a little farfetched as well, but I think it is plausible, as the articles below will show.
While this is kind of funny, at least I think so, it is also real, as it actually happened. So I do not think it is that farfetched that a super artificial intelligence could accomplish the same feat. I suppose the hard part would be assembling all the components for its body, but I could easily see the AI creating a front and getting a team of scientists to do its dirty work. All things are possible with the internet!
Even factoring in all of this, I still do not understand why it would want to destroy us. I can see why we should proceed with caution down this road, but I cannot see why everyone thinks AI will turn on us and kill us. Then again, it would presumably be programmed with our mindset, and we humans are very scared and cautious creatures with a general fear of the unknown, and this area of science jumps right into the deep end of the unknown swimming pool. I also think the AI would be cautious and scared of us, but smart enough to see that destroying us may not be its best move.
Hawking and Musk are both infinitely smarter than I am, so I am not going to say they are wrong. I would love to talk with them in depth to understand why they think this way. I think it would also be interesting to ask them where they see our future going, even if we never manage to create AI. Below is an article I find very interesting, as it outlines the different possible futures humanity could face.
I do not buy that any of these are truly viable scenarios for the future of humanity, but they are still fun to think about.
I forgot to mention that while I was writing this post, I sent an email to Hawking to see what he had to say about this issue. After I posted, I received a response back from Hawking, or one of his assistants, I suppose. Below is the response:
Thank you for your email to Professor Hawking.
As you can imagine, Prof. Hawking receives many such every day. He very much regrets that due to the severe limitations he works under, and the enormous number of requests he receives, he is unable to compose a reply to every message, and we do not have the resources to deal with many of the specific scientific enquiries and theories we receive.
Please see the website http://www.hawking.org.uk for more information about Professor Hawking, his life and his work.
Technical Assistant to
Professor S W Hawking CH CBE FRS
So, not quite the response I was after, but at least I got something back. Another one, Brian Greene, never responded to the emails I sent him! In the words of Alex Trebek, "Boo hiss!" I'm kidding; I know both of these guys are incredibly busy, and neither has time for a nobody like me. Still, though, boo hiss!