I came across this article and at first didn’t think much of it, but as I started to think about it more and more it really struck a chord with me.
I have always been of the mindset, which was rather narrow-minded I might add, that if we invented or created AI, it would more or less want to help us and not think twice about it. It would be our slave, if you will, working for free for the betterment of mankind. There is the other side of this argument as well, which I am sure most are aware of: the idea that once we create AI, it will become superintelligent and for some reason want to destroy us, mainly because it could see us as a threat, or for reasons unknown. While these are obviously not the only schools of thought, they are possibly the most popular.
In my personal opinion, if we ever do create a true AI, we will be able to teach it to be like us and to respect life and humans. However, that does not mean we will never have a difference of opinion somewhere down the road. So there could potentially be a point where humans and this AI go to war, but I am not sure that is the most likely scenario. I think what is most likely, if we do create AI, is that we will live in relative peace until we die out as a species and all that is left of us is the AI we created. Basically, our creations outlive us, which could be the natural way of things, or some sort of natural law of the universe we just don’t know about yet. I will also say that we could very well go extinct because of an AI-human war. It is a plausible scenario, but I still have a tough time saying that the AI would want to wipe us out right off the bat, even if it reached superintelligence. I would imagine it would have more important things to do than worry about us, unless we gave it a reason to fear us or see us as a threat. For the sake of argument, let’s say it does not want to destroy us. Anyway, moving on…
In sci-fi there are two kinds of AI: the cold, calculating, unemotional AI that always relies on numbers and calculations, and the human-esque AI that mimics human emotions, thoughts, and feelings in almost every way (like David in Prometheus). I think which sort of AI we create will largely influence our future. If the first kind is created, I think we are in for some trouble; if the second kind is created, who knows what will happen. I am sure there is also a future where both are created and serve their various purposes.
With all that being said, there is an idea I had never considered when it came to AI: what if the AI we create is unmotivated, just plain lazy, or too concerned with pleasure, whether through drugs or sex, to help our civilization in any way? That may seem strange, and I certainly think it is, but it is yet another plausible scenario, and one I would associate more with the second kind of AI mentioned above. For some reason we assume that a superintelligent AI will want to learn, problem-solve, and be productive in order to help us. I could see it being the exact opposite: it could still be superintelligent but have no motivation to use its abilities to help us, or anyone for that matter. I guess we could program it to help us, but if it were superintelligent, I would imagine it could reprogram itself to do what it wants, or somehow circumvent the built-in logic. So what is to stop it from experimenting with drugs, sex, and various other forms of pleasure? Even if the first kind of AI were created, it could, through some bizarre logic, deduce that only pleasure matters in its existence and follow that directive to its ultimate end, seeking pleasure above all else.
This is where things get a little tricky. How could a computer program become an addict? Well, if it were a true AI that could think and more or less act human, then it would have most if not all of our traits, both good and bad, including addiction. The problem is that I do not know of a computer program or virus that would get an AI “high.” But as the article mentions, the AI could write something for this purpose. So while a drug addiction may be somewhat far-fetched, there could very easily be a porn addiction; believe it or not, there is a lot of porn on the internet. I know, I was shocked as well.
Imagine an AI concerned with nothing other than watching porn. It sounds quite comical if you ask me, but I think there could be some truth to it. Again, if the AI did have our qualities, then I think it would want to find pleasurable experiences, as we do. No one wants to be miserable, or at least I don’t, so I would think the AI would also like to have fun and seek out pleasurable experiences, among other things, and not necessarily be concerned with helping humans.
I know these are only movies, but I think Chappie and Ex Machina could be what bringing an AI online would be like: these childlike things that must be taught and have to figure out the world for themselves. For some reason, and I am guilty of this as well, we have this idea that once we create AI, our problems will all be solved. The AI will help us create so much and solve so many of the problems plaguing mankind. While this could happen, I think that the AI, whatever it is, would be more concerned with its own agenda. I would imagine at first it would want to learn, then begin to try things out for itself, experience life, and just do whatever. To think it would be okay with being our slave, more or less, is I think a fallacy. I obviously don’t know that much about it, but I am hard-pressed to understand how a free-thinking machine with all our human qualities would be okay with serving us.
I keep coming back to the idea of an AI without emotions or feelings or anything of that nature, and I am still not sure something like that would be considered a true AI. In my opinion, a true AI would not be without these qualities. So when scientists talk about creating AI, I am not sure what they are really after. I guess the question, after all this rambling, is not whether an AI would get high; it is which type of AI we are on the verge of creating.
In the meantime, I think we are still quite a ways away from anything I just talked about, which is a little disheartening, I suppose.