The AI Condition

Many of my posts have touched on AI in one way or another, and I certainly have opinions about how it could play out. I do think AI is possible, but despite what Musk or Kurzweil say, I think the truth lies somewhere in between.

The reason I mention those two individuals is that they are both smarter than me, yet they sit at opposite ends of the spectrum. Musk is convinced that if we create AI it will destroy us, and he seems extremely sure of that. Then you have Kurzweil, who thinks AI will be the savior of the human race. As I said above, and as with most things, I think the truth lies somewhere in the middle.

Personally, I do not think AI will destroy us; that just does not make sense to me. I also believe that the kind of AI we have seen in so many television shows and movies will not come to pass anytime soon. I do not think that type of AI will come online for at least another 100 years. That is just my belief, and I do not have much to back it up other than this: when it comes to technology, you always have people who say a breakthrough is just around the corner, and others who say it is impossible or centuries away. Kurzweil claims that we will have real AI within the next 40 years or so. I am not so sure about that prediction, which is why I prefer my estimate of 100 years.

So with all that being said, what am I jabbering about? I recently came across an idea in a comic I was reading: a sort of AI Condition, the machine equivalent of the Human Condition. For us, the Human Condition is basically the stuff that makes you and me human: our daily struggle to make our lives more enjoyable, all while knowing that one day we will die. That only scratches the surface of the Human Condition, but it raises the question: what would the AI Condition consist of?

To explore this point I more or less have to contradict one of my own beliefs about AI. I believe AI will never be like what appears in Ex Machina or Chappie or any number of other movies and books, and I honestly do not know if that type of AI will ever be real. However, I am also of the mindset that Moore's Law will hold, and that such an AI could eventually become possible. Technology is the great unknown in any argument like this. So while I have my beliefs, technology is the ultimate equalizer that could upend all our predictions.

So let's suppose we can create an AI that is able to learn and could potentially be human-like in the way it interacts with us. One thing I have somewhat overlooked when discussing AI is the ethics of the whole situation. I usually do not care much about ethics when it comes to technology, but if we truly can create an AI, should we? I had never really thought about this; I was simply of the mindset that yes, we should do it regardless of the fear that it will destroy us. When I talk about the ethics of AI here, I am not talking about whether we should avoid creating something that could destroy us. I am talking about what creation would mean for the psyche of the AI itself.

The reason I am asking is more philosophical, or perhaps behavioral. If we create an AI, it will be the only one of its kind, as far as we know, and will have nothing else to relate to or bond with. We can teach it human ways and how we interact with one another, but at the end of the day it is not human. I think it will know this and will really struggle with what it is, much like an adolescent human. Adolescence is the period when humans try to find their place in the world and figure out what is and is not acceptable in behavior and society. Imagine being the only one of your kind, with no real friends to relate to; the only "friends" you had were not even the same species as you. In fact, your "friends" would be flesh and blood while you are made of metal, and they are alive while you are not (though that last point could be argued). I think a true AI would struggle with these ideas, and while we imagine AI saving us, we could instead create a severely depressed, incredibly smart machine. That is something no one has really talked about, at least that I have seen.

I will reiterate: if we are able to create a true AI that can think, feel, understand, and express human emotions, among many other things, then we also have to consider that it could get depressed. If it becomes depressed, what then? Do we put it out of its misery? Perhaps, being super intelligent, it could work through these complex questions more quickly than we do, but maybe not. What would we do if we created a suicidal AI, one that could not understand its purpose (not that humans have one) and in the end just wanted to kill itself?

For me, these are conversations worth having, but in all honesty, for the time being it is mostly a moot argument. A real, true AI would have to actually be in the works first, so while I think this is an important conversation, it is perhaps 50 or so years too early. Oh well, it's never too early to start thinking about the future.

Manik
