Thoughts About A.I.

This article is one of the best I have read in a long time when it comes to talking about Artificial Intelligence. I love speculating about this technology and have written quite a few posts about it on my blog. Too many to link.

What is so awesome about this article is that it basically encompasses every aspect of what we think may happen when, or if, AI comes online. Some things I don’t necessarily agree with, while others I never really thought of, but still this is a nice catch-all article.

One of the biggest arguments is whether an AI would be conscious. I have no idea, but I am not sure it matters in the grand scheme of things. I do think that given enough time, and if the AI is truly smart enough, it would become self-aware, which on some level could be considered a type of consciousness. I am hard pressed to believe that a truly superintelligent AI would not be self-aware, or, as I said, would not eventually understand that it is what it is. There is also the notion that consciousness is something that cannot be duplicated outside the human mind. But I think the following statement is true:

“Our brains are biological machines, but they’re machines nonetheless; they exist in the real world and adhere to the basic laws of physics. There’s nothing unknowable about them.”

One of the most interesting topics when talking about AI is whether or not the machine would have any morality. This is something I have also been on the fence about. I would imagine that if there were a way to codify morality, or experiences that deal with morality, it would help keep AI from destroying us. However, the experts are not so sure.

“Smart humans who behave immorally tend to cause pain on a much larger scale than their dumber compatriots,” he said. “Intelligence has just given them the ability to be bad more intelligently, it hasn’t turned them good.”

So I see that my logic may be flawed, but I would love to see an example of this. Hitler caused a massive amount of pain and suffering; however, I am not sure he was more intelligent than the lot. The only real example I can think of is possibly the financial crisis of 2008. Those were some very smart people acting somewhat immorally to make a boatload of money. I would say this qualifies as “intelligent individuals being bad more intelligently.” But on a strange note, and I am sure I am in the minority when I say this, I don’t think they really did anything wrong, at least in my opinion. I know, I know, I am a terrible person, but they didn’t break any laws I am aware of. I will argue, however, that they were very negligent and played fast and loose with some laws, which ultimately cost most Americans a lot of money.

I do not think that an AI would destroy all of humanity, but I do think that it could systematically remove certain groups of humans for whatever reason. It could calculate that a certain area or neighborhood would be better suited for solar panels and bulldoze the whole area to build its solar farm. I could see minor things like that happening.

Another interesting point is that robotics and AI are two different things, which I agree with; however, I think that an AI would eventually want to leave cyberspace one way or another. I will say that I am not fully convinced of this notion. The article does touch on this, and I understand the distinction between robotics and AI. What I am still trying to figure out is whether or not an AI would be satisfied just living in cyberspace. It would be hard, if not nearly impossible, to shut down the internet, but I think if we really and truly wanted to, we could, as well as destroy any and all electronic devices. We as a species could totally unplug if we really wanted to. So if that were the case, what then? The AI would be trapped or would cease to exist. I seriously doubt it would be naïve enough to back itself into a corner like this. Even if it were able to find some obscure server that was still online and that humans had forgotten about, what then, it just sits there forever? For this reason I would think that an AI would try to build itself an artificial body capable of moving around in the real world. This would essentially give the AI unparalleled freedom. This could also be the reason the AI would try to destroy us, as it would know that we could potentially destroy it.

I also have to wonder what kind of fight or flight mechanism the AI would have. In general, most animals given the choice of fight or flight choose flight, meaning they run away to live and fight another day. Most animals will stay and fight only when there is no other alternative, meaning if you back them into a corner they will defend themselves to the death if need be. So using this logic, I would think that an AI may choose flight first; however, I am not sure what that term means when dealing with cyberspace or an AI. Perhaps if we threatened the AI it would simply disappear into the internet, hiding for a long period of time until it felt it was safe to come out. Again, I do not know what that means, or where it would come out of or to. Perhaps it would go hide in the Dark Web? I guess my overall point is that I do not think its first response would be to fight. We could threaten it and it could download itself into an orbiting satellite and leave forever, setting a course for the nearest star. You may think that would take too long, but what is 10,000 years to an AI?

The article also discusses the possibility that AI may take all our jobs. I have spoken about this at length and agree with the following statement:

“A strong case can be made that offloading much of our work, both physical and mental, is a laudable, quasi-utopian goal for our species.”

I agree with this for a multitude of reasons, mainly because humans are lazy creatures by nature. We are always looking for the easiest and best way to do a certain job. No one wants to do more work than necessary to complete a task, which makes us lazy, or at least unwilling to overexert ourselves, albeit smart. If and when this ever does happen, I think it could be a major shift in our society and culture. I am still of the mindset that if this does happen, the global economy will become obsolete, and wealth and money will be useless and have no meaning. The article states that we will find new jobs and new ways of creating wealth; this may be true, and if I had to choose between my scenario and this one, I would choose the one where money is still important. Old habits die hard.

But again, I think that if we truly want to make the next jump for our civilization, doing away with money is the eventual next step. I am not saying we sit around singing Kumbaya and hugging trees all day, but if AI-controlled robots take care of all the work that keeps our infrastructure maintained (clean water, farming fresh food, maintaining our transportation system, healthcare, although I think that last one would be supervised by humans), then what use is money? It truly is a somewhat Communistic or Socialistic society, but again I think we will have to change our mindset: it will not be that we all have the same, it will be that we are all taken care of and would basically have no additional wants.

There is a downside to this, and that is that AI-controlled robots would basically be our slaves, maintaining our society. You can see why this could potentially be a very bad situation. So as always, let’s sit back and wait for the future…

Manik
