Recently I read a book about artificial intelligence, not so much the science behind it as the implications of what could or would happen if an AI were to come online. It was extremely interesting, and one of the talking points brought up in the book is: would an AI need a body to fully understand our reality? In other words, is there something about living in the physical world, having a body, that helps us understand reality? Or does it make no difference, so that an AI could emerge from a computer into a sort of android body and be fully familiar with the physical world, needing no prior experience to understand it? This is what is known as the Embodiment Debate.
I had not really thought about this problem until I read Our Final Invention, and I think it could be a legitimate argument. We have no idea whether it matters one way or the other, but if this is a real problem then I think AI may never understand its creators. There are a couple of schools of thought that I want to go over, and depending on which one(s) come to pass this may be a non-issue, or it could be a giant problem.
The first is that yes, an AI would need a body and the ability to move about in the real world to fully understand and experience it. This could very well be the case, although I am not fully convinced, as I will discuss in a minute. But I could also see how a body would help give an AI a better understanding of us and more experience of what we and our society are like. These are all things that I think would be difficult to program and would be better understood through real-world examples. Although this is easier said than done. We are a ways away from building AI, but I would say we are perhaps even farther away from creating some kind of true robotic body. I have seen some examples that prove we are heading in the right direction, but nothing that would give something the real, full experience of living in the physical world. That is key, and I think we are decades, maybe even longer, from that technology.
The other idea is that no “real world” experience would be needed; the AI could learn everything it needed from the internet or another database. I am not 100% convinced that this will be the case either, as there are just some things that need to be experienced, and I am also not sure an AI would be able to pick up on certain human traits just by watching us. As always, we have no idea one way or the other, but honestly I think these positions are too extreme, and neither will yield an AI that is more like us than not. Which I think is the overall goal: maybe not so much to create something just like us, but to get it to understand who and what we are. That is going to be the key, and I have a hard time imagining that it can be programmed.
I think the best way to approach this would be to create a simulated environment for the AI; that way it could potentially interact with real people and learn from them. This gets into a whole other can of worms, but let’s just assume for the sake of argument that the AI cannot figure out that it is in a simulation, and that the simulation is indistinguishable from the “real” world. Well, let me back up: let’s not say indistinguishable, let’s say it is a damn good simulation, pretty close to the “real” world. While in this simulation the programmers could throw all sorts of scenarios at the AI and try to help it understand humans and how we interact. Again, these things would be difficult to program: how do you program love, or emotional pain, or get the AI to understand the pain of a loved one dying? All of these issues could potentially be addressed with a simulation. Then, eventually, when the programmers feel the AI has learned enough, they could explain to it that it is in a simulated world. This is where things could get tricky, as it might draw on all its newly understood emotions and not want to leave. That would be a problem, I guess, but I am not sure what the issue would be with just leaving the AI there. Perhaps as it grew smarter it would realize on its own that it was not in the real world and long to experience the real one.
Creating a simulated environment may not be possible or even conducive to learning. Given our limited robotics technology, it could be more beneficial to create an AI, put it in a body, and slowly upgrade it as our technology improves; that way we would have more control over it at first, since many of its actions would be extremely limited. We could also slowly teach it about certain things and try to get it to understand the real world, much like teaching a child. Even if it were extremely intelligent, at this point it would be somewhat trapped, and because of the limited robotics technology it could not do much damage or escape. It would basically be a prisoner, which may not do much for gaining trust, but we have to start somewhere.
If I have to give a definite answer, I would say that having a body and being able to experience the real world can only help in getting an AI to understand us. I think on some level this could be one of the key components to preventing the AI catastrophe that so many people are afraid of. After all, seeing pictures of the Sistine Chapel is amazing, but walking into the chapel, looking up, and seeing it with your own eyes is absolutely breathtaking.