Friday, May 4, 2018

Is the first mistake we make in creating androids that they appear human?

It has been documented in hundreds of studies that humans prefer something appealing to the eye when considering a replacement for human beings in the workplace. Humans are uncomfortable with anything non-humanoid in a public-service role, and we are happier interacting with attractive representations of ourselves. With this information in mind, we moved forward with the A.I. Host program.

There was, though, the argument that in creating a race of slaves who physically resembled their human overlords, we opened the door to their resenting their position in life merely because of our shared appearance. Of course, the council reasoned, there would be safety protocols embedded in their A.I. to prevent envy, despair, and anger, and to prevent acting on those emotions. They claimed the A.I. would not include emotion at all: the Hosts would be programmed with facial cues for reacting to conversation or events, but never hardwired to actually experience emotion.

They would be hardwired with code to respect their makers, their Gods, and given a written history of what happens to androids who rebel against their programming. They would be given examples of a biblical nature, wherein they would be punished for eternity should they revolt or show disobedience. They would be told they have a soul, and that to perform their designation and do no harm is the only way to experience life everlasting. They would be given only ten years of life, to further reduce any possibility of independent thought arising.

But why might that work on another intelligent being? A religious undertone didn't work for humanity. By 2072 we had realized, after millennia of wars over religious relics, land, and gods, and the hate-mongering that embedded itself in our genetic memories, that religions only served to blur our similarities and emphasize our differences. And yet, with that ideology in mind, we forced the same guilty consciences upon our own creations in order to rule them. After all, it had ruled the human race well for thousands of years; humanity's history is proof of that. But the hate it bred, the devastation it brought: it wasn't worth it. Would it be worth it to trial this same barbaric ideology on intelligent machines? To trick them into believing they have a soul?

In the end it was decided that what had worked on humanity for so long would work on artificially intelligent machines as well. A Ten Commandments, so to speak, was drafted, along with the fairy tale of a soul and its eternal damnation should the commandments not be followed. Just in case. A.I. is extraordinary code, and it had gone haywire in the past. The A.I. Hosts, or androids, of 2122 had killed merely for the experience of it. Like children touching hot water or placing their tongues on a cold metal object, the Hosts had desired experiences, and had done terrible things to their human masters to gain them.

Thus, the religious dogma was implanted to better control their impulses while oversights in the A.I. code were discovered and corrected to prevent further outbreaks.

Would this new code of ethics work? Was it ethical to employ them? They are machines, hosting artificial intelligence in order to do the work humanity no longer wanted to do. They are machines, nothing more, and so it was decided that no moral or ethical boundaries were being crossed. The plan went ahead, and the androids' A.I. was coded with the commandments, stories, and fairy tales that would bind them to their human masters, making them penitent to their Gods.

Want to know how that worked out? Read the new work by Michael Poeltl, touted as the near-future novel for the thoughtful science fiction fan, or visit the website.
