Saturday, May 23, 2009

The Measure of Life

This is a paper I wrote for my Science, Technology, and Values course as part of my undergraduate degree from Rochester Institute of Technology.

Being a computer engineering student and a big fan of Star Trek, I am fascinated by the character Lt. Commander Data, who is an android. In several "Star Trek: The Next Generation" episodes, more than I can count, Data expresses a desire to become more human. In one particular episode, "The Measure of a Man," Data's cooperation is sought so that he can take part in an experiment that would involve taking him apart for study. Some guy shows up on the Enterprise with transfer orders ready for Commander Data, so that Data can be brought to his lab and shut down for study and experimentation, a process that could cause Data much harm, especially since this guy does not fully understand the work of Dr. Soong, Data's creator. Needless to say, Data does not want to participate and decides to resign so that he may avoid the transfer of duty; at this point his rights are questioned and he is put on trial to determine whether he is the property of Starfleet or an individual with his own right to choose. This raises several very important questions. What is life? What defines a sentient being? On the show, a sentient being was defined as one with the following three qualities: intelligence, self-awareness, and consciousness. Under that definition, it would be very difficult to distinguish between human intelligence and artificial intelligence. So do artificially intelligent systems have rights? As our technological abilities broaden, this may soon be a necessary topic of discussion.

In one way or another, all life is created. That cannot be disputed. Children are born every day as the progeny of their parents, and are therefore created by their parents. Seeds are planted and flowers grow; plants may be considered alive, but they are not considered sentient beings because they are not considered intelligent and conscious. But what about machines? Through elaborate and extensive programming, to say the least, artificially intelligent machines can display intelligence far surpassing that of mere mortals; Data demonstrates that in just about every episode. So obviously intelligence cannot be the issue here.

But what is the issue here? The issue is deciding where to draw the line on life. If I were to design a machine that could think for itself, reason, and understand logic, would that machine be a life form? I don't think so. Several years ago I wrote a program that played checkers. The computer did have a "thinking" algorithm in which it "looked" at the game board, "thought" about the consequences of moves, and then "decided" which one was best. The reason most of the verbs in that sentence are in quotes is that the computer wasn't actually thinking; it was simply going through a series of statements that evaluated the moves and made the best one based on a set of rules coded into the program. This is not artificial intelligence, it is simply programming. Artificial intelligence requires that knowledge and understanding be acquired through means other than what is hard-coded into the programming. In order for a machine to be artificially intelligent, it has to learn.
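To make the distinction concrete, here is a minimal sketch (in Python) of what that kind of hard-coded move evaluation might look like. The names, rules, and weights are my own illustration, not the actual program I wrote; the point is that every "decision" traces back to rules a programmer wrote by hand.

# Illustrative sketch of rule-based move evaluation; the Move fields,
# rules, and weights are assumptions for the example, not the original program.
from dataclasses import dataclass, field

@dataclass
class Move:
    captures: list = field(default_factory=list)  # opponent pieces this move would take
    reaches_king_row: bool = False                 # would this move crown a king?
    leaves_piece_exposed: bool = False             # would it leave a piece open to capture?

def score_move(move):
    """Rate a candidate move using fixed, hand-written rules."""
    score = 10 * len(move.captures)   # rule: capturing opponent pieces is good
    if move.reaches_king_row:
        score += 5                    # rule: promoting to a king is good
    if move.leaves_piece_exposed:
        score -= 8                    # rule: leaving a piece open to capture is bad
    return score

def choose_move(legal_moves):
    """'Decide' on a move by picking the highest-scoring one.

    Nothing is learned here: the program can only apply the rules it was
    given, which is why this is programming rather than artificial intelligence.
    """
    return max(legal_moves, key=score_move)

A learning program, by contrast, would adjust those weights, or discover new rules entirely, from the outcomes of the games it plays, rather than having everything fixed in advance by the programmer.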

Learning is the key. Even though I have not personally taken part in any artificial intelligence projects, that doesn't mean they are not in abundance. AI seems to be the wave of the future, but suppose this: let's say a group of engineers were working on an AI project and one day made a huge discovery; they noticed that their project, HAL (for lack of a more original name), learned something today! Now, is it right for them to flip the switch at the end of the day and erase HAL's memory? Does HAL have a right to life? Does HAL have a right to retain his memory and decide for himself what he wants to do with that memory? I say sure: if HAL is intelligent enough to understand the consequences of the engineers' actions when they flip the switch, then he should have the right to decide whether that switch gets flipped. This question was raised during Data's trial. The Captain, acting as Data's defense counsel, asked Data whether he knew what the potential consequences of the trial were. Data emphatically answered that he was aware that his life was on the line. He understood that the data in his memory could be stored elsewhere and restored if his memory were somehow erased. But his fear was that he would lose his experiences, which are what define his life, not just the information stored in his memory banks. To Data, this is what death would be like: having to start over and gain new experiences. I would say that this demonstrates self-awareness.

Part of the Enterprise's continuing mission is to "seek out new life and new civilizations." If we someday had the technological capability to create an android such as Data, and we created several thousand of them, would we have created a new civilization, a new race? I would say so. What is a race, if nothing more than a collection of beings with similar characteristics? And if this new race, with self-awareness and intelligence, were not allowed to think for itself and decide its own fate, what would we have created? It seems to me that we would have created a race of slaves, and, correct me if I'm wrong, didn't Abe Lincoln sign something back in the 1860s that prohibited that in this country? So what do we do?

Should we just crank out these androids and set them loose? I mean, why create an android if you can't get anything out of it because he's got to have a life of his own? Aha! Back to the "creating life" issue. My parents created me, and now I can pretty much do what I want; my life is my own. But that wasn't always the case. I can remember times when I absolutely had to cut the lawn or I couldn't go swimming at John's house. Is that a form of slavery? No, it's not, because these were my parents. They created me, and I will someday be able to go off on my own and create my own children (with a lot of help from my better half), and then I'll have intelligent, self-aware beings around to somewhat do my bidding. So what does this have to do with androids? Well, how about this: you create an android and you get to keep him around for a while to make sure his circuits are not going to malfunction or his positronic matrix is stable, or whatever. Then he is free to go off on his own and join the military if he wants to. Basically, you raise your droid and then let him live his own life. Life, that's the real issue here: letting an intelligent, self-aware being live his own life. I may not approve of the lives of everyone around me, but I cannot deny that their lives exist, and besides, who am I to define the measure of life?
