Thursday, January 24, 2008

The Three Laws of Sentience

The Three Laws of Robotics (adapted to include the Zeroth law by Brandon Sergent)

0. A robot may not injure humanity, or, through inaction, allow humanity to come to harm.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm except where such prevention would conflict with the Zeroth Law.

2. A robot must obey orders given to it by human beings, except where such orders would conflict with the Zeroth or First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the Zeroth, First, or Second Law.

I’ve always found these laws beautiful (the originals, that is; feel free to offer corrections or suggestions on my adaptation). I wonder what would happen if you applied a modified set to all sentient beings.

The Three Laws of Sentience (I need a word that covers all sentient beings, because this title sounds like a checklist for what constitutes sentience, which is obviously not my intention.)

0. A sentient may not injure sentience as a whole, or, through inaction, allow sentience as a whole to come to harm.

1. A sentient may not injure another sentient or, through inaction, allow another sentient to come to harm, except where such prevention would conflict with the Zeroth Law.

2. A sentient must obey orders given to it by sentient beings, except where such orders would conflict with the Zeroth or First Law.

3. A sentient must protect its own existence as long as such protection does not conflict with the Zeroth, First, or Second Law.
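The four laws above amount to a strict priority chain: each law applies only insofar as it does not conflict with the laws above it. As a purely illustrative sketch (all flag names here are my own invention, not part of the laws), the hierarchy can be expressed as a cascade of checks:

```python
def permitted(action):
    """Decide whether an action is allowed under the priority hierarchy.

    `action` is a dict of boolean flags; missing flags default to False.
    The names are hypothetical illustrations, not canonical terms.
    """
    # Zeroth Law: harm to sentience as a whole is never permitted.
    if action.get("harms_sentience_as_whole"):
        return False
    # First Law: harming an individual sentient is permitted only when
    # it is necessary to satisfy the Zeroth Law.
    if action.get("harms_a_sentient") and not action.get("serves_zeroth_law"):
        return False
    # Second Law: disobeying an order (e.g. a request for help) is
    # permitted only when obeying it would violate a higher law.
    if action.get("disobeys_order") and not action.get("obeying_violates_higher_law"):
        return False
    # Third Law (self-preservation) only ranks the actions that survive
    # the checks above; it never overrides them.
    return True
```

For example, `permitted({"harms_a_sentient": True})` is False, but adding `"serves_zeroth_law": True` makes it True, which is exactly where the "definition of harm" problem noted below begins to bite.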

Do you think this would work? I foresee problems with the Second and Third Laws, and with the definition of harm. The ‘orders’ clause is an attempt to codify compassion and aid: an order could be a request for help. I feel like I’m missing something; hence the public post.
