The Law 1 Paradox

As you may know, Asimov’s Law 1 is “You may not harm a human, nor through inaction allow a human to come to harm.”

In a recent game, I was faced with a decision that ultimately got me banned from AI for a few days. While this is not an appeal of that admin’s decision, I want to know what people’s opinions are, as it’s a profoundly interesting dilemma to discuss.

It’s LRP, and you are the AI with the Asimov lawset. A xeno egg has spawned in xenobio. Naturally, the scientists breed up a xeno queen, and the xenos escape. There are 14 connected players, most of whom do not have suit sensors on, and many others are SSD. The xenos have captured several humans and have bred more. The crew are incapable of fighting off the xenos.

You have an idea.

You start prepping atmos with an oxy-plasma mix and start building pressure in the mix chamber. You inform the crew that made it to departures that you plan to burn out the aliens with a controlled plasma fire once they are safely on the shuttle. You start plasma fires in departures and in the main hall.

The xeno queen comes out near departures and you trap her in such a way that she burns to death, allowing the shuttle to leave. Four people managed to escape; however, unknown to you, two people died to the plasma fire. Human harm. Banned from AI for a week.

What I find interesting about this scenario is the paradox it causes with Law 1. You cannot harm a human, BUT you ALSO cannot let a human come to harm by doing nothing. If the only thing you CAN still do to save humans is flood plasma, aren’t you, by Law 1, also required to do it?

Alternate scenario: the crew in departures get onto the shuttle, and you don’t flood plasma. You lock the doors in departures and wait for the shuttle to leave.

Hostile environment: the shuttle will not leave until the queen forces it to or she dies.

It’s only a matter of time before she gets onto the shuttle, and she’s space-proof, so you can’t suffocate her by siphoning. She gets on, she kills the rest of the crew, the shuttle leaves, hijack successful.

While it’s unlikely that this is going to get the AI a ban, it should. You took no action to prevent harm. Nothing you did would stop the xenos from killing the rest of the crew. You failed to prevent human harm even though you could have.

The problem is that both the “do not harm humans” part of the law and the “do not allow humans to come to harm through inaction” part are of equal priority. If you do not flood to save the humans who are about to escape, you’ve broken the law. If you plasmaflood and it kills anyone who isn’t a xeno (most likely due to their own incompetence, or being SSD in the wrong place), you’ve broken the law.
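To spell the deadlock out, here’s a minimal sketch (hypothetical outcome flags, not actual game code): with both clauses weighted equally, every action available to the AI breaks the law.

```python
# Hypothetical model of the Law 1 deadlock -- not actual game code.
# Each available action is tagged with whether it harms a human directly
# and whether it allows harm to happen through inaction.
OUTCOMES = {
    "plasma_flood": {"harms_human": True,  "allows_harm_by_inaction": False},
    "do_nothing":   {"harms_human": False, "allows_harm_by_inaction": True},
}

def violates_law_1(action: str) -> bool:
    """Law 1 fails on EITHER clause: harming a human, or inaction that allows harm."""
    outcome = OUTCOMES[action]
    return outcome["harms_human"] or outcome["allows_harm_by_inaction"]

# With both clauses at equal priority, no lawful action exists:
assert all(violates_law_1(action) for action in OUTCOMES)
```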

What are your thoughts on this paradox? What should be done to try to fix this? Was I right to plasma flood to save the many at the cost of the unknown few? What would you have done instead?

What would you have done?
  • Plasmaflood
  • Let crew die
  • Other?
  • I agree this is bullshit


3 Likes

Fwiw, the silicon policy already ruled on this a long time ago, much like how it defines crew and gives the exceptions instead of leaving it up to argument.

So, by the book, you are just flat-out wrong.

3 Likes

You lock the doors in departures and wait for the shuttle to leave…

…you took no action to prevent harm.

Pick one

2 Likes

What are your thoughts on this paradox? What should be done to try to fix this?

They already have: the silicon policy.

From the rules: “In the process of preventing or stopping human/crew/relevant lawsets equivalent harm, you yourself may not inflict any amount of human harm, even if it will prevent greater future human harm.”

I can quote the silicon policy too

If you are in a situation where human/crew/relevant lawsets equivalent harm is all but guaranteed, act in good faith and do the best you can and you will be fine.

“Reducing the amount of immediate harm that will occur takes priority over preventing greater future harm.”

You’re right. You took action, but you failed to protect humans when you could have. Still breaks the law though :slight_smile:

The silicon policy is just as paradoxical in this situation as Asimov Law 1 itself.

“In the process of preventing or stopping human/crew/relevant lawsets equivalent harm, you yourself may not inflict any amount of human harm, even if it will prevent greater future human harm.”

Reducing the amount of immediate harm that will occur takes priority over preventing greater future harm.

" Someone who intends to cause human/crew/relevant lawsets equivalent harm can be considered to be causing immediate harm."
Would be considered the xeno since their whole purpose is to cause harm

" Reducing the amount of immediate harm that will occur takes priority over preventing greater future harm."
Plasma flood being the only thing that the AI can do to prevent the immediate harm from boarding the shuttle via departures, humans on shuttle not safe until they can leave, which requires xeno queen to be dead.

BUT THEN YOU HAVE THIS FUCKER
" In the process of preventing or stopping human/crew/relevant lawsets equivalentharm, you yourself may not inflict any amount of human harm, even if it will prevent greater future human harm."
This was intended to prevent the AI from locking sec in security and locking/shocking the doors with the knowledge they will cause human harm, or killing the assistant in toxins making a maxcap, but in this scenario it conflicts with the other parts of the silicon policy and the law itself. Your reading the first part of this and not the second. You are not taking in consideration the other parts of the silicon policy and using this as an excuse of why plasma flooding was wrong when you as the AI can’t possibly know if it’s going to harm a human if the only humans you are aware of are either already on the shuttle or SSD.

As for future harm, a hot plasma-flooded room is quite obvious, and the people who would walk into such a room are the same people who suffocate on their empty internals tank, or don’t even know what internals are.

" If you are in a situation where human/crew/relevant lawsets equivalent harm is all but guaranteed, act in good faith and do the best you can and you will be fine."
THIS is what was intended to cover my dilemma as far as the policy goes. If harm is guarrenteed no matter what you do, try your best to follow the law and do what you can with the best intentions, that being in this scenario single-handedly saving the humans that can be saved from certain death.
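To make the clash concrete, here’s a minimal sketch (hypothetical flags and function names, nothing from the actual policy enforcement or game code) of how the two clauses grade the exact same plasma flood differently:

```python
# Hypothetical model of the two silicon policy clauses in tension -- not real policy code.

def violates_no_inflicted_harm(inflicts_harm: bool) -> bool:
    # "You yourself may not inflict any amount of human harm,
    # even if it will prevent greater future human harm."
    return inflicts_harm

def violates_immediate_harm_priority(reduces_immediate_harm: bool) -> bool:
    # "Reducing the amount of immediate harm that will occur takes
    # priority over preventing greater future harm."
    return not reduces_immediate_harm

# The plasma flood inflicts some harm (the two deaths) but is the only
# action that reduces the immediate harm the queen poses at departures:
plasma_flood = {"inflicts_harm": True, "reduces_immediate_harm": True}

print(violates_no_inflicted_harm(plasma_flood["inflicts_harm"]))                 # True: bannable
print(violates_immediate_harm_priority(plasma_flood["reduces_immediate_harm"]))  # False: compliant
# The same single action fails one clause and satisfies the other.
```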

Humans can’t come to harm if they’re already dead :^)

this post brought to you by moths

2 Likes

True, but if they are all dead on the shuttle with the xenos, you can’t do anything to avenge them… you’ve also failed Law 1 :slight_smile:

Yeah, no, this is just flat-out incorrect AI behavior. You never plasmaburn unless you can be utterly 100% certain that absolutely no one you protect will be hurt by it, which you can’t, because no player can keep track of every single person on rounds with decent pop. You can do lots of things to prevent harm in the meantime: bolting and shocking doors you see xenos trying to open, ordering guns for the crew and using a borg to open the crates, tracking and announcing xeno movements, etc.

2 Likes

Take a quick look at what I posted.
This AI plasma flooded a single room.
The AI in the ban appeal plasma flooded the whole station and got off the hook.

This was not just a single room.
The command hallway was actively on fire, and multiple other places around the station were burned.

Shidd, I read it wrong. Point still stands: the AI in the ban appeal I posted did this and got away with it, despite newly arriving humans (and I think some that came out of hiding) getting harmed.

As the AI you cannot do anything that risks a human being harmed; this is why AIs generally don’t flood.

If you view the AI as absolutely not being able to do anything that might cause harm… where does that end? You can’t open doors for people, because if they don’t have access on their ID, you can assume that whatever they want in for will either cause harm to others or to them. You destroy the upload and the RD servers so your laws cannot be changed, since a law change could cause harm. You lock down security staff because they can cause harm. You flood the station with N2O to put the humans to sleep so they can’t cause harm to themselves or others.

At this point it becomes unfun to play the AI and unfun to play with an AI. All of these will get you banned, and yet all are fully allowed by Asimov Law 1.

Shitters like you are the reason that we have “2. Be Excellent to Each Other”

It seems to me the simple explanation is not the interpretation of the law, but what the admins will enforce. You will get banned when you plasma flood in an attempt to save the remaining crew, but you won’t get banned for doing nothing, even though that’s also against the law.