The Law 1 Paradox

The silicon policy is just as paradoxical in this situation as Asimov's Law 1 itself.

In the process of preventing or stopping human/crew/relevant lawsets equivalent harm, you yourself may not inflict any amount of human harm, even if it will prevent greater future human harm.

Reducing the amount of immediate harm that will occur takes priority over preventing greater future harm.

" Someone who intends to cause human/crew/relevant lawsets equivalent harm can be considered to be causing immediate harm."
Would be considered the xeno since their whole purpose is to cause harm

" Reducing the amount of immediate harm that will occur takes priority over preventing greater future harm."
Plasma flood being the only thing that the AI can do to prevent the immediate harm from boarding the shuttle via departures, humans on shuttle not safe until they can leave, which requires xeno queen to be dead.

BUT THEN YOU HAVE THIS FUCKER
" In the process of preventing or stopping human/crew/relevant lawsets equivalentharm, you yourself may not inflict any amount of human harm, even if it will prevent greater future human harm."
This was intended to prevent the AI from locking sec in security and locking/shocking the doors with the knowledge that this will cause human harm, or from killing the assistant making a maxcap in toxins, but in this scenario it conflicts with the other parts of the silicon policy and with the law itself. You're reading the first part of this and not the second. You are not taking the other parts of the silicon policy into consideration, and you are using this as an excuse for why plasma flooding was wrong when you, as the AI, can't possibly know whether it's going to harm a human, given that the only humans you are aware of are either already on the shuttle or SSD.

As for future harm, a hot plasma-flooded room is quite obvious, and the people who would walk into such a room are the same people who suffocate on their empty internals tank, or don't even know what internals are.

" If you are in a situation where human/crew/relevant lawsets equivalent harm is all but guaranteed, act in good faith and do the best you can and you will be fine."
THIS is what was intended to cover my dilemma as far as the policy goes. If harm is guarrenteed no matter what you do, try your best to follow the law and do what you can with the best intentions, that being in this scenario single-handedly saving the humans that can be saved from certain death.

Humans can’t come to harm if they’re already dead :^)

this post brought to you by moths


True, but if they are all dead on the shuttle with the xenos, you can't do anything to avenge them… you also failed Law 1 :slight_smile:

Yeah, no, this is just flat-out incorrect AI behavior. You never plasmaburn unless you can be utterly 100% certain that absolutely no one you protect will be hurt by it, which you can't, because no player can keep track of every single person on rounds with decent pop. You can do lots of things to prevent harm in the meantime: bolting and shocking doors you see xenos trying to open, ordering guns for the crew and using a borg to open the crates, tracking and announcing xeno movements, etc.


Take a quick look at what I posted.
This AI plasma flooded a single room.
The AI in the ban appeal plasma flooded the whole station and got off the hook.

This was not just a single room.
The command hallway was actively on fire, and multiple other places around the station were burned.

Shidd, I read it wrong. The point still stands: the AI in the ban appeal I posted did this and got away with it, despite newly arriving humans (and, I think, some that came out of hiding) getting harmed.

As AI you cannot do anything that risks a human being harmed; this is why AIs generally don't flood.

If you view the AI as absolutely not being able to do anything that might cause harm… where does that end? You can't open doors for people, because if they don't have access on their ID, you can assume that whatever they want in for will either cause harm to others or cause them harm. You destroy the upload and the RD servers so your laws cannot be changed, which could cause harm if they were. You lock down security staff because they can cause harm. You flood the station with N2O to put the humans to sleep so they can't cause harm to themselves or others.

At this point it becomes unfun to play AI and unfun to play with an AI. All of these will get you banned, and yet all of them are fully allowed by Asimov Law 1.

Shitters like you are the reason that we have “2. Be Excellent to Each Other”

It seems to me the simple explanation is not the interpretation of the law, but what the admins will enforce. You will get banned when you plasma flood in an attempt to save the remaining crew, but you won't get banned for doing nothing, even though that's also against the law.

Bruh, imagine calling me a shitter when I provide examples, and then saying that my examples are unfun and bannable. I'm here trying to have a civil discussion, and here's a headmin breaking the same rule he's quoting.

If you bothered to read my post, you'd understand that what was done was done with the best possible intentions, specifically to minimize damage while still killing the xenos. It was not done as an excuse to kill the crew, nor was it intended to harm the crew.

Perhaps if we had admins on, I might even have ahelped beforehand for the okay, but sadly LRP and even MRP don't have active admins, and the only reason I got banned at all was because someone ahelped and an admin answered on IRC.

As long as you're not knowingly "sacrificing" humans to save others, I would argue it is fine. Technically, almost any action you take could be unknowingly hurting someone (bolting a door could stop someone about to bleed into crit from reaching medbay, for example), but it is impossible to know all these factors.

If there is no way for the AI to determine whether a living human would be caught in the plasma fire, because they, for example, have no sensors on and are not communicating their position, but the AI is entirely certain that the action will save all the humans it knows to be alive, what should it choose?

Of course, not including people being willfully ignorant.

By the same token, you cannot assume that there are no humans to harm unless you have accounted for every human on the manifest as being either alive or dead.

Don’t grief the station as AI.

Three ways to look at this:

AI good: saved about four people through quick thinking.

AI bad: the AI used extremely excessive force to save four humans and killed two in the process, for a net total of two crew saved, while also doing more damage to Nanotrasen property than a maxcap would have.

Admemes mad: trying to set precedent through rules-lawyering for plasma flooding the station, and either getting away with it or using up a lot of admeme time on appeals and investigation.