The Law 1 Paradox

Bruh, imagine calling me a shitter when I provide examples and then saying that my examples are unfun and bannable. I'm here trying to have a civil discussion, and here's a headmin breaking the same rule he's quoting.

If you bothered to read my post, you'd understand that what was done was done with the best possible intention: to specifically minimize damage while still killing the xenos. It was not done as an excuse to kill the crew, nor was it intended to harm the crew.

Perhaps if we had admins I might even have ahelped beforehand to get the okay, but sadly LRP and even MRP don't have active admins, and the only reason I got banned at all was because someone ahelped and an admin answered on IRC.

As long as you’re not knowingly “sacrificing” humans to save others, I would argue it is fine. Technically almost all actions you do could be unknowingly hurting someone (bolting a door could stop someone about to bleed into crit from getting into medbay, for example) but it is impossible to know all these factors.

If there is no way for AI to determine if an alive human would be caught in the plasma fire, because they for example have no sensors on and are not communicating their position, but AI is entirely certain that that action will save all the alive humans, what should it choose?

Of course, not including people being willfully ignorant.

By the same token, you cannot assume that there are no humans to harm unless you have accounted for every human on the manifest as alive or dead.

Don’t grief the station as AI.

Two ways to look at this

AI good: saved about four people through quick thinking.

AI bad: the AI used extremely excessive force to save four humans and killed two in the process, for a net total of two crew saved, while also doing more damage to Nanotrasen property than a maxcap would have.

Admemes mad: trying to set precedent through rules lawyering for plasma flooding station and either getting away with it or using up a lot of admeme time on appeals and investigation.

AI is also going to kill anyone that shows up on the arrivals shuttle.

AI should choose not to plasmaflood. obvious answer. this isn’t a valid question. AI cannot be 100% certain that the action will save all the alive humans because plasma fires are very difficult to control and even areas not exposed to fire can still be made hot. solutions that can potentially cause the entire station to become harmful are not viable.

it is not hard to be a good AI just read silicon policy/rulings and your dang laws.

Well yes but the basic premise was that xenos are infesting the station and are definitely going to kill all humans.

Truly what we need is the OG law 0: Protect humanity

yes. the answer to which is: take actions to hinder xenos. it’s not a problem with binary solutions, you don’t go “set a plasma flood or do nothing at all”, there are steps you can take between these two extremes

That’s why I love the minimize expenses lawset. Way better than binary asimov laws.

That's assuming that Nanotrasen has the best interests of humanity in mind.

Clearly they don't care about their space station either, considering they didn't give the AI a proper lawset for a corporation.

Quick question: who were the two humans who died in the plasma fire?
