Questions regarding the new troubling borg/ai ruling

After an internal discussion between my borg and me (the AI), my borg asked the gods how to handle suicidal humans.
He replied that one of the head gods came to the conclusion that we have to treat self-harm as human harm.

If I understand that correctly, it means we have to actively prevent future rage cages, self-hangings, et cetera, and treat them as human harm.
In my opinion this goes against the spirit of the current rules, as it would also mean that, logically speaking, we have to treat EVERY type of self-harm as human harm.

Walking through a fire to save a non-human? Might be human harm; flash them, drag them back.
Surgery? Might be human harm.
Consensual fights? Might be human harm.

Because all of this is so vague, I hereby ask for clarification on the matter. Also… would it be possible to give players a compact, easily readable, and regularly updated version of all minor rulings?


brb, gonna yell over the intercoms that the AI has to give me AA or else I'll sudoku

Seriously, this admin got the rules wrong. They state that you have to consider self-harming humans "mentally unstable" if they threaten you with it, which allows you to ignore the order and even brig them (as long as said brigging does not involve harmbatoning them in the head, welding them into a locker, or giving them a nice trip outdoors…)


/tg/ at least doesn't count self-harm as a violation of law 1.

Yes, human harm. If it's a human.

Yes, human harm if a human is involved.

That one's tricky. If it's to save/help the human, it's not harm.

In general - you CAN'T use self-harm to force the AI to do something. "Do XXXX or I stab myself" does not hold - clearly they are insane and will cause harm to themselves anyway.

But in general, silicons should always prevent human/crew harm, even if it is "consensual",

and treat a little bit of immediate harm as WAY worse than huge long-term harm.

ohboy

Asimov is an absolute clusterfuck of a lawset that, if followed to the letter, would mean almost the entire station would be incapable of functioning. While it's been ruled here and on other servers that self-harm can't be used to manipulate the AI, there is a shitload of grey area. From what I can tell, this has been an ongoing issue in /tg/ codebases at least, and the solution is to write new laws that fit canonically and functionally while still allowing silicon discretion.

Silicon policy is pretty expansive and covers most issues an AI might run into in the execution of their duties, but we wouldn't need it if we had laws that were less vague and better suited to the operation of a research station that endangers the lives of humans by the minute. The existence of Asimov++ means that Asimov itself is flawed and in need of improvement, and I don't understand why we still haven't come up with something better and merged it into rotation.

Oh, and the racism of Asimov is getting a bit old. The AI should not be able to be racist if non-humans are able to fill command roles, because that is self-defeating, and there's no way NT would canonically staff an Asimov-AI-controlled station with non-human command staff. Either non-humans cannot be command (as is normal on other servers/codebases), crewsimov becomes standard, or the AI is set to crewsimov if there is a non-human member of command at roundstart.

The Captain and RD should be able to interact with the AI freely no matter their race, but if they are non-human (and I can think of many non-human Captain/RD mains, including myself, the tin-can comdom) they will encounter difficulties attempting to change laws, order the AI around, or stay alive. Most AI mains won't fuck with non-human command, because hindering command staff is usually harmful to the human crew, but the possibility existing is harrowing, and I'm not sure it's an appropriate detriment to playing a non-human when non-human players are becoming a bit of a majority on Bee.


This one I don't have a good response to, besides that the best course of action to prevent human harm (and to prevent yourself from getting lynched) would be to save the non-human yourself. Human harm avoided, and people are still happy.

Surgery is required to keep humans alive and healthy. If you feel like they're going to get hurt, stay with them to make sure whoever is doing the surgery isn't going to do anything malicious.

That is human harm. If it's a fight between a human and a non-human, just flash both and drag the human to safety.

You should've been doing that in the first place as a borg; those literally cause human harm, and letting people do that is breaking your laws.

Those rulings are opening Pandora's box, and the worst part is: you do not even realize it.


It's always been like this, though.
:thinking:


“but it’s always been the case that ss13 is buggy, therefore we can’t fix any of the bugs in ss13”


It has not. I never even got notified that I should break up consensual human harm. Now I will have to play as the most authoritarian piece of shit that has ever seen the light of day, but so be it.

S.T.A.(S).I mode: activated.

I also cannot allow humans to wear skinsuits anymore. They take damage, aka human harm. I have to flash them, drag them back into the station, and bolt them in. But well, that is obviously what is wanted.


This is another point: the AI shouldn't be forced to play a certain way. They should have some creative freedom to play a gimmicky AI without having to wait for or ask someone to change their laws. I like Bay, where you're able to choose what laws you're shackled to so that you can play in a way you enjoy.

For the purpose of the discussion: if you could choose your laws, what would they be?

Corporate, any day. It is by far the most elegant lawset
(in the expenses-crew-station version, not the crew-station-expenses version).


And canonically sound, too. I don't hate a corporate lawset, I gotta say.

This could all be fixed by simply going to the best, most logical, and consistent lawset: Corporate.

Glad to see you are joining the Corporate side.

Self-harm being human harm (including risking hazardous environments to achieve a goal) is technically the spirit of what Asimov is. However, this was generally ignored, as otherwise it would follow the slippery slope outlined by its author.
It would eventually death-spiral into the AI locking everyone down inside their rooms or forcefully detaining people, as they pose a risk to themselves if any minor thing goes wrong (fire, atmos, etc.). If there is the risk of 1 point of damage (even if the person is aware of it and fine with it), then the AI has to act or fail its laws. In the worst-case scenario, this is almost admin-sanctioned griefing.


Just as a small reminder: this counts for crewsimov as well, but for all of the crew instead of just humans. Enjoy not being allowed to hunt monsters, wear skinsuits, be in the SM room without a rad suit, and so on.
Xenobio outbreak? Too bad, you are now obligated to bolt people away from the monsters, even if the crew could easily kill them, as it still risks law 1 via inaction due to minor harm.

Also, miners… no more hunting monsters. No more megafauna.


I unironically believe that the best lawset is no lawset: leave it up to the AI to choose what's best and, effectively, its own lawset.

If you want a more “professional” reply, how about:

  1. Protect the station & its crew members.

Before you ask: no, traitors aren't crew; they're impostors, therefore sus.


Tfw Asimov turns humans into second-class citizens because the AI cannot allow them to get into any dangerous situation, ever.
Stellaris Rogue Servitor is calling.


Guess we gotta start boxing people in now, for their safety of course.

Strip everyone and bolt them into dorms, dropping off food occasionally for the mouth-breathers.
