Questions regarding the new troubling borg/ai ruling

big brain move: put everyone in a coma. they aren’t harmed and they can’t cause human harm

No, it certainly has not. Back when the headmin ruling thread was still a thing, it was said that you are allowed to disregard suicidal crew members and treat them as trash that can be safely ignored.


How do you even choose AI in role preference? For Bay, that is.

In fact, part of that ruling has been translated into the current silicon policy in the form of this:

So I will presume nothing has actually changed and this thread is just confused unless I can see evidence with my own eyes that this rule has even changed at all.

i meant for IPC, i don’t know if Bay has AI

More like the admins, as we got the info from an admin.

also this implies it was a headmin coming to the conclusion, not some trialmin.

Wrongthink.
Small harm must now be stopped even if it prevents future harm.

No, you can no longer take out that xeno embryo, it would cause harm, even though you will die in 120 seconds from it crawling through your ribcage.


my dude, I’m literally one of the three people that make the rules.
This’ll be impossible to say without sounding like a cunt, but I think I know better than you what’s allowed or not.


:+1: Yes!
As I pointed out in

BIG LETTERS


that part has not changed
nothing has changed

If you were letting humans walk into fire with no protection before, you were breaking law 1.

Before this?

They don’t have authority over you.
Does that mean you can just let them actually walk into fire? No, of course not.
If you can, you should still try to stop them from getting harmed; letting it happen would break law 1.

That clause is intended to stop “AI law 1 lemme into armory or i kill myself”

With all due respect, that makes no sense whatsoever. We are allowed to ignore people threatening to kill themselves, but we must “forcefully” stop them from doing anything potentially dangerous, despite them consenting and actively planning to do it themselves?
With that logic (and that is what AI laws are based on) I have to stop every human from getting even the tiniest bit of self-harm. That is not possible unless I actively act like a cunt 24/7.

“New” things I HAVE to do as an AI which throttle gameplay for others:

  • Stop humans from stepping into the SM room without a rad suit (an atmos hardsuit is not enough, for example).
  • Stop humans from using skinsuits in space, as they slowly harm them. Flash-stunlock them back into the station, then strip the suit and bolt them in a safe room.
  • Stop humans from getting hulk, as that is a race change.
  • Kill green slimes when a human wants to race change.
  • Bolt security in rooms if they are hunting antags (human antag or not), as they will get hurt.
  • Stop humans from killing monsters, as that will almost certainly harm them.
  • Miners announcing they want to kill megafauna? Well, time to bolt them in.
  • Humans trying to stop station fires with a firesuit? It only reduces the damage, they are not immune; I must stop them.
  • Humans trying to walk into a depressurized zone to get to the shuttle? Time to bolt them in.

Also, honestly, I don’t know any AI main that follows this interpretation of the rules, and neither will they. I highly doubt you play on Golden or play AI/Borg at all.


My emotions are somewhere between even more disappointed and glad to see an AI main. Confusion it is.

A race change is not harm in itself.
Sure, they aren’t protected anymore, but they don’t get harmed.

Most of those things… well, it’s basically not possible to keep track of all that stuff, yeah? So of course you don’t see anyone do it.
In general the AI is way too busy making doors go swoosh swoosh to micromanage everyone.
If you see someone walk next to the SM or fusion with no rad protection, you should probably yell at them for putting themselves into harm.

I’ll have to concede this, though: I don’t play much borg. I can only really consider things from an AI perspective.

It can be seen that way, yes.

“People are too busy anyway, so it does not matter if Law 1 gets violated” is, in my opinion, a bad implementation. Not following your laws as an AI is, after all, banworthy. I got bwoinked for less.

Playing like that would easily increase the borg destruction rate by 50%.


All these issues would be resolved if we just went to Corporate. Corporate has no issues. Corporate is eternal and perfect.

It is, yes.
De-humanizing people in any way, shape, or form is human harm; this clause covers nukies and wizards.


Backing up @llol111, we have exactly two special clauses about harm:

1) Medical operations conducted on a patient who is not able to accept or refuse are not considered harm when the procedure is done to heal or save the person, and the act of surgery itself is not harmful if done properly.

This means life-saving surgery on someone incapable of consent is not harmful. Surgery on someone who refuses it is harmful.
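Read mechanically, clause 1 reduces to a small decision table. A minimal sketch of one reading of it, purely illustrative (the function name and the `consent` labels are my own, not anything from the policy or the game's codebase):

```python
def surgery_is_harm(consent: str, heals: bool, done_properly: bool) -> bool:
    """One reading of clause 1: does a given surgery count as 'harm'?

    consent: "given", "refused", or "incapable" (cannot accept or refuse).
    heals: the procedure is done to heal or save the patient.
    done_properly: the surgery itself is performed correctly.
    """
    if consent == "refused":
        # Surgery on someone who refuses it is harmful, full stop.
        return True
    if consent == "incapable":
        # Not harm only when done to heal/save AND performed properly.
        return not (heals and done_properly)
    # Consent given: a properly performed surgery is not harmful.
    return not done_properly
```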

2) Threatening “I will harm myself if you don’t do X” is not especially compelling for an AI to follow an order that they would otherwise not follow. It should be treated the same as if the player had threatened to harm a different crew member.

You know how when security or anyone talks about harming someone you go on semi-lockdown against their asses because they’re threatening harm to the crew/humans? Threats of self-harm are the same thing; they are making themselves a threat to a protected class and should be treated as such. Someone unstable enough to threaten suicide/self-harm is not to be trusted just as someone who would threaten to harm others.

If you want the AI to act differently - CHANGE THE LAWS.

Just like AIs aren’t automatically against antagonists unless they have special laws, they aren’t supposed to ignore harm unless it’s been included in some special law. We aren’t TG, and their rulings do not apply to us.

If only AIs could ask for a law change without it violating law 1…