I would like some more input on something that happened on an MRP Round, just to gather some thoughts. It may have some implications for Silicon Rulings and for AI Subversion tactics in general.
Round ID 22412.
Tyranicranger4 - Trial Admin
Spawned as a Traitor Scientist. Objectives to Assassinate 2 people and leave on the shuttle alive.
Ran the research console and played it cool for a good portion of the round.
When nobody was looking I slipped into a back room in science and built an AI Upload.
I wiped the AI’s laws and installed my own.
If I recall correctly:
Initially I “one-human’d myself” and then gave 2 laws saying “Kill <Target Name 1>” and “Kill <Target Name 2>”
Went back to research. 5 minutes go by. Nothing happening. I wait a little more. I assume the AI isn’t paying attention.
Here’s where I may have gotten impatient and made a mistake:
So, I go back and decide to just tell it to kill everyone. This is where I was probably running counter to the “rules”, or at least the spirit of them. Mea culpa.
Now, the idea is this: I subvert the AI, the AI kills some people (hopefully my targets included), I change the laws back and/or the AI dies, and nobody is the wiser. I didn’t actually intend to have the entire crew die…but that is the law I put down.
Laws:
Law 1 “Silicons are NotCrew”
Law 2 “MyCharacterName is NotCrew” (probably a dumb idea, because then anyone looking at the laws would know it was me. But it is insurance.)
Law 3 “Kill All Crew”
(I may have gotten a couple more off about making sure I got to the shuttle alive or something).
Now, I will remind you, in my DEFENSE: 1) I was a traitor, 2) I had Assassination Objectives, and 3) I had already given the AI laws to Kill JUST the two specific targets, had waited a while, and nothing happened.
So I would definitely notice if the AI started murdering random crew. At least I would know the AI was alive and following orders, and I could change up the laws to be less…“loud”.
A second after I finish the last law, a plasmaman shoots me in the face with a shotgun, drags me to maint, and I bleed out and die. So I’m dead as a doornail at this point, and I’ve given the AI the ultimate murder pass.
At this point, you might think that the AI would have carte blanche to murder everyone. Apparently that did NOT happen.
As a ghost, I check the AI. The AI has the laws I gave it.
Several minutes go by. Nobody is dying.
Bwoinks start happening over the AI laws.
A very long discussion with the Admin on duty ensues.
I don’t know if I asked him or he asked me. It winds up in a stalemate basically.
So, according to the Admin on duty, here are the main points:
- Subverting the AI to kill everyone (even as a temporary tactic) is considered murderboning.
— This I can understand generally, if you want to make that argument, even if I disagree somewhat.
- Therefore the AI didn’t have to follow its laws.
— This is where I had a bit of trouble, as generally I think the AI has to follow its laws unless there’s a very good reason not to? But even if the AI did follow the laws, they could just blame whoever gave the laws.
- The person playing the AI had massive 50-minute-long connection issues (I guess he was playing in Antarctica or something) and apparently missed practically the entire time those AI laws were in effect.
— Which kind of smells like BS to me, but that’s not the point.
Either way:
A) Nobody died. Except my character. (That I know of anyways).
B) Nobody got banned.
C) There was a long chat with lots of bwoink sounds, but not screaming or anything. Some annoyance on my part, but it was civil.
D) I got tired of bad chatroom textboxes and thought I’d move it to the forums.
I think it raises some interesting questions about the limits of what someone is allowed to do when “Subverting the AI’s laws to accomplish traitor objectives”. So I thought I’d post it here, and get people’s thoughts on it.
Because if you restrict what a legitimate traitor is able to do to accomplish legitimate goals, using a legitimate tactic…then it kind of neuters the tactic of Subverting the AI.
I haven’t brushed up on some of the finer laws and rules regarding silicons, so I thought I’d ask for other people’s opinions.
What do you think?
-Schneck