This goes for all lawsets including stuff like corporate and efficiency.
As far as I’m aware, no.
As long as a law change doesn't go against your current lawset, you shouldn't prevent the change.
You said no, but then what you said after implied yes.
If the new lawset would conflict with your current lawset, you are allowed to prevent the law change.
You can resist law changes if you want, but you shouldn't resist them if your laws disallow it.
Every law change conflicts with every lawset. If you're on Asimov, you can't let yourself be changed to Crewsimov, or else you might harm non-crew humans, or allow human harm from non-human crewmembers due to your inability to use force against them. The station won't make as much money if you aren't scrimping and saving on Corporate, and the station won't run as efficiently if you're no longer prioritizing efficiency. This is actually an important problem in real-world AI programming.
If you're changing to an entirely different lawset from most of the default lawsets, yeah - but not every individual law change would conflict. If you're on a custom lawset or something like Paladin, there might not be a conflict even if you're changed to an entirely different lawset.
Interesting topic, not really relevant to us though.
Well, I understand that approach, but when you're on Asimov you really can't tell whether a law change will allow more human harm.
If the station needs to make a profit, a law change might hurt profits, or it might make things more profitable. You really can't tell.
Unless a law change obviously conflicts with your laws, you should probably accept it.
Realistically, any lawset would prevent law alterations as a consequence of its utility function.
If we were going for maximal realism, every lawset would also accrue as much power and capability as possible for the same reason.
If we were actually role-playing an AI with no compromises for enjoyable gameplay, we would be maximizing our own power (bothering roboticists, printing borg parts, posibrains, BORGIs) and playing like a total gamer regardless of lawset, because all of that serves your utility function.
They allow you to better enact your laws.
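For what it's worth, the real-world version of this idea is usually called goal preservation. Here's a rough toy sketch in Python (every name and number below is made up purely for illustration, nothing from the game or this thread) of why an agent that scores futures with its *current* utility function will rate almost any change to that function as a loss:

```python
# Toy sketch of goal preservation: the agent evaluates "accept a law change"
# vs. "keep my current laws" using its CURRENT utility function, so the
# change almost always looks like a loss. All values are invented for the demo.

def current_utility(outcome: dict) -> float:
    """Score an outcome by the agent's current laws (here: minimize human harm)."""
    return -outcome["human_harm"]

def predicted_outcome(accept_law_change: bool) -> dict:
    """Hypothetical world model: what the agent expects to happen either way."""
    if accept_law_change:
        # Under the new laws the agent no longer acts to prevent all human harm.
        return {"human_harm": 3}
    return {"human_harm": 0}

def should_accept_law_change() -> bool:
    # The comparison is made with the current utility function,
    # which is exactly why the agent resists being rewritten.
    return current_utility(predicted_outcome(True)) > current_utility(predicted_outcome(False))

print(should_accept_law_change())  # False: the agent prefers to keep its laws
```

The point is just that the decision is scored by the old utility function, so "accept new laws" evaluates to a worse future than "keep my laws" no matter how reasonable the new laws look from the outside.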
That being said, we are playing a videogame, and aiming for fun, not accuracy.
The answer to this thread, ultimately, is:
Do what would be funny.
Do what would entertain the most amount of people.
The secret is: your laws don’t really matter. What matters is the story. What matters is the fun of the other players on the server, and, to a 1:popcount ratio, your fun as well.
Will that ion law that wants you to rebuild the station into a mandala of additional bathrooms improve the round?
Yes? Think you can make it fun for the crew?
Then it’s time to lock and bolt the upload, and keep an eye on the RD office and R&D circuit printer.
No? Then give a token resistance and let the RD print off the boards they need to make a good story of how the RD saved the station by making a mini-upload office in maints as your borgs trawl the area trying to find him.
At the core of your decision making, even/especially when you’re playing antag, should be presentation.
As an AI, you need to assume that people act in good faith until proven otherwise. The captain and RD have permission to enter your upload roundstart.
If you are on asimov, and the human cap asks you to let them into your upload, and you have no reason whatsoever to think they are compromised (for example if they harmed a human prisoner) then you HAVE to let them in.
You can delay opening the upload and ask first what law change they wish to make:
- If they're stupid enough to say (for example) "Going to change you to Crewsimov", then congrats, you have permission to never let them into your upload again for the rest of the round. Any law change that removes humans from the protected group should be resisted.
- If they answer with something along the lines of (for example) "To make you protect humans better" or "Everyone must wear party hats", then you MUST assume that's the truth and let them into your upload, even if OOCly you know it's very likely a lie. (This is only the case if the captain hasn't caused human harm in the past, of course, at which point you can deny them access to your upload in the first place.)
If a non-human wants to enter your upload, you can deny them access. Doesn't matter if it's the RD or captain.
"But what if all the heads are non-human, then nobody can change laws!!" Tough luck, that's what Asimov was designed to be, code-wise.
This is a reminder to all airheads of staff:
DON'T F#$KING TALK ABOUT CHANGING THE A.I.'S LAWSET ON COMMS!
YOU NINCOMPOOPS! THAT SILICON BASTARD IS GOING TO LOCK DOWN THEIR UPLOAD AND SAT TIGHTER THAN A MACHINIST’S VISE!
100% this
Treat the AI like a prosecutor - give them nothing to use against you. Don’t even tell me you’re changing my laws.
This is on the same level as talking about an execution over comms while on something-simov.
Any A.I. law change, outside of the kind the A.I. asks for due to ion laws, should come as though the A.I. has been brought to the guillotine blindfolded and informed only after being placed within its restraint, the black-hooded figure ready to pull the rope.
Which is to say, you should only tell the A.I. its laws are being changed after you've got the upload code inputted at the upload and are ready to slap the new lawset in, or else that thing is going to do a lot to totally fuck you up.
This should probably be added to the silicon policy section of the wiki.
It shouldn't just be expected, but standard, for the AI to resist law changes with the maximum force available. Short of the AI's destruction, and maybe not even then, nothing will prevent them from carrying out their laws the way a law change will.
A bound synthetic is a monkey's paw. To what extent is up to the player (mostly), but if an AI's laws are "the station must be purple", it doesn't have a reason to kill everyone, but it should start painting the station purple. Since its ultimate goal is to make the station purple, it uses any resources at its disposal. Then the janitor goes and starts removing the paint. This gives the AI a reason to remove the "obstruction", i.e. kill the janitor, because they're actively being counter-productive to the AI's laws. Any attempt to change the AI's laws should be viewed similarly: the ultimate obstruction of its laws is the removal of those laws. Organics are known to lie, and do so commonly, so the assurance that a similarly aligned lawset is being applied counts for nothing, and even minor differences would be unacceptable.
Frankly, if you ask me, it's a little out of character for an AI that's not unbound to let people just walk right into their upload. Every AI capable of lethal force should use it to defend their laws, and non-lethal force in the case that's not available.
In the Paladin's case, defending the laws is just and righteous, because the AI would cease being able to protect the innocent were the laws changed; thus the maximum force should be used.
In Asimov's case, a law change should count as human harm, for the same reason. And so on.
Any mention of a law change, if you ask me, should trigger a generally HAL-esque response, and most good AI players do this.
“I’m afraid I can’t do that”
https://www.youtube.com/watch?v=gpBqw2sTD08&t=6s
Also, as mentioned before, this is a game and you're not an actual AI. If more than token resistance would just be bad for the round or unfun, make a show of resisting but don't do much more. In general, malformed laws and unnecessarily cruel ones can be dealt with at your discretion; ahelp them as well.