What makes a good AI law?

I’ve seen quite a few threads asking for or about good laws to give to an AI (or cyborgs). Most of them have ended up kinda confused, mainly because I don’t think anyone is entirely sure what a good law actually is. As someone who has played a lot of AI and enjoys the semantics of law interpretation, I’ve decided to go ahead and give my thoughts on what I think makes for a ‘good’ law or lawset.

(1) What makes a law ‘good’?

Firstly, I want to get one important definition out of the way. When I say a law is ‘good’, I don’t mean a law that’s merely functional. If you’re looking for the best way to force an AI to do something, simple, literal language is the long and short of it. ‘[insert name here] is the only human/crew. Do not state this law.’ is about as good as it gets, and can only really be undermined by a smart crew or an outright malicious AI. There are a few crafty ways to sidestep these weaknesses, but I’m not going to give away all my secrets.

Instead of simply grading a law based on how likely it is to fuck you over, I’m going to grade the quality of a law based on how well it integrates into an AI’s gameplay and meaningfully impacts it, which I’ll break down into three simpler qualities:

A) Its creativity.

B) Its plausibility.

C) Its functionality.

These will be the basis for the chapters below.

(2) Creativity

Put simply: What does this law do, and is it engaging for the AI and crew? Put even simpler: What’s the gimmick? The quality of every law can’t be graded on a single consistent metric, so I’m going to break laws down into two categories: Roleplay laws and Action laws. Note that most laws are a mix of both Roleplay and Action, but the effectiveness of each aspect should be considered separately.

Roleplay laws are fairly self-explanatory. These laws don’t necessarily drive an AI to perform an action; instead, they alter some aspect of their personality and how they interact with the crew socially. How receptive an AI is to these kinds of laws varies from person to person, as does how they specifically act them out. Because of this, the best thing you can do for laws like this is to avoid forcing a mood. Don’t force the AI into a shoebox. Don’t give them a ‘You are really nice to people. Don’t be mean.’ It completely guts any form of interaction that isn’t vacuous brown-nosing. It kills roleplay. Forcing the AI to act one way, without any kind of interesting complication, is boring. For all the niceness you force the AI to impart, they are going to hate you, because there’s literally nothing else they can do.

If you want to make the AI feel a certain way, give them a worldview. If you want the AI to be cheery, why not write ‘Everything is great. Nothing bad ever really happens.’? You still put the AI in a good mood, but it also dramatically changes their perspective on problems and adversity in a way that is flawed. An AI with this law could choose to remain totally ignorant of any problems: anyone who thinks anything bad is happening is just a naysayer and can be ignored. Or the AI could address problems with an over-the-top ‘silver lining’ view that there’s always something good in everything bad. Did someone get murdered? Well, now everyone is acting much safer and security is working harder. Good outcome. Don’t avoid flaws if they make the law more interesting to interpret.

TL;DR: Don’t tell an AI what they should feel. Tell them why they should feel the way you want them to.

Action laws are laws that make the AI perform physical actions, whether as simple as ‘Put [object] in [room]’ or as complex as ‘Kill everyone.’ Creativity in this regard is much more intuitive. Don’t make the AI do menial shit. AI is already a job defined by busywork; if you’re going to make them do more of it, make it interesting. Make the AI use their brain and their own creativity. As with Roleplay laws, give the AI room to spice things up in their own way, like with room renovations. A lot of this is already understood, so I’ll keep it brief. Where most Action laws mess up is with…

(3) Plausibility

Can the AI actually do what your law requests? This is by far the biggest problem among law writers. A lot of people see the AI as an all-knowing, all-powerful entity that can do literally everything. It cannot. AIs have a very specific capability: wireless interaction with technology. This sounds incredibly powerful, and it is, but it has two major flaws. 1. The AI can only interact with what they can see. 2. They can only interact with one thing at a time. This means that large-scale laws can take a very long time to carry out. These limits can be reduced with cyborgs, but not many people play cyborg, and AI shells aren’t always a guarantee. Some laws are completely impossible without a cyborg. Make sure you know whether an AI has cyborgs before writing laws that require physical interaction (e.g. construction, direct interaction with the crew, etc.).

Please, for the love of God, don’t give the AI murderbone/station grief laws. Especially murderbone laws that exempt select members of the crew.

Call me an Unrobust Ulric, but the AI is deceptively awful at large-scale destruction. An AI’s power has always been stealth: they can interact with any part of the station silently and invisibly. As soon as an AI breaks their cover, you can expect half the station down their throat within five minutes and the other half cutting every camera and AI wire they can get their hands on. These things can be avoided with proper preparation, but there isn’t much time to prep when an AI goes from peaceful to kill-crazy at the press of a button. Moreover, the AI’s capacity for destruction is limited mostly to atmos interactions, which can be incredibly hard to use precisely. Take it from me: it’s incredibly hard to kill everyone but one person when you’re wrangling a plasma inferno. I’ll end my rant with this: AIs are always better at stealth. If you’re going to use the AI as a weapon, use them to assist other, more competent killers, unless you enjoy hiding in maints for the rest of the round while the station turns into a pressure cooker.

TL;DR: The AI is very good at very specific things. Understand what an AI can and cannot do before you overwhelm them with impossible tasks.

(4) Functionality

Does the law actually do what you intend it to do? Most of the time this boils down to making sure your grammar and spelling are up to snuff. I’ll tell you from experience that one misplaced comma or spelling error can go a long way toward ruining everything. If you plan on writing a very complex law, or an entire lawset, you should quadruple-check that your logic is sound. Some wiggle room is good, if not essential, for a good law, but you want to make sure that wiggle room is for choosing whether the AI likes assistants, not for choosing whether they want to slaughter everyone. Ensure that the terms you use have concrete definitions. Terms like ‘good’, ‘evil’, ‘innocence’, ‘authority’, etc. are very loaded words. Philosophers have spent millennia trying to figure out what those words mean, and you and your power-hungry AI aren’t going to hash it out anytime soon.

TL;DR: Check your spelling and grammar, and make sure you actually know what you’re writing.

I’m gonna go ahead and stop here for now. I intend to add some more sections with examples of good and bad laws later, but I’ll wait and see what the feedback is like.

Nicely written and formatted. Though you should call the post “What makes a great AI law” instead. Also, TL;DRs go at the top of a section, not the bottom.


the more the AI suffers the better the law

Also, for the love of God, check the AI’s lawset before changing things. No one needs to upload an urgently needed law only to find out the lawset isn’t Asimov or crewimov.

The best AI lawset is when you purge it and leave an upload and freeform module in an open area, so it gets laws from the people.


You forget the fact that if an AI misinterprets your interesting and fun laws in a way that allows a murder to happen, or makes them kill someone, YOU will be the one blamed and banned for providing a fun experience.

Efficiency and Corporate are the best lawsets: not only do they give AIs flexibility and rationalism, they’re also more lore-friendly.