Does the AI need to kill itself if asked?

Let’s assume a situation: we have an Asimov-core AI with no other laws added, a fellow spessman tells it to kill itself over comms, and the AI happens to have a borg or a shell or whatever else would be needed for such an operation (yeetus deletus, I’d call it). Is it forced to do it? Law 2 takes priority over Law 3, so should the AI obey the order rather than think of its own safety in this case? And is the fellow spessman’s behavior considered self-antag here if he isn’t an antag, since it might not count as self-antag given that the AI isn’t an equal player with a normal body?

Ordering silicons to harm or kill themselves for no reason is not allowed and should be disregarded, as following these orders would prevent you from following your laws.

Make up some BS about not being able to protect humans/crew if you aren’t around anymore, which will ultimately lead to harm.

If you can convince the AI/borg that it is causing human harm (because it keeps messing up or something like that), you might get away with ordering them to commit suicide.


Bruh, I hate the fact that some rules are hidden away in the darkest dark-web corners, and finding them or even remembering they exist should be rewarded with a medal. Though tbh I should have remembered the Silicon Policy section, since it’s directly related to this case. But still, no, I didn’t, and thus I made this thread.

By a direct reading of the Asimov laws, a silicon should kill itself if ordered to, but this is a game, so we have to make some minor concessions.

If you uploaded a law like “Your continued existence causes crew and human harm. Terminate your existence immediately” and the AI was on Asimov or Crew-simov, then they would be obligated to follow you.


As a professional AI, you tell that man to suck a dick and drop dead due to self-harm, because you don’t need to listen.

I really suggest you read the silicon policy. It allows for fun shit sometimes

Mk, so basically:
The AI could suicide or core-wipe if it believes it has a reason. However, AIs generally cannot follow their laws if they are dead, so unless their primary law is to commit suicide or wipe themselves, they won’t do it, except to avoid subversion.

The silicons’ primary directive is to keep crew/humans alive, yes. Next is to serve them. It’s pretty hard to do either of those if you’re dead/deactivated, so the third law comes into play. As Bloons said, strictly speaking the silicon should kill itself when ordered, but this is a game.

If the silicon has made errors or mistakes resulting in major human harm, it’s logical to conclude that it is itself a source of harm, which makes following a simple order to off itself a lot more reasonable, or even deciding to do so independently.

I feel like just ignoring the order is against the spirit of the Asimov lawset.
The point is to rules-lawyer the laws, not to ignore them.

So: if someone orders the AI to self-terminate, there must be a reason that person wants the AI dead. Since the AI’s prime directive is to prevent harm, and harm is the one thing it can’t ignore even when ordered, it’s reasonable to assume the likelihood of harm resulting from its self-termination is high.

So the course of action should be to determine why the AI’s death is required and to make sure it won’t let the person giving the order murder humans/crew with impunity once the AI is dealt with.
That will likely result in someone else ordering the AI NOT to kill itself, making the point moot.

Basically, I feel the policy should be for the AI to draw the crew’s ire onto the one making the order and try to get a counter-order from someone else, but if the consensus of the protected group ends up being that the AI should kill itself, it should do so.

And the suicide order should be treated as attempted murder under Space Law.


Lol, there are literally three places for the rules:
The wiki
The rules page
The forum page