Dancefloor Degrader - Lacran


Lacran.

Assistant
Byond account and character name: Lacran - Spooky Station
Banning admin: Dancefloor Degrader
Ban type (What are you banned from?): Note
Ban reason and length: Note regarding turning the core lethals on at round-start on the sat and announcing it to the crew: "As an asimov AI, set turrets to lethal roundstart before any credible threat had been introduced. Considers any human harmed by these turrets to be self-harming and therefore not held accountable by their laws"
Time ban was placed (including time zone): 12am (round 5358)
Your side of the story: I bolted my sat and set the inner turrets around my core to lethal. After doing so, I made this announcement: https://gyazo.com/842c0a129d88734d2961d32685f4093a. After CentCom announced the round was extended, I set them back to stun.
Why you think you should be unbanned:


I understand that notes are incredibly minor, but I don't think this warranted one. I put a lot of consideration into it before acting, so I would genuinely like to see a better-argued reason as to why it's punishable.

I've been reading silicon policy for a while. Server rule 2b states that "self-harm is not human harm"; the TG AI job page says "you are allowed to ignore "consensual" harm such as humans willingly and knowingly demanding access to hazardous environments"; and TG silicon policy says:
"Humans can be assumed to know whether an action will harm them and that they will make educated decisions about whether they will be harmed if they have complete information about a situation." (All of these are listed on the Austation rules page on silicons.)

Under law 3 I need to take steps to protect myself. There are many non-human threats that are easier to deal with using lethals, especially the ones that go straight for my core, like wraiths and lings.

So I created a hazardous environment and informed everyone, to the best of my ability, that it would be hazardous. This doesn't violate law 1, since all humans now know it is hazardous to teleport into the core; it doesn't violate law 2, because no human ordered me not to do it; and it is entirely in line with law 3.

The issue stated was the lawset, but I'm following the lawset by notifying the crew of the danger and by remaining compliant should the crew require access.
 

Nus127

Janitor
Game Admin
Imagine complaining about a note, especially one that makes sense. Why, at any point, would you turn on lethals for no reason?
 

Lacran.

Assistant
Imagine complaining about a note, especially one that makes sense. Why, at any point, would you turn on lethals for no reason?
Imagine doing nothing but making low-effort, snide remarks on posts you don't even read.
I explained the reason and the rationale in the post you just replied to; explain why they're wrong.
 
The thing is, people can join after you make the announcement and they won't know. Anyway, setting your sat to lethal is a big no-no for any Asimov AI. What if there was a wormhole event and a human teleported into your sat? Whoops, you broke law 1, since you valued your own life over a human's. "Consensual" human harm is there to protect people who build rage cages and people who visit lavaland, since people know those places are hostile and will involve getting hurt.
 

Lacran.

Assistant
The thing is, people can join after you make the announcement and they won't know. Anyway, setting your sat to lethal is a big no-no for any Asimov AI. What if there was a wormhole event and a human teleported into your sat? Whoops, you broke law 1, since you valued your own life over a human's. "Consensual" human harm is there to protect people who build rage cages and people who visit lavaland, since people know those places are hostile and will involve getting hurt.
That's true, and a good point, but will those people be trying to teleport directly onto my core or break in? Is that a likely thing to expect? If any of them need or want access to the core, they can get it legitimately; lethals would only affect humans trying to get in illegitimately. A player can't be prepared for every eventuality. If we bring random events into the situation, then lethals are both a great and a terrible idea depending on the roll of the dice, and self-preservation is also part of the lawset.

Consensual harm is used in the context of things like lavaland or space, yes, but the same logic applies to any environment entered wilfully and knowingly.

I can't be expected to be prepared for every random event; you can only judge conduct by the context it occurs in. I think setting the turrets closest to me to lethal is a reasonable action under law 3, provided I take all reasonable measures to comply with law 1, which I did.
 

TheFakeElon

Security
Game Master
Yep, you should do your very best to prevent the potential for human harm as much as possible. Threatening a human and following through on it doesn't exempt you from law 1.
 

Lacran.

Assistant
Yep, you should do your very best to prevent the potential for human harm as much as possible. Threatening a human and following through on it doesn't exempt you from law 1.
That doesn't apply to self-harm, which is what I'm pointing out. Explain how someone teleporting themselves into my core, knowing the lethals are on, is anything but self-harm.
 

Lacran.

Assistant
I get that the purpose of an AI under Asimov is to serve humans. Making my sat hazardous only for the humans I'm unable to detect entering, who are also endangering themselves on purpose by doing so, doesn't run counter to Asimov. The potential for human harm is minuscule, and even in the unlikely event a human is harmed before I can intervene, it's because of self-harm (unless someone or something else teleported them, which laughing pointed out), which isn't human harm when you look at TG's silicon policy. On the plus side, I get to protect my core from literally everything else.

I'm not trying to rules-lawyer my way out of a note, and I'm not trying to get anyone in trouble. I have an interpretation of Asimov that clearly runs counter to that of some, if not all, admins on the server, and I'm making an honest attempt to understand why. I know human harm is bad, and my primary goal before anything else is to prevent it, but there are plenty of rulings around human harm, and from what I understand they should apply here; if they don't, I want to know why.
 

TheFakeElon

Security
Game Master
Okay, this is completely stupid. By that logic an AI could flood distro with plasma and "not be in violation of law 1" because people shouldn't have gone into maint. Or, as another admin said, shock doors and blame humans for touching the "off-limits door".

YOU are potentially putting a human in harm's way. Self-harm is something they do to themselves that you can't really prevent; that ruling exists to stop people from overriding law 2 in every case, not to cover you purposely setting your turrets to lethal when you have a non-lethal option, which would reduce human harm. Not only that, the non-lethal option is actually way stronger than the lasers.

We shouldn't even need to explain this to you.
 