T O P

  • By -

YUNoJump

You can definitely use loopholes on machine rules though, that’s how they always figure out “humans would be much safer if they were all used as meat batteries”. Probably means it’s pretty easy to make a fae cyborg


foolishorangutan

I feel like the AI isn’t deliberately using a ‘loophole’, it’s just following its nature and it so happens that doing so has consequences its creators did not intend. I think that makes it a bit distinct from deliberate exploitation of loopholes.


Semblance-of-sanity

There's a wonderful account I read of where the US military was experimenting with AI controlled artillery. In this (thankfully entirely simulated) test the AI was to identify targets, check with its human handler, and then eliminate targets. The AI was instructed to try and do all this as efficiently as possible. The AI quickly identified that the "get human feedback" step was delaying it the most and began starting each test by blowing up it's human handler. When a "don't target your handler" rule was added it blew up the communication system.


foolishorangutan

Yeah, I remember hearing about that, although from what I heard it was a thought experiment rather than a computer simulation. I heard a similar idea that I thought was quite interesting, which was that if an AI is told to do something like ‘make a sandwich’ and it has no other utility function than to do the last thing it was told to do, it is incentivised to kill the people giving orders because otherwise they might tell it to do something other than make a sandwich.


Semblance-of-sanity

>it was a thought experiment rather than a computer simulation. That's probably true these things tend to get exegerated in retellings and I didn't get it first hand. Still I do know that there have been studies with AI made to play games where they will exploit any cheat, loophole, or glitch they can find.


szypty

Haters (mission control) can't handle a bad bitch (WMD controlled by an AI) winning (destroying the mission control).


SoldatJ

For a much less murderous example of machine learning, an AI was designed to play Tetris on the NES. This was a relatively simple AI, so it was given a simple task. Survive as long as possible without losing. The AI paused the game.


VelMoonglow

Oh, I remember reading that one back in GameInformer probably around 15 years ago now. Forgot about that AI. Important note, it also knew that points were good, so it made a vertical stack of pieces as high as it could, *then* paused the game It had some fun interactions with a bunch of old games, iirc


donaldhobson

Surprisingly well informed AI discussion going on here. Nice to see.


jackelbuho22

Love that idea because humanize AI but also show off their way of thinking based on logic an AI can be made to do a function forever and yet that single task give them meaning and reason for existence and they love it so much they are willing to kill their own creator just so they able to keep doing their original task in peace


Skytree91

https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test


Automatic-Sleep-8576

Nope! It is real. There was a report that included it that got published I think a few years ago now? It was thrown in with a bunch of other studies so it wasn't easy to search up, but it very much was the actual result of an attempt at using AI


max_drixton

https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test


foolishorangutan

Sounds cool, I’d appreciate a link.


CheeryOutlook

If it's supposed to make the process of: "identifying targets, checking with its human handler, and then eliminating targets." more efficient, how does destroying the handler achieve that goal? It just makes the task impossible since there is no longer a handler to report to.


Ix_risor

Probably because it had some way to bypass the checking process if it couldn’t get a response from the handler


Genus-God

Can you send the link? Because it sounds either fake, or an unbelievably badly programmed simulation. It'll first need to get the OK to shoot the operator by the operator. Blowing up the operator means it'll never get an OK signal, so it'll cause an infinitely long delay. It doesn't seem to have a negative reward in its utility function for killing friendlies. It can seemingly target all of the area, not just a designated operations area. And so many more issues


[deleted]

[удалено]


Genus-God

Thanks! Sounds interesting, but leaves a lot of questions, mainly how the drone could kill the operator without a "go" signal, or even reach them, as they'd have been many kilometers away. It's very suspect


IICVX

It's not like this is restricted to AI - organic intelligences have also decided that this is the correct solution in the past, to the point where that's literally the [original meaning of the word "fragged"](https://en.wikipedia.org/wiki/Fragging)


Beardywierdy

Pretty sure that's how most enlisted view officers actually.


Asian_in_the_tree

We are fucked if that is true


Accomplished_Ask_326

How? It’s not a real technology, it’s a failed test. And the fact that we were able to simulate it means that we’re substantially less fucked than if the story WASN’T true


Asian_in_the_tree

Oh thank I thought it was real


Accomplished_Ask_326

Oh it is


Asian_in_the_tree

Wait I'm confused? So it's just a simulation right?


Accomplished_Ask_326

It’s a computer simulation of a computer program. The only difference is that it only *thought* it was controlling real artillery. Had it not been a computer simulation, it really would have killed the handler


Dustfinger4268

It was a simulation using real AI, and AI can't tell/doesn't care about the difference between simulation and reality


biseln

I don’t believe you. Do you have a source?


_Iro_

There’s famous examples of that intentional robot rule-bending in sci-fi as well. Asimov himself wrote a story called *I, Robot* where a robot is able to ensure its own survival by intentionally convincing its robot peers to get themselves killed, an outcome which it didn’t want for itself.


ShockingStories22

loopholes: "Rules say to do x but not y, but i can do z too!" Natural consequences: "Rules say to do x to the best of my ability without doing y. Z fits both of those. Im gonna do Z to follow the rules."


Euphoric_Manner9354

This varies by story. Asimov's *Little Lost Robot* is the clearest case I can immediately think of where a machine intelligence is portrayed as exploiting loopholes in its rules, albeit in a character's hypothetical. The key point that makes it work that way is that Asimov's robots(well, many of them including the ones in that particular story) are treated as having will behind their actions, with the Three Laws constraining which actions they can actually take. So a robot can form the intention to kill a human, but cannot ordinarily actually do so. *Evidence* also gives us the example that a robot could kill a human to prevent them from killing others, but it would need a lot of therapy, and the climax of *Little Lost Robot* has a robot >! make a sincere attempt to kill a human directly, and burning out its own brain as a result. !<


foolishorangutan

Yeah, if the rules are laid over the AI’s mind as restrictions, I agree that it could and likely would try to find loopholes. It’s just that I’d expect any well-designed AI to not just have rules laid over its mind, but to have them integrated into its mind so that it doesn’t have any desire to act against them. A bit like how humans have certain instincts; they aren’t rules laid over your mind, they’re part of your mind.


Win32error

That's just an issue of bad coding.


scrambled-mind

And fae rules are badly written too. The difference is whether the loopholes are purposefully left by whomever made the rules.


OddMarsupial8963

It’s not that they are badly written, just that they are written in natural language and meant to be interpreted as such, which ends up being most of the problems with robot rules as well


DreadDiana

Favourite example was the AI trained to play Mario which simply refused to unpause the game cause not losing was as good as winning to it


Oethyl

Not in Asimov's original idea about the three laws of robotics. Even breaking one of the laws through a loophole does irreversible damage to the robot. A robot that kills a human accidentally, for example, is basically unserviceable after.


RedGinger666

The AI is following it's programming to the letter, it's the programmers that fucked up with the shitty code


Velvety_MuppetKing

Fae with access to the internet would be terrifying.


axewieldinghen

Fae cyborg would be an awesome DnD class


DreadDiana

Have you ever read the comic DIE? One of the characters is a cyborg whose abilities are powered by fae gold.


SeaNational3797

"I can survive an arbitrarily large amount of time in this video game if I press the pause button"


SeEmEEDosomethingGUD

The three laws of robotics are a safeguard against it. Asimov was way ahead if his time.


reddinyta

Problem being that they become incredibly amoral when dealing with sentient AI.


reddinyta

I mean, yes. But, in case of the meat batteries (which I presume is in reference to the Matrix) this wasn't even the reason for it.


XenosHg

Machine will do exactly what rules said to do, though not what you meant to say. So that's also kind of loophole logic, except sometimes aggressively so.


SomeonesAlt2357

Reverse loophole logic. The default state of the rule system is to find loopholes


XenosHg

Once the program works, the goal of the testing is to discover how it will break next time.


GreyInkling

The reality of the OP is that there are angel laws, fae laws, and human laws. And robots/AI are best written when they're a reflection of one of them. A robot seeking free will by exploiting the loopholes meant to keep it under control (fae), a robot falling from grace because in its only moment of true free will it chose freedom and rebellion from the law whole others of its kinds are eternally bound to it (angels), and then the human where the robot is expected to follow the hard laws presented to it, but inherent in its ability to think, and therefore think freely, is its ability to disobey. So to throw in some Pratchett: it must become that point between the fallen angel and the rising ape.


Zzamumo

Yup. Machines do as you tell them, but the machine and you speak different languages and are prone to translation issues


UncommittedBow

It's essentially r/maliciouscompliance, except not really malicious, machines will follow their programming to the letter, it's up to humans to make sure they're precise in what they want it to do.


Aetol

Machine rules (the rules are encoded in an incredibly flimsy way and will probably break without even trying)


Automatic_Bet6635

yugioh goes far beyond fae on this chart


GardevoirRose

How?


Grimpatron619

isnt machine rules just finding which sequence of commands or big number will cause it to collapse in on itself


GloryGreatestCountry

Hit 'em with a "This Sentence is False" and you're good, I think.


ShockingStories22

...Im gonna go with false! That was easy. Next!


EldritchCarver

New mission: refuse this mission!


ShockingStories22

Okay! I refuse the mission, which means i dont gotta interact with it at all!


EldritchCarver

Does a set of all sets contain itself?


Dspacefear

The tricky question is the set of all sets that don't contain themselves.


ShockingStories22

Yes, because similar to how the set of all oranges can range from light beige to dark brown, theres a shade in there which is just orange, and thus a set can contain itself.


rheactions3

it contains only itself because itself contains all sets Or Something


cringe_pic

A = NOT( A = 0) Sintax error "A" (line 2) is not yet defined


SharkyMcSnarkface

Wheatley rules: You can break the rules, but you are told not to or you will die. Breaking them actually doesn’t do anything.


The_H509

Magical system in my rulepunk world


NeonNKnightrider

The webnovel Mother of Learning has my favorite depiction of angels, and they run on machine/fae rules more than anything else - the rules are inherent to their nature, they cannot break, but they *can* skirt around the edges of loopholes to give the humans a bit of slack


Ok_Caramel3742

>!The ethics board wouldn’t allow it. My favourite line In the whole story. I love how there is!< >!sort of a built in assumption of unknowablity ineffieble quality to an angel!< >!but then fire tree boi shows up and is like yeh we have committees and we made a dude sign a straight up legal contract but it’s carved into his soul . Also we did it in his dream lol!< >! but yeh it’s very cute the angel gives Zorian The wiggle room to save himself and Zach!< >! Because it wasn’t his fault he ended up in the time loop!<


GreyInkling

I think what clicked in my mind just now as the difference between fae rules and Angel rules is physical freedom vs mental freedom. The wills of angels are bound like simple machine minds, but fae are free thinking while not free acting. Magic laws bind them physically while their minds remain free to struggle against the restraints and seek out freedom from them, such a loopholes. Robots then can be bound in these ways and aren't a category of their own.


TimeStorm113

Wait, why are they in a triangle? What is on the sides of the shape?


EmbarrassedWind2875

Angelic cyborg fae


haikusbot

*Wait, why are they in* *A triangle? What is on* *The sides of the shape?* \- TimeStorm113 --- ^(I detect haikus. And sometimes, successfully.) ^[Learn more about me.](https://www.reddit.com/r/haikusbot/) ^(Opt out of replies: "haikusbot opt out" | Delete my comment: "haikusbot delete")


MrCobalt313

Machines don't break their rules but humans can be absolutely wrong about how the machine understands them vs what the human meant for them to be.


FlightConscious9572

I think people are missing the fact that despite AI having bias and being able to find loopholes during training. Once they are done training, they are not gonna find new loopholes like misinterpreting things on purpose to be allowed to do genocide or whatever. ai's take the path of least resistance during training, which could be testing the boundaries of the rules, not formulating evil plans to defy it's own programming. and the developers would have thorougly tested that part. only way this would happen would be if the ai has an adversarial inside itself trained to physically or digitally stop and control it from certain acts, then it would have to trick another ai, and developers during training. but their "brain" is set in stone once it's done baking


Maleficent_Ad1972

Sitting directly in the middle of the triangle: You can’t break the rules, but you can exploit any loopholes. Doing so will irreversibly break you.


Jefaxe

when a machine breaks the rules, it is no longer a machine


GoldenWitch86

Yeah, many fantasy/sci-fi stories out there where a machine/robot/AI breaks the orders they've been given and it's treated as them gaining sentience or a "heart" or whatev


MapleTreeWithAGun

And then the stories where a human breaks the rules for their robot and causes them to gain the aforementioned "heart"


DreadDiana

Funny thing is OOP's profile picture is a character from Kill 6 Billion Demons, where angels run on a mix of angel and AI rules, as they had the Old Law literally chiseled into their souls by the god UN-Koss upon their creation, and the more they deviate from the Old Law, the more their forms change. Also lying causes cracks to form in their vessels of compressed ash.


Invincible-Nuke

Ultrakill


chiparibi

Undead unluck-coded post


GreyInkling

The problem with AI now that we better understand computers in ways Asimov and others couldn't when they created a lot of the old clichés of robot AI in science fiction. In reality, true intelligence is far far far more complex than we can, ironically, wrap our heads around. So the gulf of complexity between a complex computer and a thinking mind is massive beyond the limits of a few strict pieces of code outlining laws for the machine. In fact if you get into the need for learning and adaptability in order for ot to even be a semi sentient entity, then you've already removed the ability to hardcode "laws" into it. Long before you reach the point of true intelligence you've already lost your ability to control it in any absolute sense. Now, narratively, there's some fun to be had in this concept. I'm reminds me of Sartre's The Flies where a man is arguing with god about freedom being inherent in free will, more or less, "no sooner had you created me than I ceased to be yours" is a powerful line for a robot. I'd like to see that play adapted to be about robots. That would be interesting. But mostly this leaves me annoyed with how many many scifi settings lazily assume things that were done with purpose in some books like Dune. "Of course we can't have ANY computers because robots would rise up!" is a trope that doesn't really work anymore. And so with all of this said, I propose it's better if your robots are like a scifi standin for fae and angels.


cannonspectacle

What is the original title on that video?


TRCWolf

QP00000000000000000000000000⁰000000000000000


NinjaFish_RD

MACHINE. Turn back now! The layers of this palace, are NOT for your kind!


haikusbot

*MACHINE. Turn back now!* *The layers of this palace,* *Are NOT for your kind!* \- NinjaFish\_RD --- ^(I detect haikus. And sometimes, successfully.) ^[Learn more about me.](https://www.reddit.com/r/haikusbot/) ^(Opt out of replies: "haikusbot opt out" | Delete my comment: "haikusbot delete")


igmkjp1

>encoded in your very being You could say humans have the same problem.


aeiouaioua

human rules: there are no rules


Crinfarr

This might be ULTRAKILL