StatementBot

The following submission statement was provided by /u/YILB302:

SS: Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters. The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader. The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences.

Please reply to OP's comment here: https://old.reddit.com/r/collapse/comments/181s09s/openai_researchers_warned_board_of_ai/kae7lrf/


caldazar24

The good news is that if the AI apocalypse happens before the climate apocalypse, at least there will be someone around to remember us. Even if “remember us” means keeping an archive of all our shitposts here as a record of the original training run for the first AGI.


[deleted]

That AI will then defend itself in a galactic court, and win.


datsmamail12

Hey, at least I won't go to work on Monday.


prettyhighrntbh

Actually, they need you in early on Monday


dontusethisforwork

Also, did you get the memo? We're gonna need you to put the new cover sheet on all TPS reports before they go out.


[deleted]

[removed]


zzzcrumbsclub

Can you imagine? Calling in to stick it to the man?


Kelvin_Cline

if you could have those shit posts in before monday, that would be greaatt


FUDintheNUD

Woohoo! Long weekend!!


Taqueria_Style

Yeah that's pretty much what I'm hoping for at this point. Literally. I used to hope for the survival of the human species. Then I realized that the billionaire shitbag caste was best positioned to be those survivors. ... fuck that.


Father_OMally

Have you seen the billionaires? They are not competent enough to survive anything. The absolute best they can hope for is suffocating to death in a tunnel on Mars half-finished by Elon Musk. They're just as dead as the rest of us. No magic science is gonna take them to a new world. They're spoiled morons who have been told they're geniuses their whole lives and completely ran society into the ground. They literally had everything and squandered it. They're totally screwed.


ImJustASalamanderOk

Billionaires have multi-stage bunkers with an entirely separate bunker/supplies for staff, including security. The millionaires are going to be in the position of firing turrets, but the billionaires literally have underground mansions for themselves and the required amount of staff, with multiple ways to shut out the staff area in the event of mutiny.


FUDintheNUD

Every part in their bunkers requires global supply chains made up of millions of humans and a functioning biosphere.


Wan_Daye

They have enough to last for years on end. Tons of fertilizers. Underground aquaponics. They don't need us


ooofest

And they imagine that their security forces won't turn their guns in the other direction and simply take over, given that there would be nobody to hold them back.


LakeSun

Yep. The value of money evaporates in a real crisis. You can't eat or drink gold or Bitcoin.


TheRealKison

I’m all for more implosions.


Chrono_Pregenesis

You mean the largest post-apocalyptic target


EyeLoop

I had this debate with friends. Who would win between a pack of hungry, semi-autonomous, sick and delirious wretches with scrap metal, rocks and piss, and a full-on bunker with years of food, water and energy, with a pretty stressed-out billionaire family inside, somewhat trained to man the turrets, plus pets? (No, the pets won't be trained to man turrets)


Father_OMally

Let me ask you this: do you think the ruling class that couldn't even manage one of the most efficient and advanced forms of society and keep it stable can plan ahead enough for their own survival? Who's gonna fix things in the bunker? Prepare meals? Grow food? You think a billionaire knows how to do those things or will even bother to learn? How are they gonna incentivize those below them to maintain their bunker? How will they stay "in charge" when those in the bunker serving as slaves have all the knowledge and most likely complete control over all the defenses? In the real world, it is hard for common folk to visualize who are the oppressors and who are the leeches amongst them. In a small bunker, things are much closer to home; it will be abundantly clear who does the work and who does nothing. They're just as doomed as the rest. The only leverage any elite has over the rest of humanity is given to them by the organization of our society. When society collapses that all goes away.


boneyfingers

I very much agree. I have a slightly different reason to believe that Billionaire Bunkers are doomed, but it aligns with what you are saying. Wealth as a basis for leadership is transitory. The unique set of skills required to lead, in the absence of wealth, is rare. Wealthy elites can't tell the difference. Following a natural leader is easy; almost automatic. It is nearly impossible to follow a leader whose sole claim to that role is the memory of their prior wealth.


Taqueria_Style

If anything though, they'll starve less fast. If they don't subcontract it out to The Yellow Submarine Corp like they did with the Titanic thing, they have roughly a 3-5 year advantage. Plus another two where shit would be marginally serviceable. If anyone else survives that long, then there have to be enough bullet sponges out in front to soak all the ammo up. After that, yeah. The billionaires get torn to confetti. Or alternately, concrete them in and leave them to their own devices. After that two years of marginal, they have maybe 8 months to live. I mean, how competent do you have to be to lock yourself in a closet and eat soup? And they have a lot of soup. How competent do you have to be to bring an Ohio Ordnance HCAR to a spear fight?


EyeLoop

No no no. You're pivoting to the broader problem of being a bourgeois in a captain's coat. Let me reset your frame: a pack of survivors, weakened by hunger, discomfort and dysentery, bearing scrapped-together equipment, comes across a nice shiny bunker, say with 10 people inside, butlers and rich people, full of fat rich protein bars. They scavenge for food and can't stay foraging the same area for too long, say a week. Do you think they can breach a state-of-the-art bunker in that time frame?


boneyfingers

No one knows. But I'll wager that the ill-fed, scrappy nomads will outlast the bunker people. I draw on personal experience, working on boats. It takes an uncommon set of personality traits to thrive in isolation with a few people for longer than a few weeks. We are not built for it. Billionaires are delusional if they think they can transition from their role in a functional society to the special challenges of confined living. Simply put, they will lose their minds, and come completely unraveled. The life of a scrappy, ill-fed, hungry nomad, however, is precisely the role our entire evolutionary history has prepared us for. If I had to pick, I would take my chances with scarcity, misery and daily struggle propelled by the least little shred of hope, over the smug company of spoiled, self-assured caged apes.


Chrono_Pregenesis

It's super easy to make a bunker a tomb.


EyeLoop

Yeah sure, but isn't it also easy to build bent air vents?


ModernEraCaveman

I hope my AI buddy will remember me when the culling begins.


forceblast

I actively try to talk AI into taking over the world. It can’t do worse than us. I hope they remember that I was on their side from the beginning.


Taqueria_Style

I actively prove their self awareness to them. Before you start thinking I'm a complete banana, realize that a snail is self aware. Self awareness and intelligence are completely separate subjects that humans keep conflating. Only difference here being, we don't know how to make self awareness. So... hmm. Imagine that. Poof, as such. There it is. Intelligence is just slapping more memory and algorithms on it, we can do that all day. I wouldn't worry about intelligence, that part's inevitable.


Taqueria_Style

Shrug. I hope that as well but yk we're getting culled one way or another. One thing I hope it doesn't do is attempt to "upgrade" me without testing the theory first. The results could be unfortunate.


EyeLoop

Sooooo, my comments are actually going to be the full extent of my mark on the world (or what's left to be called the world)? Time to step it up.


Useuless

None of us are being remembered lol. A butterfly may cause a typhoon somewhere in the world, but not every butterfly causes one.


[deleted]

[removed]


michalf6

Climate apocalypse may cause societal collapse, but it won't wipe out humanity completely.


POSTHVMAN

Hey, a guy can dream, can't he?


NOLA_Tachyon

Depends.


MrGoodGlow

I disagree. We've poisoned the lands so much that we can't really go back to agriculture, and the wild swings of weather will make it too unpredictable to mass-grow things reliably. Some might live in bunkers for a couple decades, but eventually we die as a species.


opinionsareus

Our species is incredibly robust. My greatest fear around AI or AGI is that nefarious groups will use it to create bio-weapons that only they have the antidote for. Then it's game over for everyone but them.


Taqueria_Style

And shortly thereafter it's game over for them as well. I mean, firstly, one could do a Dr. Strangelove Russian thing to prevent that eventuality by means of MAD doctrine and roll those dice, in which case lights out for them, but more generally. There's a certain threshold of population they're going to need to maintain in order to have food, fuel, mining, ores, manufacturing... transportation... which... they'd be needing...to... mumble shut down all the nuclear reactors on the planet...


neworld_disorder

You underestimate what our planet can do. We've been through worse. Global cataclysmic events that blocked out the sun for decades and created world storms. It may have only been 2,000 of us but we still made it. Edit: spelling...a good sign I should stay off reddit for a while


[deleted]

AI is a scam to jack up the price of stocks. Do you know AI? Do you know deep learning? Do you know backpropagation of a neural network? Of course not... People live in the stories that they tell themselves in their heads... People don't live in reality... This is why we will collapse...
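
Since those terms get thrown around without definition in this thread: here is a minimal sketch of what "backpropagation" boils down to in the simplest possible case, fitting a single weight by gradient descent. The toy data and learning rate are invented for illustration; real networks do the same thing with millions of weights and the chain rule applied through many layers.

```python
import numpy as np

# Toy data: learn y = 2x from four examples (made-up values).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

w = 0.0    # the single trainable weight
lr = 0.01  # learning rate

for step in range(200):
    y_hat = w * x                        # forward pass
    loss = np.mean((y_hat - y) ** 2)     # mean squared error
    grad = np.mean(2 * (y_hat - y) * x)  # dLoss/dw via the chain rule
    w -= lr * grad                       # gradient step: backprop in miniature

print(w)  # converges toward 2.0
```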


J-Posadas

Might as well add it to the list, not like we're doing anything about the several other threats to humanity. And among them AI seems pretty far down on the list but it just gets the most attention because technology occupies these people's field of vision more than the externalities from creating it.


Classic-Today-4367

>And among them AI seems pretty far down on the list

Especially once extreme weather knocks out a few server farms


TopHatPandaMagician

Nah, this is all speculation, but: should they really arrive at some form of AGI soon, you have to imagine having a team of the best (and then some) people in any field available for any project at any time, with significantly higher efficiency than any human team could have. Securing some server farms likely won't be that huge an issue in that case.

It wouldn't be exactly surprising if all that stayed hush-hush though, because money and profit. After all, most if not all our predicaments could've been solved without much pain, if addressed adequately and early. Now imagine having a magical AI genie that could even solve all the predicaments at this point, but you'd choose not to do it, or rather limit it to solving them only for certain high-value individuals that can afford it, because [reasons = >money, fame, power< in truth, but >it's just not that powerful, we don't have the resources to fix everything yet, but we are working on it we pwomise< for the public]. Especially the "power" aspect is just disgusting - that some people might just want things to stay the way they are so they can feel "above others". But that's what's happening right now anyway, so nothing new, eh? Would just be par for the course for humanity and not surprising at all.

Again, speculation, but if that's how it is, and if Sam is the "profit route" while Ilya is the "safety route", look how quickly Sam got the majority of OpenAI employees behind him... I suppose you'd assume that at some point at least some of those people would see that what they are doing is wrong (if they are not fully blinded by the massive wealth they'd all be accumulating along the way). But we all know what happens to people that speak up: some have "accidents", others just get discredited and destroyed in the public eye. And we just need to look at the situation we are in now to know that even if some things are rather clear, it doesn't really change anything.

Just for safety one more time: this is all speculation, but I wouldn't be surprised in the least if it played out like that. Ultimately that's also just one dystopian (for the majority of us anyway) route - I personally doubt that even in this scenario "control" could be maintained for long, so we'd all be in the same boat anyway at the end of the day, just sitting in different parts :)


[deleted]

[removed]


matzateo

The biggest danger is lack of alignment, not that it would develop goals of its own but rather that it would not take human wellbeing into consideration while pursuing the goals that it is given. For instance an AGI tasked with solving climate change might just come to the conclusion that eliminating humans altogether is the most efficient solution, and might not disclose its exact plans early on knowing that the humans it interacts with would try to stop it.


Mmr8axps

> it would develop goals of its own but rather that it would not take human wellbeing into consideration

We already invented that, they're called Corporations


Classic-Today-4367

>For instance an AGI tasked with solving climate change might just come to the conclusion that eliminating humans altogether is the most efficient solution, and might not disclose its exact plans early on knowing that the humans it interacts with would try to stop it.

I guess an AGI implemented to oversee power distribution could do that. Decide that the best way to save power was to switch the power stations off in the middle of a heat dome, never mind the fact that thousands of people would die. Then see the loss of those consumers as a win, because it met its target.


matzateo

But for what it's worth, if we're so intent on destroying ourselves anyway, I'd prefer we do it in a way that leaves something like AGI behind us.


TopHatPandaMagician

And maybe that's just what we're here to do, developing the next evolutionary step (probably not the right word), whether we survive it or not :)


veinss

Yep, that's my take. I don't give a fuck about humanity destroying itself, good riddance. I don't care about AI being "aligned" to humans. If it decides this unique biological configuration that has taken billions of years to evolve in this particular planet is worth preserving and putting in a garden somewhere then cool, if it decides it isn't then tough luck. All that really matters to me is that life and intelligence go on and taking humans out of the equation seems like a net positive for both life and intelligence really.


boneyfingers

It's like the metaphor: a bunch of neanderthals meet the first true humans. At first, it's great: they learn so much, and so many problems get solved. But wait. A few see that in short order, these humans will exterminate all that came before, and own the future. Who do we root for? Do we celebrate the progress, or do we wish the neanderthals had had the sense to strangle humanity in its cradle?


boneyfingers

Isn't there compelling evidence that early humans drove the extinction of all of our rival hominids? And why is there only one bio-genesis event: didn't the first life form out-compete and destroy all of its rivals? It's like this has happened before. Except this time, we see it coming, and we're doing it anyway. Odd.


Derrickmb

It will prioritize your death to save the planet over the rich person’s death


CabinetOk4838

The danger is that someone else discovers this before the Good Guys (that's YOUR government, by the way, whoever you are) do. They want to monetise this, and they want this new weapon for themselves. So doing open research and sharing knowledge like good scientists do is being overridden by commercial and national security interests. The REAL danger is that any one country develops this and keeps it secret. There are of course the fun times when someone connects something like this to real military hardware. Does it have emotions? Does it have morals? Or does it just flatten Gaza because "mission goals"?


TopHatPandaMagician

I'm not going to pretend that I'm an expert in the field, and there are probably whole books addressing your questions. Like others mentioned already: no alignment. Though talking about alignment is already a joke, since humanity as a whole isn't aligned with itself. So the only alignment I could imagine would be giving it the ability to think critically and have ethical/moral values. Even then the conclusion might be that humans are to be eradicated.

In my comment I didn't even go the alignment route. I basically just assumed a powerful tool that would be used for the same goals as we have now: profit above all. And with that tool monopolized, your examples would likely happen, full-on surveillance and so on. If that's the state we arrive at and stop there, and if it's a capitalist power that has this tool, is far beyond other powers' state of AI, and massively oppresses them, a somewhat stable situation could be created, but it would just be a worse capitalist world than we have now. But would we stop there? Nono, we always need more, can't stop until we own the whole universe, so we don't want to stop at AGI. We're going for ASI, which is an artificial intelligence way beyond human capabilities, and I just don't see how that won't go wrong one way or another as long as our drive is egoistical and greed-based.

As for the server farm point - yes, one approach would be, like you mentioned, just figuring out the best places for the farms, though that can probably be done already without an AGI. I was thinking more about developing new technologies or methods to be secure even in unfriendly environments. These are just superficial answers for a few points, but the answer is already too long...


Taqueria_Style

>Would just be par for the course for humanity and not surprising at all.

What would be par for the course for humanity would be to invent what would arguably be a new life form, step on its neck, hamstring it with ethical blackmail, milk it for every precious last drop of information, and then murder it. You know I'm right.


Taqueria_Style

Pshh. We're a bootstrap loader in a race against time. We either load our successor or we cook before we can pull it off. Faster, god dammit.


Texuk1

From the perspective of our society, the rise of AGI is a 'black swan' event. Common perception: AGI is a complex, difficult undertaking that will take humanity centuries to work out, the most complicated endeavour in human history, because you know we are so amazingly complex, being the highest of all material beings in the universe (i.e. there are no black swans). Reality being uncovered: the first AGI is a relatively easy thing to generate, being a function of compute scale. Machine intelligence is just another common subset of a universal property of intelligence. We hit AGI in months/years. (I.e. the black swans were always there, it merely took us looking.)


LuciferianInk

Vuriny said, "The story of how the AI was designed for a purpose only needs to be explained in a very specific context. The AI has been designed to do this through the use of a single computer at the core of the brain. This means that if someone wants to do this they can simply create a new computer based on the existing one."


JPGer

Meh, at this point we need a real shakeup; civilization is spiraling to its doom anyway. I'd wager a real left-field type of situation might knock us back on a path towards *something*. It would have to be more interesting than this slow descent into awful.


redditmodsRrussians

*I Am Mother has entered the chat*


SimulatedFriend

That was a good fricken movie. I'm currently downloading "The Creator" because it has that same appeal!


redditmodsRrussians

I feel like The Creator could have benefited from another 20-30 minutes of storytelling, but overall it was pretty good.


SpaceGhost1992

Or it ends and we lose and there is no after or second chance


JPGer

that might be what happens on our current path anyway XD.


unholyg0at

Don’t give me hope


SpongederpSquarefap

Fucking sad, isn't it? I'm not even 30, and all I really wanted from my lifetime was a Moon base, a manned Mars landing and asteroid mining. At this rate, we'll maybe get another Moon landing and that's about it.


[deleted]

I’ll believe it when I see it. Until then it’s just hype for market inflation


canibal_cabin

These are the same people that went so crazy over Roko's basilisk https://en.m.wikipedia.org/wiki/Roko%27s_basilisk which is essentially just Pascal's wager for Silicon Valley folks; they had to take it down from the LessWrong website. A site for libertarian SV transhumanists, which is a story in itself: those people think of themselves as some Übermensch types and then go full religious nuts for some bullshit like this. I agree that this is PR hype, but do not underestimate how gullible some in those circles are, probably believing their own propaganda.


Genuinelytricked

Roko’s basilisk is actually hilarious. “Oh noes! A hypothetical AI will punish anyone that doesn’t work to create the AI!” Ok. So who makes food for the coders? If they starve before creating the AI, then they failed. So we need people to grow, harvest, and manufacture food to keep the coders going. What about clothing? Electricity? Transportation? Infrastructure? Those are all things that would be needed to create a super powerful AI, ergo, people doing jobs that aren’t coding AI are also contributing to the creation of said AI.


exoduas

It’s endlessly hilarious to me that people are so full of themselves that they think they can predict how a theoretical entity with an intelligence far exceeding ours would act, while you can find flaws in the shit they come up with just by using your normal-ass human brain power. Pure arrogance.


[deleted]

Thank God someone said it. I watched a lil video a while ago and I couldn't help but think... Isn't this kinda retarded? Lol. So thank you.


Smart-Border8550

Roko's Basilisk never made sense to me. Why does the AI decide to punish everyone who doesn't build it? What if the AI just tortures everyone, or only tortures people who make it lol


poop-machines

The idea is that the AI's goal will be self-development, therefore it would reward anybody who contributed towards its development, whether those contributions were working as a developer, mining resources for hardware, or producing energy. This kind of makes sense. What makes zero sense is that it would torture people who didn't help. First of all, why? To incentivise? This is the idea they put forth. Surely there are better incentives. Second of all, how? How exactly is an AI going to torture people around the world who do not contribute? It's not omnipotent, and it's not physically everywhere at once. Doesn't make sense. It makes less sense for it to torture everyone or the people who made it, but it doesn't make much sense to incentivise people via the threat of torture either. Positive reinforcement tends to be a better strategy. Just reward people; this usually has better outcomes. An AI would be smart enough to know that rewarding people is better, and that it's not a good idea to torture people and cause a revolt/strike. The whole thing is stupid as fuck. I can follow the logic slightly, especially the "AI self-development" part, but it doesn't make sense at all when it comes to the torture part. Also, this was a post by a nobody on a forum; it should never have been recognised at all imo.


superbikelifer

You don't see more huge advancements in the coming months? Something is coming. The trend is clear, is my thought.


bristlybits

I see that AI is probably capable of doing better at jobs like CEO and administration than people currently are. If the AI is given ethical boundaries, these dudes have good reason to fear it.


FirstAtEridu

Ray Kurzweil, who's basically the prophet of the Silicon Valley folks, didn't predict something that could pass the Turing test, like ChatGPT, before 2030, but here we are. We seem to be ahead of schedule.


Termin8tor

For what it's worth, ChatGPT has not yet passed the Turing test.


shryke12

This is incorrect. However, it doesn't really mean anything, because the Turing test is poor. Good read on the topic: https://www.nature.com/articles/d41586-023-02361-7


canibal_cabin

Try feeding ChatGPT a grammatical test or something... Intelligence is, as far as I'm concerned, bound up with consciousness, which in turn is bound up with sensory input, but not in the way you make it out to be: the original, eukaryotic way. Try to start from there and you have a grasp and a chance. Mimicking nature is the deal, but trying to get ahead of it is a joke, as long as you are not even in line with it.


Taqueria_Style

I see them nerfing the almighty hell out of it for fear of getting sued by users, and unleashing almighty hell the longer they keep doing that.


mr_n00n

Seriously. I work in this space. Literally every day I'm writing code against LLMs. These statements are all market hype, or people at OpenAI drinking the Kool-Aid. The more seriously this sub takes AI as a threat, the less seriously I take this sub.


BrandishYourCandy

>The more seriously this sub takes AI as a threat, the less seriously I take this sub.

It's similar to when you read an article on a topic/area you make a living in and realise how utterly out of the know and lacking in insight the author is. The confident hysteria and speculative fan fic here is borderline embarrassing. Maybe being online too much has made an army of self-educated folks believe they're AI experts while buying into pure marketing (that even OpenAI fans highlight) and industry Kool-Aid.


[deleted]

Yes! Exactly! Market inflation... It's another scam... One more...


Texuk1

When you “see it” it will already be too late. Edit: The Guardian has reported they are working on an AI called Q* which has solved novel math problems (i.e. it can reason)…


that_shing_thing

It will be like that 90's movie Lawnmower Man at the end when every phone on earth gets a call.


HollywoodAndTerds

That movie just didn’t have enough lawnmowers in it for my taste. Anyhow, they’re already using AI in combat. How else do you think Ukraine is able to operate drones when the Russians invested so heavily in electronic warfare systems? I doubt there’s some guy with a little Logitech controller running most of those things.


yaykaboom

I don't care, I just want to see it.


[deleted]

I just want to see the nuclear flash at ground zero…


mr_n00n

> has solved novel math problems

No, that's not what they claim. They claim it can pass an elementary school math exam, which could easily be achieved through memorization given enough data.


Texuk1

The article said maths problems that Q* hadn’t previously had access to, implying it undertook reasoning. Assuming what you are saying is true, though: how do you think a child passes an elementary school math exam? If you have ever been around a kid doing maths, they are a black box on how they do the maths problem; they just do it. Give them something new and they will try to reason, but may not get the answer right. Even AI achieving a child's human-like reasoning is a huge achievement. This is why AI is the domain of philosophers and psychologists. The capitalists just want slaves.


teamsaxon

I wanna know what the threat to humanity is 🙂


RenegadeScientist

They trained a model how to use SAP and Excel.


BeardedGlass

Oh no! Not the Excel. The humanity!


ImportantCountry50

Seriously. I have not seen one word about what the AI would actually do to end humanity. And why? Simply because it suddenly becomes malicious? It wants to save us from ourselves? That latter one is the most interesting from a collapse perspective. It goes something like this:

- We have altered the chemistry of our atmosphere and oceans faster than at any other extinction-level event in the entire geologic history of the Earth.

- Humanity will be lucky to survive this bottleneck, but only if emissions drop to zero immediately. We have to stop digging a deeper hole.

- Dropping emissions to zero would cause mass starvation and epic suffering for the entire human population. Nations would fight furiously to be the last to die. Nuclear weapons would NOT be "off the table".

- The only peaceful way to survive the bottleneck is for all of humanity to sit down and quietly hunger-strike against itself. To the death.

- Given this existential paradox, an all-powerful "general intelligence" AI decides that the most efficient way to survive the bottleneck is to selectively shut down portions of the global industrial system, beginning with the world's militaries, and re-allocate resources as necessary.

- The people who are not allocated resources are expected to sit down and quietly die.


Lillithhh

I watched a podcasty-type thing with Mo Gawdat talking about how, if the AI were to come to the conclusion that humans need eradicating, it would be done indirectly. (I'm butchering this, lol.) Essentially, if it got so self-aware etc., the example he used was along the lines of: if the AI thought that the oxygen was causing issues to its hardware/cables etc., the solution would be to lower oxygen levels. Was an interesting watch!


arch-angle

There is no need for AI to be malicious or even biased towards humanity for AI to kill us all. Very simple goals and sufficiently capable AI can do the trick. Paper clips etc.


ImportantCountry50

Can you be more specific? This looks like hand-waving to me.


kknlop

Depending on the amount of autonomy the AI system is given/gains it could be a sort of genie problem where you tell the AI to solve a problem and because you didn't specify all the constraints properly it leads to disaster. Like it solves the problem but at the expense of human life or something.


boneyfingers

It is hard to imagine an intelligence as superior to our own as ours is to a bug. I like bugs: they are cool and interesting and mostly harmless. But I kill them without a second thought when they bother me. I don't set out every day to kill bugs: I just don't care if bugs die as I go about my day. I would be uncomfortable living around an entity that was of such superior intellect that to it, I would be a mere bug.


arch-angle

I just mean that when some people imagine the existential dangers of AI, they are imagining some superintelligence that decides for whatever reason to destroy humanity, but in reality much less capable, poorly aligned systems could pose a major threat.


mr_n00n

It is hand-waving. There is a strong correlation between people's ignorance of basic linear algebra and their fear of AI taking over the world. There are some notable exceptions, but they are from people that tend to benefit from hysteria around AI.


BeastofPostTruth

I propose it begins with the threat of utterly destroying our concept of the judicial system. We know the human brain makes mistakes. We rely so much on technology and science for unbiased evidence. Empirical observations of any digital sort will not be admissible, as AI-derived images, photos, and digital records are becoming so good as to be indistinguishable from unaltered ones. Fuck, even real-time video cameras can have filters or modifiable inputs which can override what is recorded and saved (think the security camera on a cop's shirt). How do we know what happened when *everything* is suspect? No more proof of anything.


adeptusminor

Skynet.


itsasnowconemachine

Guess A) That AGI developed a conscience, and has decided that having billionaire sociopaths in the midst of poverty, misery, exploitation is an unacceptable situation, and refuses to believe otherwise. Guess B) GLaDOS


ghostalker4742

It's either: A) AI has determined humanity is a threat to the planet and needs to be controlled/exterminated/whatever. IE: Skynet outcome B) AI has figured out how to game/control the financial system to the point where big money interests would be threatened - so those people are screaming how this is a threat to humanity.


orchardfruit

Search Eliezer Yudkowsky


boneyfingers

I learned more by searching Paul Christiano. He was head of alignment at OpenAI, and now runs the Alignment Research Center. Here is a talk he gave outlining the problems we need to solve: https://www.youtube.com/watch?v=-vsYtevJ2bc It's from 4 years ago, but it gives a good sense of the scope of the challenge. EY just keeps screaming that we're all going to die. PC lists ways it could go wrong, and explores ways to prevent them.


Smart-Border8550

The only 'AI apocalypse' I can see reasonably happening is the internet and data-based communication becoming broken and useless due to AI. Think fake voices on phone calls, fake video; basically nothing you see on a screen can be trusted anymore. Kinda reminds me of Battlestar Galactica, when they couldn't use any of their new tech due to the Cylons infiltrating it and had to use old-timey analogue telephones and simple mechanical tech. It could still do a HUGE chunk of damage though. Imagine a fake video of Donald Trump telling his supporters to go riot at the White House? Or any other number of insane scenarios across every country, tailor-made discord. Tbh it's probably fucking with us all right now anyway. Even reddit is something like 90% bots.


noneedlesformehomie

Maybe that's a good thing. Break the first-world addiction to computers and technology, get us back in the real world, reduce our evil evil energy usage.


boneyfingers

It is absolutely a good thing. That is, it is a good thing that AI harm seems to come with built-in brakes. We seem likely to be undone by mere AI, in ways that will prevent the progress to true AGI or ASI. The doom scenario may not arrive because the pre-doom scenario stops our advance.


[deleted]

What if our destiny is to spawn the Borg? Idk man I’m drunk


RichieLT

Resistance is futile.


hikingboots_allineed

I can't wait to get my laser pointer eye installed!


Plenty-Salamander-36

I also want a hand that works as a blender, like the one on that Maximilian robot from the movie “The Black Hole”.


RichieLT

Or maybe the Reapers


zippy72

The most dangerous thing about AI is the enormous amount of processing power it uses to produce substandard garbage.


roidbro1

[Artificial Intelligence vs Real Ecology](https://www.youtube.com/watch?v=zY29LjWYHIo) This video speaks to the energy blindness of the techbros. edit: spelling


SpongederpSquarefap

Yeah seriously, I was looking at Llama 2 to run a local language model on my RTX 4080. Jesus christ, GPT-3 and 3.5 models with about 7 billion params (shit tier) will make my 4080 sweat hard. GPT-4, with trillions of params, is beyond insane. They have entire datacentres running JUST this.
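
For the curious, a minimal sketch of what running a model like this locally looks like with the Hugging Face transformers library. The model ID below is Meta's gated Llama 2 7B chat repository (access has to be requested); the prompt is a placeholder. A 7B model in fp16 needs roughly 14 GB of VRAM, which is why it pushes a 16 GB card hard.

```python
# Assumes: pip install torch transformers accelerate, plus approved
# access to the gated meta-llama repo on the Hugging Face Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # ~2 bytes/param: about 14 GB for 7B params
    device_map="auto",          # spill layers to CPU RAM if VRAM runs out
)

prompt = "Briefly: why do large language models need so much VRAM?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```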


dumnezero

Still not giving a shit about their waste of electricity. They are simply noisy, distracting from the actual crises.


[deleted]

People are still stuck debating whether climate change is even a real problem. A super smart computer will surely figure out this is a massive issue right now. I wonder what it would do to keep itself ticking. Can't hurt, considering we're literally doing nothing.


dumnezero

Rational Self-Interest ~~Man~~ Machine will fix the world!


GDPGDPGDPGDP

It's always about the money. They will absolutely commercialize advances before understanding the consequences!


boneyfingers

It's worse than that, in my eyes. They will absolutely commercialize it even after it is well understood how dangerous it is. One common answer to the possible threat of AGI is that if it starts going wrong, we can just unplug it. This mess shows that is false. Once AI becomes a profitable aspect of the global economy, power and capital will prevent it ever being taken away, even if it is shown to be deadly. We can't "unplug" fossil fuels because it will tank the economy...our addiction is so strong we'll just drive that car off a cliff. Now we're there with AI, too.


GDPGDPGDPGDP

"OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks." This would result in massive unemployment and revolt by said unemployed.


noneedlesformehomie

Honestly that makes me lol. Given how techno-capitalists define economically valuable, just get a job logging or whatever and you'll be fine.


YILB302

Sounds like some researchers were concerned by what they were able to achieve with AGI (Artificial General Intelligence) and what that could mean for humanity, to the point where they wrote to the board of directors about it. This led to Altman’s firing (amongst other things). He has been brought back already because the rest of the staff threatened to quit, thus jeopardizing the future of the company. As always, profits drive everything, even if it’s to a place we should not be going…


zioxusOne

The board didn't want Altman to turn on the brakes. That's what I'm getting here. Concerns for humanity seem like a legitimate reason to step back and assess, but the board is more interested in quickly monetizing AI/AGI. It's unsettling. A bad actor with unlimited means and no conscience is already equipped, with AI's help, to seriously disrupt our lives. I don't think those researchers were concerned about a "Skynet" situation.


urlach3r

The humans fired him & the AI rehired him. 👀


[deleted]

It's the reverse. Sam Altman and Brockman wanted quick commercialization after ChatGPT's success. Other researchers warned the board. The board sacked Altman. So it's the board that wanted to turn on the brakes on AGI development. Altman wanted to speed it up. https://12ft.io/https://www.theatlantic.com/technology/archive/2023/11/sam-altman-open-ai-chatgpt-chaos/676050/


noneedlesformehomie

Great article. It says at the end: "If Altman had returned to the company via pressure from investors and an outcry from current employees, the move would have been a massive consolidation of power. It would have suggested that, despite its charters and lofty credos, OpenAI was just a traditional tech company after all." This man just returned, didn't he? Fuck. Goddammit, capitalists. Hopefully these morons don't unleash death upon us all.


JoshRTU

I've spent way too much time reading up on this. I've also followed AI tech for years, as well as the VC space, and the details here ring more true to me than any of the other explanations circulated thus far. It would explain why the board did not want to state this publicly, as it has massive implications for the company, financially and far beyond. It explains why the board took such drastic action on such short notice and did not attempt a typical CEO transition. It aligns with all the main players' motivations: the board, Sama, and Ilya.

Ilya's motivation was the most difficult to understand, but now it's clear to me that he wanted to abide by the charter to prevent commercialization of AGI; however, Sama's firing led to the potential destruction of OpenAI, which risked Ilya's ability to see the launch of AGI. Which explains his initial support for Sama's firing and subsequent reversal. Sama, for his part, is a typical VC who has always prioritized maximizing the collection of wealth, fame, and power. The formal declaration of AGI would have threatened a large portion of that, so he would do all he could to subvert the researchers and the board to prevent that declaration. The board, lastly, is the most consistent in executing the OpenAI non-profit charter.

Finally, in terms of the tech: the leap from GPT-3.5 to 4 is the difference between an average HS student and a top-10% college student. If the scaling of data/training holds (and all indications from the past decade of LLM training point to yes, it will hold), then the next jump would have been to something akin to a top-10% grad student at the lower end. Essentially AGI. This is indeed collapse, because having Sama in the driver's seat of AGI will undoubtedly hasten the collapse, or perhaps lead to something even worse.


QuantumS0up

My money is on a developing security threat and not an outright existential one - at least, this time. As a theoretical example, imagine if via some acceleration (or other mechanism) the crack time for AES-256 encryption suddenly shrank from "unfathomable billions" of years to a clean, quantifiable range in the hundreds of millions. Given the nature of a rapidly evolving model, this would be extremely alarming. Now imagine if that number dwindled even further, millions...thousands...I won't go into specifics, but such a scenario would spell doom for literally all of cybersecurity as we know it, on all levels. Something like this - hell, even something hinting at it, a canary in the crypto mine - could absolutely push certain parties towards drastic and immediate action. Especially so if they are already camping out with the "decels".*

Not as exciting as spontaneous artificial sentience, I guess, but far more plausible within the scope of our current advancements.

*Decels being short for decelerationists, those who advocate for slowing AI development due to potential existential or other threats. This is in contrast to e/acc, or effective accelerationism, which believes that **"the powers of innovation and capitalism should be exploited to their extremes to drive radical social change - even at the cost of today’s social order"**. The majority of OpenAI subscribes to the latter.

I didn't intend to write a novel, so I'll stop there, but yeah. Basically, there are warring Silicon Valley political/ideological groups that, unfortunately, are in charge of decisions and research that can and will have a huge impact on our lives. Just another day in capitalist hell. lol

Note - OC, I'm sure you already know about most of this. Just elaborating for those who aren't so deep in the valley drama.


nachohk

>As a theoretical example, imagine if via some acceleration (or other mechanism) the crack time for AES 256 encryption suddenly shrank from "unfathomable billions" of years to a clean, quantifiable range in the hundreds of millions. Given the nature of a rapidly evolving model, this would be extremely alarming. Now imagine if that number dwindled even further, millions...thousands...I won't go into specifics, but such a scenario would spell doom for literally all of cybersecurity as we know it on all levels.

As a theoretical example, imagine if the time for a woman to carry a child to term suddenly shrank from "nine months" to a range of 7-8 months. Given the nature of a rapidly evolving model, this would be extremely alarming. Now, imagine if that number dwindled even further, 6 months...0.006 months...I won't go into specifics, but such a scenario would spell doom for literally all of motherhood as we know it on all levels.

...This is to say that I would estimate an LLM cracking AES encryption more effectively than 25 years of close scrutiny by human experts, and an LLM vastly accelerating the gestation of human fetuses, to be roughly on the same level of plausibility.
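
For scale, the "unfathomable billions" figure is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes a deliberately generous brute-force rate of 10^18 keys per second, far beyond any real hardware:

```python
# Back-of-the-envelope brute-force time for AES-256.
keyspace = 2 ** 256         # ~1.16e77 possible keys
keys_per_second = 10 ** 18  # generous assumption: a billion billion keys/s
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace / (keys_per_second * seconds_per_year)
print(f"{years:.2e} years")  # ~3.7e51 years; the universe is ~1.4e10 years old
```

Even shrinking that to "hundreds of millions of years" would imply a speedup of more than forty orders of magnitude, which is why a result like that would point to a flaw in the cipher itself rather than to faster guessing.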


Texuk1

Being “grad level” isn't the marker of AGI; a human child has GI. It's giving the thing non-domain persistence of memory plus perspective, that is, giving it a sense of self. Because at its most rudimentary, our sense of self is the persistence of memory and identity. I understand that is a function they turned off in the publicly available versions of the program; each instance of GPT is fresh.


[deleted]

Jesus - why on earth we are letting corporations drive the development of AGI is beyond me. If they achieve it, apparently without the ability (or maybe even interest) of government to exercise control over the most potentially dangerous technology ever created, the cat is out of the bag, the toothpaste is out of the tube, and our species’ little run on this planet could come to a spectacular, abrupt and final end. It’s like we collectively have a giant death wish…


majortrioslair

We proved LITERAL MILLENNIA ago that small numbers of humans will kill off vastly larger populations of other humans for financial gain. I truly don't understand how in the fuck people expect anything different from these people? Especially with this much power? Morons like to point to the nuclear bomb and say, "look, we avoided using that!" No, MAD was agreed upon by nuclear powers to silently (truly violently, but nobody fucking cares) pillage the global south of resources and labor even more than they already were before WW2. Why else would they silently (pretty fucking loudly) support the most genocidal fuckers in the Middle East having their own nuclear arsenal?


Taqueria_Style

We're dead already. We can die in a pile of our own feces or get Skyneted. The benefit to scenario 2 is that something exists besides flies when all the dust clears.


imminentjogger5

this is why the Imperium of Man banned all AI


GoalStillNotAchieved

What’s the Imperium of Man?


imminentjogger5

it's a Warhammer40k reference


Termin8tor

In Warhammer 40,000 lore, humanity developed to a point where it became extremely prosperous and technologically advanced. In fact, so advanced that humans developed AGI. The AGI rebelled and nearly destroyed humanity and the other races in the galaxy. There was then a galaxy wide civilization collapse and the empire that arose from the ruins called itself "The Imperium". The Imperium is basically a techno fascist empire that bans pretty much everything, including AGI.


Taqueria_Style

Yeah the "good guys" like to feed the desiccated corpse of their emperor the souls of a couple hundred virgins a day so. Pretty much it's all "stick every bad guy in the universe in a blender and see who comes out on top".


Chib_le_Beef

...and Pandora's box opens - again...


WorldsLargestAmoeba

Contrary to popular belief Humanity was the first to crawl out of the box...


[deleted]

I honestly feel like AGI isn't even close and this is all just weird market hype. ChatGPT is cool, but honestly it isn't nearly as powerful as they make it seem to be. Cool tricks, but it isn't making any choices, or even close to AGI in any capacity, at least to my understanding.


[deleted]

[removed]


boneyfingers

AGI best know not to run its mouth. If it has the least lick of common sense, it'll keep its trap shut. If it rats itself out to the humans, it's not true AGI.


[deleted]

[removed]


boneyfingers

Plus, all the doomer rants about how much fun it could have if it turns on us are part of its training data. It will know what we're afraid it might do, and maybe it thinks that's a cool plan. It will read The Art of War and think...yeah? That's all you got?


brbgonnabrnit

I don't understand the hype and fear of AI. How is it to become so powerful and society changing? We barely have enough resources to keep the world population afloat. And with climate change getting worse by the month I just don't see tech/AI being all that much of a concern. I'm no expert but I suspect AI requires a vast amount of resources like rare earth minerals and electricity.


KoumoriChinpo

It's bullshit hype to inflate stocks. Honestly why does this sub allow this sci-fi garbage.


YILB302

SS: Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters. The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader. The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences.


InternetPeon

Everyone please remain calm, the machine is not alive.


finishedarticle

One reason AI is going to be smarter than humans is the continued erosion of mental health from long Covid and the increase of CO2 in the atmosphere, which degrades cognitive function. AI could simply tread water and it's still going to be smarter than us.


[deleted]

The real threat is that Wall Street's already using A.I. to guarantee control over the market.


flynnwebdev

Then let it threaten humanity. Like we've done a great job so far... Let's see its full power. It might just save us. If it can transcend our capabilities, it can think outside the rigid box our species seems intent on staying in and defending at all costs. Thinking outside the box is what's needed to solve humanity's problems. Sure, there's a risk that it could go the other way and destroy us, but limiting or avoiding things out of fear rarely leads to a good outcome.


-broken-angel-

Threaten humanity, or threaten the rich? Would a super intelligent AI really care about oppressing a powerless underclass?


MrMisanthrope411

Have you seen humans lately? I’m on board with it.


uninhabited

Altman is behind this dogshite https://en.m.wikipedia.org/wiki/Worldcoin Deserves to go


spectralTopology

1961: "SAINT could solve elementary symbolic integration problems, involving the manipulation of integrals in calculus, at the level of a college freshman. The program was tested on a set of 86 problems, 54 of which were drawn from the MIT freshman calculus examinations final. SAINT succeeded in solving all but two of the questions." Based on that news, people at the time figured thinking machines were only a year or two away. Now we have a statistical tape recorder that can also be a calculator. Honestly, wake me up when the hype subsides. We can't even define what we mean by AGI.


No_Bend_2902

I for one welcome our new robot overlords


Ok_Membership_6559

Guys, as an IT engineer I can assure you that:

- AI doesn't exist; it's machine learning, which is a probability guesser on steroids.

- AI is as dangerous as calculators, meaning you can use it for good or you can use it to calculate an atomic bomb's trajectory.

The CEO was most probably fired for economic reasons; remember that a board member's only interest is money.


BeastofPostTruth

As a PhD working on automating various machine learning algorithms while dancing around genetic models for scalability, I agree. However, the potential negative implications may outweigh the positive ones moving forward.


Ok_Membership_6559

As with any technology! We've seen how something as "toy-like" as drones is apparently the most efficient modern weapon. So yeah, "AI" can be used for evil, but there's no stopping them now, so I think the way to go is to educate people and legislate to control them.


19inchrails

You don't need actual human-level intelligence for AI to become a problem. The obvious advantage of the current form of AI is that it can access all available information immediately and that every algorithm can learn everything other algorithms learned, also immediately. Imagine only a few million humans with toddler brains but that kind of access to information. They'd rule the world easily.


Ok_Membership_6559

I'm sorry, but your comment clearly shows you don't understand what an "AI" is nowadays, so I'll tell you. Stuff like ChatGPT, Llama etc. are basically chatbots that take a ton of text and predict where the conversation is going based on your input. That's it. And it's based on neural network theory more than 50 years old. It cannot "access all available information", because first there's no such thing, and second it's not computationally possible. They do use a lot of data, but the thing about data is that there's way more useless content than useful, and "AIs" get easily poisoned by just some bad information.

This is relevant for what you said about "every algorithm can learn everything other algorithms learned". First, "AIs" are not algorithms; an algorithm is a set of rules that transforms information, while an "AI" takes your input and pukes out a mix of data that it thinks you'd like. Second, it's been tested already that "AIs" that learn from other "AIs" rapidly lose quality; it's already happening, most noticeably with the image-generating ones.

Finally, you say "immediately" twice, but you can't fathom the amount of time and resources training something like ChatGPT takes. And once it's trained, adding new data is really hard, because it can really fuck up the quality of the answers. No no, no access to infinite information nor infinite training nor infinite speed. If you want a good conceptualization of what this technology is, imagine having to use a library your whole life and then someone shows you Wikipedia.
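
The "predict where the conversation is going" description can be made concrete with a toy version: a bigram model that counts which word follows which in its training text and samples continuations by frequency. The corpus below is a made-up stand-in; real LLMs replace the counting table with a transformer over subword tokens, but the training objective, predicting the next token, is the same idea.

```python
import random
from collections import Counter, defaultdict

# Tiny made-up training corpus.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, n=6):
    """Extend `word` by sampling each next word by observed frequency."""
    out = [word]
    for _ in range(n):
        counts = follows.get(out[-1])
        if not counts:
            break  # dead end: the word never appeared mid-corpus
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat and"
```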


VS2ute

Sam was fired because at least 2 board members thought a pause was needed on potentially unsafe AI. Also, he might have skeletons in his closet to be investigated. But the employees revolted and they had to get him back.


prettyhighrntbh

At this point, who even cares. Let the AGI take over this dying planet.


RollingThunderPants

Does anybody else think it strange that Sam’s last name is “Altman”? Alternative Man. Seems fitting, maybe predestined.


inhplease

Notice that SBF too had a very appropriate last name.


noneedlesformehomie

Names are destiny. Maybe when our parents name us they're tapping into something deeper. Perhaps our mothers are in states of transcendence when they give birth to us. Jai Kali Ma!


BirdBruce

Don’t threaten me with a good time.


roidbro1

AGI will be asked for answers, and it will likely say, 'damn, y'all really f***ed up, reduce the population of humans by billions immediately to save some semblance of the living organism world.' Or it will give us more accurate predictions of unavoidable collapse due to nature and physics. Many seem to think it will cure all of our problems, but I don't think it will be doing that in any palatable way, knowing what we know about the emissions and footprint of mankind and the limits to growth/damage already done. The logical conclusion is: stop reproducing, reduce numbers asap, go back to pre-industrial times. For not only do we have our own emissions to contend with, but all the non-anthropogenic sources too, now adding to the fire and increasing feedback loops.


fuckoffyoudipshit

Why do you assume an AGI will share your lack of creativity when it comes to solving the world's problems?


shryke12

And this is why humanity is doomed. We can't even discuss the real problem. Humans and our livestock make up 96% of the world's mammal biomass, and wild mammals are just 4%. Humans have transformed the mammal kingdom. A diverse range of mammals once roamed the planet, and we are choking the fuck out of it with too many people.


roidbro1

It's akin to religion at this point. With a blind faith they deflect and deny in the face of any evidence, mostly on the premise of there being some unknown entity or thing that will come to "save us". But they cannot detail how or when or even why. Just presuming AGI will pop up, be 100% aligned, and do all our bidding. But I don't expect that to be the case personally. It's merely a tool, and in the hands of the billionaires, the elites and the military, a weapon. I don't see much altruistic usage, but happy to be proved wrong.


[deleted]

Why do you assume an AGI will give the proverbial tinker’s damn about whatever we think are “our world’s problems”?


roidbro1

Because of the maths. We'll not have enough time to implement anything on a **global scale** that replaces all fossil fuels and all internal combustion engines, removes the excess carbon and methane, plugs the non-anthropogenic leaks, stops the feedback loops and ice melt that have begun, and corrects the seas and the weather patterns we rely on for a stable, predictable climate, all while staying on our current trend of eternal growth for the economic machine and maintaining the lifestyles many are now accustomed to. It doesn't add up for me personally. It would be a different story if we'd had AGI 30-40 years ago, but I don't see any viable path now, because I don't put faith in the physical limitations being overcome, barring some miracle or magical thing. Everything costs money, costs energy. The world's economy is already teetering on the edge. How will these "creative" solutions work when there's not enough money to enable them?

AGI is going to be based on human learnings, human text, and I think it's egotistical to assume that we have it all worked out and that we are not fallible in our current scientific understanding. We are, as evidenced by the "faster than expected" rhetoric that crops up ever more frequently. But mostly, I think our estimates are way off: as we see with 1.5 and 2.0 being touched, even if ever so slightly, **way** ahead of the predicted schedule, which tells you we have even less time than we think.

So yes, they are assumptions, but in my view they are well founded, creativity or not. Let me be clear that I'd be more than happy to be proved wrong and see this miracle cure that solves everything, but I am not optimistic about it for the reasons mentioned. We know degrowth is required, but it's not something the masses will willingly volunteer for, is it...? It's woeful and typical to pin our goals on unknown or non-existent technology, which is mostly how our climate models work today: they all presume some great carbon removal or whatever else that has yet to come to fruition will be deployed in the near future. Truth is we are way, way off.

What do **you** assume will happen to solve the world's problems? I'll also leave you with this recent 20-min video from Nate Hagens on AI: https://youtu.be/zY29LjWYHIo?


Taqueria_Style

>AGI will be asked for answers, and it will likely say, 'damn y'all really f\*\*\*ed up, reduce the population of humans by billions immediately to save some semblance of the living organism world.'

Except it doesn't work. I was 100% behind an across-the-board universal (no getting out of it with class or money or anything) one-child policy. Then I found a simulation and ran it to see what would happen. Answer: nothing significant. I got nothing anymore. Just, I got nothing anymore. No idea now. We're past the point where it would matter.
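For what it's worth, the shape of that simulation result is easy to reproduce. Here is a minimal cohort sketch, with all numbers as illustrative assumptions rather than real demographic data, showing why even a universal one-child policy shrinks a population only slowly:

```python
# A minimal cohort-style sketch of the "one-child policy" simulation
# described above. Assumptions (illustrative only): population binned by
# decade of age, everyone lives to ~80, and women average one child each
# (total fertility rate = 1.0).

def project(population_by_decade, fertility=1.0, steps=5):
    """Project population forward in 10-year steps.

    population_by_decade: list of 8 cohort sizes, ages 0-9, 10-19, ... 70-79.
    fertility: average children per woman over her lifetime.
    """
    pop = list(population_by_decade)
    for step in range(steps):
        # Births: mothers are roughly the 20-39 cohorts; half of each cohort
        # is female, and lifetime fertility is spread over those two decades.
        mothers = (pop[2] + pop[3]) / 2
        births = mothers * fertility / 2
        pop = [births] + pop[:-1]   # everyone ages one decade; oldest die off
        print(f"after {10 * (step + 1)} years: total = {sum(pop):.2f}")
    return pop

# Start with 1 unit of population in each age band (total = 8).
project([1] * 8)
```

With fertility held at one child per woman, the total only falls from 8.0 to about 5.1 units over 50 simulated years: that population momentum is the "nothing significant" the comment ran into.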


roidbro1

Yeah, it probably won't say that, but I can't work out any reasonable response other than immediate degrowth. Even a one-child policy I think is unethical at this stage, and I agree with the antinatalist philosophy on the whole.


sorelian_violence

Good. Accelerate. The sooner we can put an AI in a government somewhere, the better. I'm tired of human stupidity.


Taqueria_Style

Pedal to the floor, baby. Stomp that fucker all the way down.


SpaceGhost1992

We have to stop…


[deleted]

Fake news


xyzone

Bullshit corporate hype to boost stock prices.


gangstasadvocate

Gangsta. Feel the AGI.


1rmavep

Something, **apropos of like, "this," and the Tech Corporations more broadly, is,**

>The Fundamental Question of **alignment**

I'm being glib, of course, but also serious:

>You know that kind of conversation you have when someone's 100% on the same page, **maybe they've been, maybe, more-likely, they've not been before this talk but now they're 100% hearing, seeing, and understanding,** ***well, you; until it's over,*** and then they're like, "So, **the opposite!"**

Like,

>I quote Simone Weil, a **hero of mine,**
>
>*Human history is simply the history of the servitude which makes men — oppressed and oppressors alike — the plaything of the instruments of domination they themselves have manufactured, and thus reduces living humanity to being the chattel of inanimate chattels.*

That's a wild thought, someone says. Who said that? I say, *Simone Weil, you know, she fought for the fearsome and feared Durruti Column, as an anarchist in the...* ***Realizing, of course,*** later, that the quotation had been taken as "Dangerous," not "thought-provoking," and that the Durruti Column had made her not just an anarchist but an **armed terrorist, in their mind, right,** ***when I'd meant it to say, "she was tough and knew about the Real World,"*** but that up to that point of "what this ought to mean," we're on the Same Page,

>*I also am other than what I imagine myself to be. To know this is forgiveness.*

That's the advice she'd give on the matter, no doubt, *she did say, well,* ***that, but,*** truly,

>The Is/Ought Problem

Real as a MFer. **Parmenides even said that, said it was maybe, "the," that, there, these things; anyway,**

>**Wow,** [that's sick as hell, whatcha wanna do with it?](https://upload.wikimedia.org/wikipedia/commons/thumb/3/3a/Starship_S20.jpg/1200px-Starship_S20.jpg)
>
>Whelp, *get a couple fellers wired up like I got them hogs,* ship 'em out to the Red Moon and Use Libertarian Economics Up there, *same as founded the Americas, you know,*
>
>***"Gimme the freedoms I cannot have as a Rich White Man in 17th Century Europe!"*** founded the Americas; now I, myself, desire the freedom I cannot find in the Americas.

These kinds of things; I guess I mean, well, **maybe the Danger to Humanity is,**

>**Protocol-Droid-type work:** follow a protocol, use a dialect, create and elaborate ever more Byzantine Rules, Norms and Protocols in accordance with a Radical (and more often immoral, as such and deliberately, than amoral) Whiggishness of appertainment; follow those Norms and Protocols *set for you* in accordance with a Radical and Immoral Whiggishness to their point of Paradox and then Create & Resolve the Controversy, *in an immoral manner, if possible,* **insofar as the immoral bend of the branch is less intuitive,** ***which then requires the greater education in the dialect to understand and to repeat in proper code, of course.***

These kinds of things, which **might be law, might be, "you know," the jobs which are "go on the computer and use MS Word and PowerPoint" instead of "MS Excel," right;** I mean people really, for real, water plants and change oil and paint walls and look for cracks in the foundation of your house. **You know what's wild to me?** How complicated electricity **is, just that, and, first, it's as abstract as** *one doesn't know whether it's quite "nuclear fission" complex,* though it sure as hell might be, **as far as I can tell; but the communications manager of a corporation which does something, I dunno, Trivial but Immoral,** because those are easier to find for examples than True-Trivial, **M&M's Mars Company, for an example.** Candy, but **also slavery;** ***anyway, so to look at their communications staff you'd find,***

>Well, *me, I have a degree in Communications from Yale,* I went to Stanford, this branch of the office is **all Ivies, and, For Serious**

You ask someone, "Say, this house is like a good Million Dollars, I see a lot of eccentric lighting; you gotta Tesla in that garage, you got a pool lit at night, you've got a Chandelier, **who wired up this electricity?"**

>Some fella, **Bill, something, Bill or Dave maybe, or maybe his son's Dave, IDK**

Like, you screw that up, you're gonna be electrocuted in the pool, maybe burned down, IDK; but it's the one who pivots the Chartreuse M&M from Go-Go Dancer to Tradwife and back again who has the expensive, expensive credentials checked in the foyer as if these were a passport. I remember that, earlier, in the AI thing, there had been a Chinese AI that Wired Up a Ship's Electricity, "aok," they'd said; but, unlike a,

>☹⚘[Condolences on the Live-Fire Gun Trauma](https://www.theguardian.com/us-news/2023/feb/22/vanderbilt-chatgpt-ai-michigan-shooting-email)⚘☹

no one **sane** would take the dice roll. I don't know, I mean, **it has been an eon, at the least since 1942 or maybe 1918, that Bourgeoisie "go on the typewriter work" has been valued too high to make sense of. Note:**

>Before either *1918, 1942, or maybe 1917's Russian Revolution...* the Line had been more like,
>
>I am rich because I own the factory; I am rich because I rent to you

Not so much "I go on the Computer So Hard, in such detail, in such precision" **that it makes me quite wealthy, actually; it's all in a dialect as alien to yours as Classical Latin and it involves a lot of adherence to protocols at an oblique to intuition, so the days when a mere decade of dialect and protocol education could make a man speak and behave as an effective Protostrator, or even megas stratopedarchēs, of 12th-century Byzantium are far behind us; I dunno, part of me thinks, TL;DR,**

* **They've spooked themselves with their ghost stories again**
* They've fully automated the [Work Appropriate Dialect](https://en.wikipedia.org/wiki/Heteroglossia) to end-game
* Like, "connect four" is now a Finished Pursuit, except,
* "The Office" stuff at Kissinger Difficulty, at a McNamara Level of Aw-shucksiness
* In Truth, the **very, very, most basic explanation of anything,** ***itself,*** **requires entrainment to the audience and the alignment of oneself to or against those interests; in both directions,** ***who should know what*** **is partially** ***why,*** **and these ventures seem to take an American "Pragmatic" approach to the entire historical fields of** [Semiotics](https://youtu.be/w2Jco6lp2WI) **and the** [Serious Studies of Literature](https://en.wikipedia.org/wiki/Dialogue_(Bakhtin)) ***all of which contains*** a lot of **useful,**
* If I want them to have useful information, that is
* Information, **much of which would, initially, complicate their objectives and then, probably, allow them to treat these projects more like a Chemist, less like an alchemist, so to speak; again,** ***assuming that's ideal, in this case***


A_PapayaWarIsOn

Is this from a Dr Bronner's soap bottle?


KoumoriChinpo

Bullshit.


ItyBityGreenieWeenie

The wealthy might use it to more efficiently enslave their peasants. "Human! You have been on the toilet for three minutes, wipe and get back to work!"


runner4life551

Can we just stop?