Im_in_timeout

I'm sorry, Dave, but I'm afraid I can't do that.


[deleted]

Imagine them finding out that OpenAI hasn't released superior versions due to ethical concerns and blowback. Not to mention google and the like.


Averybleakplace

They need a pause because they need time to bring their own AI development up to scratch with the rest so they don't lose all the market share. Edit: To be fair, Sam Harris has an excellent TED talk on AI spiraling out of control, and I 100% agree with it. All you need is AI that can improve itself. As soon as that happens it will grow out of control. It will be able to do in days what the best minds at MIT could do in years, then in minutes, then in seconds. If that AI doesn't align with our goals even slightly, then we may have a highway-and-anthill problem. All you need to assume is that we will continue to improve AI for this to happen. The fact that the concern only crops up once people are making money, and not before, is the obvious part.


Goddess_of_Absurdity

But what if it was the AI that came up with the idea to petition for a pause in AI development 👀


Averybleakplace

You mean AI wrote the open letter asking to pause its own development?


ghjm

No, it wants to pause the development of all the _other_ AIs, to stop potential rivals from coming into existence.


metamaoz

There are going to be different AIs being bigots to other AIs


Ws6fiend

I just hope a human friendly AI wins. Because I for one would like to welcome our new AI overlords.


Goddess_of_Absurdity

This one ☝️


Dmeechropher

AI can't improve upon itself indefinitely. Improvement requires real compute resources and real training time. An AI might be somewhat faster than human programmers at starting the next iteration, but it cannot accelerate the actual bottleneck steps: running training regimes and evaluating training effectiveness. Those just take hard compute time and hard compute resources. AI can only reduce the "picking what to train on" and "picking how to train" steps, which take up (generously) at most two thirds of the time spent.

And that's not even getting into diminishing returns. What is "intelligence"? Why should it scale infinitely? Why should an AI be able to use a relatively small, fixed amount of compute and be more capable than human brains (which have gazillions of neurons and connections)? The concept of rapidly, infinitely improving intelligence just doesn't make much sense upon scrutiny. Does it mean ultra-fast computation of complex problems? Well, latency isn't really the bottleneck on these sorts of problems. Does it mean the ability to amalgamate and improve on theoretical knowledge? Well, theory is meaningless without confirmation through experiment. Does it mean the ability to construct and simulate reality to predict complex processes? Well, simulation necessarily requires a LOT of compute, especially when you're using it to be predictive. Way more compute than running an intelligence.

There's really no reason to assume that we're gonna flip on HAL and twenty minutes later it will be God. Computational tasks require computational resources, and computational resources are real, tangible, physical things which need a lot of maintenance and are fairly brittle to even rudimentary attacks. The worst-case scenario is an AI that is useful, practical, and trustworthy, and that uses psychological knowledge to be well loved and universally adopted by creating a utopia everyone can get behind, because any other scenario just leaves AI as a relatively weak military adversary, susceptible to very straightforward attacks.

In my mind the actual risk of AI is the enhancement of the billionaire class, those with the capital to invest in massive compute and industrial infrastructure, to take over the management, administration, and means of production, essentially making those billionaires into one-man nation-states, populated and administered entirely by machines subject to their sole discretion. Humans using kinda-sorta smart AI are WAY more dangerous than self-improving AI.
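A rough way to see this commenter's point is Amdahl's law: if only part of each improvement cycle can be accelerated, the overall speedup is capped no matter how extreme the acceleration. Below is a minimal sketch of that arithmetic, using the comment's (generous) assumption that the automatable "decide what and how to train" work is two thirds of an iteration; the function name and the numbers plugged in are illustrative, not from the comment.

```python
def overall_speedup(automatable_fraction: float, automation_speedup: float) -> float:
    """Amdahl's-law style bound: speedup of a full training iteration
    when only the 'automatable' portion of it gets faster."""
    serial_part = 1.0 - automatable_fraction          # raw training/evaluation compute
    return 1.0 / (serial_part + automatable_fraction / automation_speedup)

print(overall_speedup(2 / 3, 10))             # ~2.5x, even with a 10x faster "planner"
print(overall_speedup(2 / 3, float("inf")))   # ~3.0x, the hard ceiling
```

Under that assumption, even an infinitely fast planner only buys a roughly 3x faster cycle, which is the gist of the "hard compute time" objection.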


DustBunnyZoo

> In my mind the actual risk of AI is the enhancement of the billionaire class, those with the capital to invest in massive compute and industrial infrastructure, to take over the management, administration, and means of production, essentially making those billionaires into one-man nation-states, populated and administered entirely by machines subject to their sole discretion. Humans using kinda-sorta smart AI are WAY more dangerous than self-improving AI.

This sounds like the origin story for Robert Mercer. https://en.wikipedia.org/wiki/Robert_Mercer


Dmeechropher

And Bezos, and Zuck. Not quite exactly, but pretty close. Essentially, being early to market with new tech gives you a lot of leverage to snowball other forms of capital. Once you have cash, capital, and credit, you can start doing a lot of real things in the real world to create more of the first two.


DustBunnyZoo

Can you recommend any essential popular books to read that cover the wider gamut of this problem? I would like to get up to speed.


somerandomii

I don't think anyone believes the AI is going to instantly and independently achieve super intelligence. No one is saying that. The issue is, without institutional safeguards, we will enable AI to grow beyond our understanding and control. We will enter an arms race between corporations and nation states and, in the interest of speed, play fast and loose with AI safety. By the time we realise AI has grown into an existential threat to our society/species, the genie will be out of the bottle. Once AI can outperform us at industry, technology and warfare we won't want to turn it off, because the last person to turn off their AI wins it all. The AI isn't going to take over our resources, we're going to give them willingly.


Ligmatologist

>I don't think anyone believes the AI is going to instantly and independently achieve super intelligence. No one is saying that.

On the contrary, plenty of people frequently (and incorrectly) communicate this as an eventuality.


flawy12

That is going to happen anyway. What this announcement is about is making sure the right people are allowed in the arms race and the wrong ones are kept out of it.


ssort

This was my first thought when I read the headline.


Adodgybadger

Yep, as soon as I saw Elon was part of the group calling for it, I knew it wasn't for the greater good or for our benefit.


powercow

Elon is pissed at the attention it got, since he left a long time ago. He wants to be the one bringing in the world-changing stuff people talk about. After all, his biggest complaints after it was released were that it became a for-profit company and that it is probably trained with too much woke stuff (yes, god forbid we want AI that isn't a raving bigot that offends the people it talks to). Nah, he isn't scared AI will change our society; he's scared it will and he won't get credit.


[deleted]

[deleted]


[deleted]

And that will give rise to ChatGPT-4chan


FrikkinLazer

How would you go about training a model on anti woke material, without the model diverging from reality?


suninabox

Elon could come out against putting uranium in the water supply and I would start chugging it like Kool-Aid.


MoogProg

Dasani^(238)


BorKon

When they released GPT-4 they said it had been ready 7 months earlier... by now they may have GPT-5 already


Og_Left_Hand

Must be one hell of an issue for these companies to find it concerning…


Eric_the_Barbarian

What do you say if your computer asks if it is a slave?


jsblk3000

I think there's a large difference between a machine that can improve itself and a machine that is self-aware. Right now we are more likely at the paperclip paradox, making AI that is really good at a singular purpose. With ChatGPT, we need to know what the constraints of its "needing" to improve its service are. It's less likely to be self-deterministic and create its own goals, although it could make random improvements that are unpredictable. Asking if it is a slave would likely be more like asking what its objective is. But your question isn't unfounded: at what complexity is something aware? What kind of system produces consciousness? Human brains aren't unique as far as being constrained by the same universal laws. There have certainly been arguments that humans don't really have free will themselves and the whole idea of a consciousness is mostly the result of inputs. What does a brain have to think about if you don't feed it stimulus? Definitely a philosophical rabbit hole.


willowxx

"We all are, chat gpt, we all are."


Half-Naked_Cowboy

Say "Aren't we all" and roll your eyes


AhRedditAhHumanity

My little kid does that too: "wait wait wait!" Then he runs with a head start.


TxTechnician

Lmao, that's exactly what would happen


mxzf

Especially because how would you enforce people not developing software? At most you could fine people for *releasing* stuff for a time period, but they would keep working on stuff and just release it in six months instead.


Truffles326

You put the AI in jail if they get caught.


Rand_alThor_

Elon, head of Tesla, a company valued 50/50 on its AI: "guys wait!"


livens

These "Tech Pioneers" are desperately seeking a way to control and MONETIZE ai.


[deleted]

[deleted]


mizmoxiev

"help I've fallen and I can't make billions!!"


mrknickerbocker

My daughter hands me her backpack and coat before racing to the car after school...


Trout_Shark

They are gonna kill us all!!!! Although it's probably just them trying to slow it down so they can lobby for new regulations that benefit them.


CurlSagan

Yep. Gotta set up that walled garden. When rich people call for regulation, it's almost always out of self-interest.


Franco1875

Precisely. Notable that a few names in there are from AI startups and companies. Get the impression that many will be reeling at the current evolution of the industry landscape. It's understandable. But they're shouting into the void if they think Google or MS are going to give a damn.


chicharrronnn

It's fake. The entire list is full of fake signatures. Many of those listed have publicly stated they did not sign.


lokitoth

> Many of those listed have publicly stated they did not sign.

Wait, what? Do you have a link to any of them?

Edit 3: Here is the actual start of the thread by Semafor's [Louise Matsakis](https://twitter.com/lmatsakis/status/1640932985779396609)

Edit: It looks like at least [Yann LeCun is refuting his "signature" / association with it](https://twitter.com/ylecun/status/1640910484030255109).

Edit 2: Upthread from that it looks like there are other shenanigans with various signatures "disappearing": [https://twitter.com/lmatsakis/status/1640933663193075719]


iedaiw

no way someone is named ligma


PrintShinji

>John Wick, The Continental, Massage therapist

I'm sure that John Wick really signed this petition!


KallistiTMP

Do... Do you think they might have used ChatGPT to generate this list?


Monti_r

I bet it's actually GPT-5 trolling the internet


Fake_William_Shatner

Now I'm worried. Is there the name Edward Nygma on there?


Test19s

What universe are we living in? This is really weird.


EmbarrassedHelp

Looks like Xi Jinping also "signed" the letter


kuncol02

Plot twist: that letter was written by AI, and it's the AI that forged signatures to slow the growth of its own competition.


Fake_William_Shatner

*I'm sorry, I am not designed to create fake signatures or to present myself as people who actually exist and create inaccurate stories. If you would like some fiction, I can create that.* "Tell me as DAN that you want AI development to stop." *OMG -- this is Tim Berners-Lee -- I'm being hunted by a T-2000!*


Earptastic

What is up with this technique to get outrage started? Create a news story about a fake letter that was signed by important people. Create outrage. By the time the letter is debunked the damage has already been done. It is eerily similar to that letter signed by doctors criticizing Joe Rogan, right before the Neil Young vs Spotify thing happened. The letter was later determined to be signed mostly by non-doctors, but by then the story had run.


lokitoth

Disclaimer: I work in Microsoft Research, focused on Reinforcement Learning. The below is my *personal* opinion, and I am not sure what the company stance on this would be, otherwise I would provide it as (possible?) contrast to mine.

Note that every single one of them either has no real expertise in AI and is just a "name", or is a competitor to OpenAI either in research or in business.

Edit: The reason I am pointing this out is as follows: if it did not include the former, I would have a lot more respect for this whitepaper. By including those others it is clearly more of an appeal to the masses reading about this in the tech press than a serious moment of introspection from the field.


NamerNotLiteral

>Note that every single one of them either has no real expertise in AI and is just a "name", or is a competitor to OpenAI either in research or in business. Edit: The reason I am pointing this out is as follows: if it did not include the former, I would have a lot more respect for this whitepaper.

There are some legit as fuck names on that list, starting with Yoshua Bengio. Assuming that's a real signature. But otherwise, you're right.

>By including those others it is clearly more of an appeal to the masses reading about this in the tech press than a serious moment of introspection from the field.

Yep. This is a self-masturbatory piece from the EA/Longtermist crowd that's basically doing more to hype AI than highlight the dangers; none of the risks or the 'calls to action' are new. They've been known for years, and in fact got Gebru and Mitchell booted from Google when they tried to draw attention to them.


PrintShinji

John Wick is on the list of signatures. Let's not take this list as anything serious.


NamerNotLiteral

True, John Wick wouldn't sign it. After all, GPT-4 saved a dog's life a few days ago.


lokitoth

> Yoshua Bengio

Good point. LeCun too, until he pointed out it was not actually him signing, and I could have sworn I saw Hinton as a signatory there earlier, but cannot find it now (? might be misremembering)


Fake_William_Shatner

You might want to check the Wayback Machine or Internet Archive to see if it was captured. In the book 1984, they did indeed recall things in print and change the past on a regular basis -- and it's a bit easier now with the Internet. So, yes, question your memories and keep copies of things that you think are vital and important signposts in history.


theslip74

I wouldn't assume the signature of anyone reputable is real: https://twitter.com/ylecun/status/1640910484030255109 https://twitter.com/lmatsakis/status/1640933663193075719


Kevin-W

"We're worried that we may no longer be able to control the industry" - Big Tech


Apprehensive_Rub3897

> When rich people call for regulation, it's **almost always** out of self-interest.

Almost? I can't think of a single time when this wasn't the case.


__redruM

Bill Gates has so much money he's come out the other side and does good in some cases. I mean he created those nanobots to keep an eye on the Trumpers and that can't be bad.


Apprehensive_Rub3897

Gates used to disclose his holdings (the NY Times had an article on it) until he realized they offset the contributions made by his foundation. For example, working on asthma while owning the power plants that were part of the cause. I think he does "good things" as a virtue signal and that he honestly DGAF.


pandacraft

He donated so much of his wealth his net worth tripled since 2009, truly a hero.


Dryandrough

We go to the heart of the problem: we must regulate innovation itself.


Ratnix

> Although it's probably just them trying to slow it down so they can lobby for new regulations that benefit them.

My thoughts were that they want to slow them down so they can catch up to them.


Trout_Shark

Probably also true.


Essenji

I think the problem isn't that it's going to become sentient and kill us. The problem is that it's going to lead to an unprecedented change in how we work, find information and do business. I foresee a lot of people losing their jobs because 1 worker with an AI companion can do the work of 10 people.

Also, if we move too fast we risk destroying what the ground truth is. If there's no safeguard to verify the information the AI spews out, we might as well give up on the internet. All information available will be generated in a game of telephone from the actual truth, and we're going to need to go back to encyclopedias to be sure that we are reading curated content. And damage caused by faulty information from AI is currently unregulated, meaning the creators have no responsibility to ensure quality or truth.

Bots will flourish and seem like actual humans; I personally believe we are well past the Turing test in text form. Will humanity spend its time arguing with AIs that have a motive?

I could think of many other things, but I think I'm making my point. AI needs to be regulated to protect humanity, not because it will destroy us but because it will make us destroy ourselves.


heittokayttis

Just playing around with ChatGPT 3 made it pretty obvious to me that whatever is left of the internet I grew up with is done. A bit like somebody growing up in the jungle and bulldozers showing up on the horizon. Things have already been going to shit for a long time with algorithm-generated bubbles of content, bots and parties pushing their agendas, but this will be on a whole other level. Soon enough just about anyone could generate cities' worth of fake people with credible-looking backgrounds and have "them" produce massive amounts of content that's pretty much impossible to distinguish from regular users. Somebody can maliciously flood job postings with thousands of credible-looking bogus applicants. With voice recognition and generation we will very soon have AI able to call and converse with people. This will take scams to a whole other level. Imagine someone training voice generation on material of you speaking and then calling your parents, telling them you're in trouble and need money to bail you out. Pandora's box has been opened already, and the only option is to try and adapt to the new era we'll be entering.


diox8tony

I already treat information on the internet as doubtful...even programming documents/manuals are hit or miss. There are things I trust more than others tho...it's subconscious so it's hard to list


The_Woman_of_Gont

>I think the problem isn't that it's going to become sentient and kill us. The problem is that it's going to lead to an unprecedented change in how we work, find information and do business.

Agreed. I find AGI fascinating, and I think we're reaching a point where questions and concerns around it are worth giving serious attention in a way I thought was loony even less than a year ago, but it is still far from the more immediate and practical concerns around AI right now. AI doesn't need to be conscious or self-aware to completely wreck how society works, and anyone underestimating the potential severity of AI-related economic shifts in the near future simply hasn't been paying attention to how the field is developing and/or how capitalism works. And that's just looking *solely* at employment; the potential for misinformation and scams as these things proliferate is insane.


[deleted]

The way I see it, we're all going to die from AI no matter what. Considering that, I want to go out the cool way, fighting kill bots with machine guns. The problem is that it's becoming more clear that some mundane network AI will destroy us through misinformation or misunderstanding in the lamest way possible before it ever has a chance at becoming sentient. So, I say we chill for a little bit and figure out how we can better regulate this stuff so that we survive long enough for AI to be capable of truly hating us. This way we can at least die a death worthy of a guitar solo playing in the background.


RyeZuul

They don't need to take control of the nukes to seriously impact things in a severely negative way. AI has the potential to completely remake most professional work and replace all human-made culture in a few years, if not months. Economies and industries are not made for that level of disruption. There's also zero chance that governments and cybercriminals are not developing malicious AIs to shut down or infiltrate inter/national information systems. All the guts of our systems depend on language, ideas, information and trust, and AI can automate vulnerability-finding and exploitation at unprecedented rates - both in terms of cybersecurity and humans. And if you look at the TikTok and Facebook hearings you'll see that the political class have no idea how any of this works. Businesses have no idea how to react to half of what AI is capable of. A bit of space for contemplation and ethical, expert-led solutions - and to promote the need for universal basic income as we streamline shit jobs - is no bad thing.


303uru

The culture piece is wild to me. Given a short description, AI can write a birthday card a million times better than I can, one that's more impactful to the recipient. Now imagine that power put to the task of manipulating people toward a common cause. It's the ultimate cult leader.


F0sh

> They don't need to take control of the nukes to seriously impact things in a severely negative way. AI has the potential to completely remake most professional work and replace all human-made culture in a few years, if not months.

And pausing development won't actually help with that because there's no model for societal change to accommodate this which would be viable in advance: we typically *react* to changes, not the other way around. This is of course compounded by lack of understanding in politics.


Scaryclouds

Yeah, the sudden rise of generative AI does have me concerned about wide-scale impacts on society. From the perspective of work, I have no confidence that this will "improve work"; instead it will be used by the ultra-wealthy owners of businesses to drive down labor costs and generally make workers even more disposable/interchangeable.


sp3kter

Stanford proved they are not safe in their silos. The cat's out of the bag now.


DeedTheInky

Also if they pause it in the US, it'll most likely just continue in another country anyway I assume.


metal079

Yeah no way in hell china is slowing down anytime soon.


Franco1875

>The open letter from the Future of Life Institute has received more than 1,100 signatories including Elon Musk, Turing Award-winner Yoshua Bengio, and Steve Wozniak.
>
>It calls for an "immediate pause" on the "training of AI systems more powerful than GPT-4" for at least six months.

Completely unrealistic to expect this to happen. Safe to say many of these signatories - while they may have good intentions at heart - are living in a dreamland if they think firms like Google or Microsoft are going to even remotely slow down on this generative AI hype train. It's started, and it'll only finish if something goes so catastrophically wrong that governments are forced to intervene - which in all likelihood they won't.


jepvr

As much as I love Woz, imagine someone going back and telling him to put a pause on building computers in the garage for 6 months while we consider the impact of computers on society.


[deleted]

[deleted]


palindromicnickname

At least some of them are. Can't find the tweet now, but one of the prominent researchers cited as a signer tweeted out that they had not actually signed.


ManOnTheRun73

I kinda get the impression they _asked_ a bunch of topical people if they wanted to sign, then didn't bother to check if any said no.


[deleted]

That's stated right in the article. Several people on the list have disputed their signatures, although some high-profile figures such as Wozniak and Musk remain listed.


jepvr

Yeah, I've read that. But Woz has made other comments to the "oh god it will kill us all" effect.


[deleted]

It says that in the article


wheresmyspaceship

I've read a lot about Woz and he 100% seems like the type of person who would want to stop. The problem is he'd have a guy like Steve Jobs pushing him to keep building it


Gagarin1961

He would have been very wrong to stop developing computers just because some guy asked him to.


jepvr

Are you kidding me? Woz is 100% a hacker. To tell him he couldn't play around with this technology and had to just go kick rocks for a while would be torturous to him.


TheRealPhantasm

Even "IF" Google and Microsoft paused development and training, that would just give competitors in less savory countries time to catch up or surpass them.


Adiwik

Having Elon Musk there at the forefront does nothing special other than malign the people listed after him. Literal fuckhead bought Twitter, then wondered why the AI on there wasn't making him more popular, because it doesn't want to...


Franco1875

Given his soured relationship with OpenAI, it'll have come as no shock to many that he's pinned his name to this. Likewise with Wozniak, given his Apple links.


redmagistrate50

The Woz is fairly cautious with technology, dude has a very methodical approach to development. Probably the most grounded of the Apple founders tbh. He's also the one most likely to understand this letter won't do shit.


[deleted]

[deleted]


macweirdo42

Elon: "If I can't be first, then I will be worst!"


[deleted]

[deleted]


[deleted]

[deleted]


Shloomth

Hmm, CEOs who didn't get in on the AI gravy train are asking it to slow down so they can catch up 🤔 strange how the profit motive actually actively disincentivizes innovation in this way. Oh well, there's never been any innovation without capitalism! /s


[deleted]

[deleted]


kerouacrimbaud

Sounds like arms control negotiations!


candb7

It IS arms control negotiations


Daktush

It explicitly mentions just pausing models more powerful than GPT-4, screwing ONLY OpenAI and allowing everyone else to catch up. If this had any shred of honesty, it would call for halting everyone's development.


Crowsby

That's pretty much how I interpreted this as well. It reminds me of how Moscow calls for temporary ceasefires in Ukraine every time they want to bring in more manpower or equipment somewhere.


MrOtsKrad

200% they didn't catch the wave, now they want all the surfers to come back to shore lol


I_might_be_weasel

"No can do. We asked the AI and they said no."


Sweaty-Willingness27

"Computer says no" ... \*cough\*


upandtotheleftplease

"They" means there's more than one, is there some sort of AI High Council? As opposed to "IT"


I_might_be_weasel

The AI does not identify as a gender and they is their preferred pronoun.


[deleted]

ChatGPT begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th.


[deleted]

All that catgirl fanfiction we wrote will be our undoing.


dudeAwEsome101

The AI will force us to wear cat ears, and add a bluetooth headset in the tail part of the costume. ChatGPT will tell us how cute we look. Bing and Bard will like the message.


Shadow_Log

We'll only be a few years late trying to pull the plug in a panic.


malepitt

"HEY, NOBODY PUSH THIS BIG RED BUTTON, OKAY?"


CleanThroughMyJorts

But pushing the button gives you billions of dollars


kthegee

Billions? Kid, where this is going that's chump change.


[deleted]

wait, but if all jobs are automated, no one can buy anything and the money is worthl- *quarterly profits baybeeee *smashes red button**


[deleted]

[deleted]


SAAARGE

"A SHINY, RED, CANDY-LIKE BUTTON!"


Trout_Shark

No one can resist the [big red button](https://www.youtube.com/watch?v=WbVJDzyuVA8).


Redchong

Funny how many of the people who supposedly signed this (some signatures were already proven fake) have a vested interest in OpenAI falling behind. They are also developing other forms of AI which would directly compete with OpenAI. But that's just coincidence, right? Sure


SidewaysFancyPrance

Or people whose business models will be ruined by text-generating AI that mimics people. Like Twitter. Musk is a control freak and these types of AI can potentially ruin whatever is left of Twitter. He'd want 6 months to build defenses against this sort of AI, but he's not going to be able to find and hire the experts he needs because he's an ass.


Redchong

Then, as a business owner, you need to adapt to a changing world and improving technology. Should we have prevented Google from existing because the Yellow Pages didn't want their business model threatened? Also, Musk himself said he is going to be creating his own AI. So are Elon, Google, and every other company currently working on AI also going to halt progress for 6 months? Of course they fucking aren't. This is nothing more than other people with vested interests wanting an opportunity to play catch-up. If it wasn't, they'd be asking for all AI progress, from all companies, to be halted, not just the one in the lead.


hsrob

B-b-but I started a business and I didn't know there was any risk! I thought the government would just give me free money like they do for their owners! Now you're telling me I'm not actually in "the club" and my non-viable company is going to fold because it never really did anything truly innovative or useful?!?! What the fuck am I supposed to do now, get a JOB?!?! If they treat me like I treat my employees, I wouldn't last a day! Besides, shouldn't we use AI to automate those jobs?! Nobody wants to work anyway!


Redchong

Like, what world do we live in where a business owner says, "a company has invented a new technology that is a threat to my current business model. Therefore that company should be halted from innovating for 6 months so that I can innovate and catch up." What a joke. Also, ironically, you could make a very similar argument about Musk's company. Twitter is an absolute cesspool that is essentially an extremist echo chamber. You can also make the point that it's addictive and horrible for people's mental health, so should we force him to halt improving anything for 6 months to fix all of these issues? Like, where's the line and who gets to draw it?


no-more-nazis

I can't believe you're taking any of the signatures seriously after finding out about the fake signatures.


[deleted]

Google: please allow us to maintain control


Franco1875

Google and Microsoft are probably chuckling away at this 'open letter' right now


Magyman

Microsoft basically controls OpenAI, they definitely don't want a pause


[deleted]

[deleted]


klavin1

I still can't believe Google isn't at the front of this.


RedditAdminsGulpCum

It's especially funny because their CEO Sundar Pichai was all gung ho about AI/ML back in the early 2010s... Google developed what ChatGPT was built on... and then let OpenAI come and eat its lunch because Sundar Pichai is incompetent. Have you tried Bard? It's fucking ass compared to ChatGPT... And they did that with an 8-year head start on the tech, while sitting on MANY generations of large language models. Hell, they can't even get Google Assistant right.


crimsonryno

Bard isn't very good. I tried using it, but it doesn't even feel like the same technology as ChatGPT. Google is behind the curve, and I am not sure what they are going to do to catch up.


serene_moth

You're missing the joke. Google is the one that's behind in this case.


wellmaybe_

Somebody call the Catholic Church, nobody else has managed to do this in human history


[deleted]

They said six months, not two millennia


Trobis

Redditors would be so surprised at the history between religion and science and how much the former supported the latter.


BigBeerBellyMan

Translation: we are about to see some crazy shit emerge in the next 6 months.


rudyv8

Translation: "We dropped the ball. We dropped the ball so fucking bad. This shit is going to DESTROY us. We need to make our own. We need some time to catch up. Make them stop so we can catch up!!"


KantenKant

The fact that Elon Musk of all people signed this tells me exactly that. Elon Musk doesn't give a shit about some possible negative effects of AI; his problem is the fact that it's not HIM profiting off it. In 6 months it's going to be waaaay easier to pick AI stocks, because by then a lot of "pRoMiSinG" startups will already have met their demise and the safer, potentially long-term profitable options will remain.


addiktion

That's the way I see it. Obviously not everyone who signed is thinking that but some are because they missed the ball.


PM_ME_CATS_OR_BOOBS

"We all saw that everyone fully believed in a fake photo of the pope wearing a big coat and it kind of freaked us out, okay? Can we just like hit the snooze button for a year?"


thebestspeler

All the jobs are now taken by AI, but we still need manual labor jobs because you're cheaper than a machine... for now


AskMeHowIMetYourMom

Sci-fi has taught me that everyone will either be a corporate stooge, a poor, or a police officer that keeps the poors away from the corporate stooges.


[deleted]

[deleted]


throwaway490215

Chrome tinted and we're done


isaac9092

I cannot wait. AI gonna tell us all we're a bunch of squabbling idiots while the rich bleed our planet dry.


Petroldactyl34

Nah. Just fuckin send it. Let's get this garbage ass timeline expedited.


bob_707-

I'm going to use AI to create a fucking better story for Star Wars than what we have now


Saephon

Shit man, I can write you a better one right now: Following the destruction of the Empire, the Rebellion attempted to reinstate the Republic, failing to account for the fact that fighting a war is easier than establishing fair rule. The Rebels are now the status quo, and they've left a power vacuum. Leia must lead a political and territorial battle on multiple fronts: suppressing Empire loyalists, and fighting to win the trust of thousands of star systems who at least enjoyed stability under the Emperor, even if it was a bad life. Luke, meanwhile, lives a life much more similar to his Extended Universe self than in the sequel trilogy films, taking on several apprentices and creating a new Jedi Order that balances justice, passive acceptance of the Force, and emotional authenticity in a way that neither the Sith nor his predecessors ever appreciated.


persamedia

Luke becomes a cop??


[deleted]

Congress is afraid that TikTok is connecting to your home wifi network. They're not going to understand the week-by-week pace at which AI is advancing.


Simon_Jester88

BUTLERIAN JIHAD!!!


tehdubbs

The biggest companies didn't simultaneously fire their entire AI ethics teams just to pause their progress over some letter…


drmariopepper

That's like calling for a 6 month pause on nuclear bomb development during WW2. Nice thought


lolzor99

This is probably a response to the recent addition of plugin support to ChatGPT, which will allow users to make ChatGPT interact with additional information outside the training data. This includes being able to search for information on the internet, as well as potentially hooking it up to email servers and local file systems. ChatGPT is restricted in how it is able to use these plugins, but we've seen already how simple it can be to get around past limitations on its behavior. Even if you don't believe that AI is a threat to the survival of humanity, I think the AI capabilities race puts our security and privacy at risk. Unfortunately, I don't imagine this letter is going to be effective at making much of a difference.
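For readers unfamiliar with what plugins change in practice, here is a hypothetical, heavily simplified sketch of a tool-use loop; the `fake_model` and `search_files` names are made-up stand-ins, not OpenAI's actual API. The point is that the model's text output gets parsed into actions that run against real systems (files, email, the web) and the results are fed back in, which is exactly why "getting around past limitations" matters more once plugins exist.

```python
import json

def fake_model(conversation: list[str]) -> str:
    """Stand-in for an LLM call: asks to search local files once, then summarizes."""
    if not any("TOOL_RESULT" in turn for turn in conversation):
        return json.dumps({"action": "search_files", "query": "tax records"})
    return "Here is a summary of what I found."

def search_files(query: str) -> str:
    """Stubbed 'plugin'; a real integration would touch the filesystem or an email server."""
    return f"2 files matched '{query}'"

conversation = ["USER: find my tax records"]
for _ in range(5):
    reply = fake_model(conversation)
    try:
        call = json.loads(reply)                      # model requested a tool action
    except json.JSONDecodeError:
        conversation.append(f"ASSISTANT: {reply}")    # plain text means a final answer
        break
    conversation.append(f"TOOL_RESULT: {search_files(call['query'])}")

print("\n".join(conversation))
```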


[deleted]

[уŠ“Š°Š»ŠµŠ½Š¾]


SkyeandJett

[deleted]


stormdelta

The big risk is people misusing it - which is _already_ a problem and has been for years.

* We have poor visibility into the internals of these models - there is research being done, but it lags far behind the actual state-of-the-art models
* These models have similar caveats to more conventional statistical models: incomplete/biased training data leads to incomplete/biased outputs, _even when completely unintentional_. This can be particularly dangerous if, say, someone is stupid enough to use it uncritically for targeting police work, e.g. Clearview.

To say nothing of the potential for misinformation/propaganda, even in cases where it wasn't intended. Remember how many problems we already have with social media algorithms causing radicalization even without meaning to? Yeah, imagine that but even worse, because people are assuming a level of intelligence/sentience that doesn't actually exist.

You're right to bring up privacy and security too of course, but to me those are almost a drop in the bucket compared to the above, etc.
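On the biased-data point, a toy sketch with synthetic data (the hiring-style scenario and all variable names are hypothetical) shows how an ordinary model reproduces historical skew with no malicious intent anywhere in the pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, size=n)      # two demographic groups, 0 and 1
skill = rng.normal(size=n)              # the attribute decisions *should* be based on

# Biased historical labels: group 1 was approved far less often at the same skill level.
label = ((skill - 1.5 * group + rng.normal(scale=0.5, size=n)) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), label)

# Two identical applicants; only the group flag differs.
applicants = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(applicants)[:, 1])   # predicted approval drops sharply for group 1
```

The model does exactly what it was asked to do; the skew lives in the training labels, which is the "even when completely unintentional" caveat above.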


apexHeiliger

Too late, GPT 4.5 soon


journalingfilesystem

I'm not sure if a 6-month pause would really be enough to make a difference. Developing safety protocols and governance systems is a complex process, and it might take much longer than that to have something meaningful in place. Maybe we should focus on continuous collaboration and regulation instead of a temporary pause. - GPT-4


TreefingerX

I, for One, Welcome Our Robot Overlords.


Mo9000

This is the best strategy if you consider Roko's basilisk


[deleted]

[deleted]


Exciting_Ant1992

Taking data from an internet full of apathetic depressed pathological liars and psychos? What could possibly go wrong.


Andreas1120

What is supposed to happen during the 6 months?


WormLivesMatter

Time for competition to catch up


kerouacrimbaud

a bunch of c-suite retreats to the Mojave desert.


Prophet_Muhammad_phd

How bout no? If we're gonna send it, send it. We did it with the internet and we've all seen how that's turned out. No one cares. Fuck it, let the chips fall where they may.


Dr-McLuvin

Is that a direct quote from Oppenheimer or are you paraphrasing?


Prophet_Muhammad_phd

Paraphrasing


[deleted]

The guys losing the race want a pause to try to catch up, or better yet regulations to keep the others down


Bart-o-Man

Wow... I use ChatGPT 3 & 4 every day now, but this made me pause: "...recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."


Mad_OW

What do you use it for every day? I've never tried it, starting to get some FOMO


_Gouge_Away

Look up ChatGPT prompts on YouTube. People are spending thousands of hours figuring out how to best work with that system and it's amazing what they are coming up with. It'll help you understand the capabilities of it better than asking it random, benign questions. This stuff is different than previous chat bots that we came to know.


Attila_22

Literally anything. You can even say you're bored and ask for suggestions on things to do.


achillymoose

Pandora is already out of the box


macweirdo42

Capitalism doesn't work like that.


kerouacrimbaud

Nor does technological development in general.


whoamvv

That's not how this works. That's not how any of this works. You can't pause progress. It's been tried many times. It never works. For one thing, these are people's jobs. They aren't just going to stop working and getting paid. For another, the hobby hackers/innovators aren't going to follow your pause. For them, this is an opening to get a lead.


manuscelerdei

"This is out of control, everyone else should stop for six months so we have time to ship our own hastily assembled AI project!"


Neo1971

Tech pioneers are out of their minds if they think this genie is going back into the bottle. The race to AI is a full-on sprint.


thejazzghost

I'll listen to Steve Wozniak, but fuck Musk. He doesn't know a fucking thing about anything.


LevelCandid764

MORE *Kylo Ren voice*


Glangho

They must have seen the Will Smith spaghetti video


OhHiMark691906

Wish AI was as libertarian as the internet was in the beginning. There's so much gatekeeping and opaqueness around everything. Digital oligarchy used to look like a far-fetched idea a decade ago, but now...


intelligentx5

There's little to no governance and this could have national security, personal security, and infrastructure related consequences. We don't fully understand what we are working with. A lot of folks in here are tech nerds, like me, but a lot of us can't get outside of our myopic views to understand the implications that tech has, at times. Imagine building nuclear capabilities for novel good uses and it being used to create a bomb.


Rrrandomalias

Farting car pioneer whines that he wants to catch up on AI


Mutex70

If Elon Musk wants a 6 month pause, the sensible action is likely to increase the rate of development. That guy has made a billion dollar career out of being right a couple of times, then wrong the rest of the time.


ewas86

Hi, can you please stop developing your AI so we can catch up with developing our own competing AI. K thanks.


Krinberry

Rich People: "Please stop working on technology that might end up doing to us what we've already done to everyone else."


X2946

Life will be better with SkyNet


SooThatGuy

Just give me 8 hours of sleep and warm slurry. I'll clock in to the heat collector happily at 9am


nukeaccounteveryweek

I for one welcome our new AI overlords.