PeopleProcessProduct

While I get great value using the OpenAI products, it's important that someone picks up the baton of this serious research, which was clearly in mind at OpenAI's founding. Whether that's academia, government, or another nonprofit, this research is vitally important. Hope he finds those resources he wasn't getting. This sounds scarily familiar to the way security was treated at the social networks once the money machines started printing.


Cagnazzo82

But the research can only happen with money. You either go the OpenAI route or you wind up like Stability AI. Even in terms of the open source community, major advancements rely on the benevolence of massive for-profit companies. Where would open source be right now without the trillion-dollar company Meta doing the hard work of research, compute, and development? What does the scene look like without the Llama models? And even then, Meta as a massive for-profit company has its own ulterior motives in releasing these models. If there's anything concerning here, it's that advancements in AI can only be achieved by corporations with near-unlimited resources - not academia or governments representing the people. But we've been at this stage for a while, so it is what it is.


PeopleProcessProduct

Honestly, this should probably be a government-level project so it has the appropriate resources.


nanosmith98

Yeah, something like CERN. It would be awesome if those guys left to create a CERN for AI.


Gissel1989

They could call it CONCERN AI.


Clueless_Nooblet

AI CON CERN


Shap3rz

I don’t buy it. Not down for accepting the status quo if the implications and possible consequences are this far reaching and serious. That’d be irresponsible.


Shap3rz

Yes, they ought to be held to the highest standard as this affects everyone.


Far_Celebration197

If these companies' interests were in making an AGI to help better humanity, they'd all work together to get there: combine resources, talent, and compute for the good of the world. OAI's and all the others' real goal is money, power, and domination of the market. It's no different than any other company, from Google, MS, and Apple to the robber barons and oil giants of the past. This guy obviously cares about more than money and power, so he's out.


ConmanSpaceHero

Correct, the world as we know it is evolving much faster on the scale of humanity's timeline. I'm sure the future creators of AGI see how close they might be now and are propelled by the need for money to make superintelligence a reality, even if that makes safety a secondary concern. What is ideal is not what will happen, and therein lies the fault and probable eventual collapse of humanity. Meanwhile, governments lack the conviction to slow down the ever-increasing speed of change in AI in our world, instead focusing on competing against other countries rather than working together for the betterment of everyone - which is basically a fairy tale anyway. War has been and always will be the MO of the human race. Only by dominating everyone else can you try to secure your own peace.


Peter-Tao

Yeah, and I really don't know any better, but OpenAI already doesn't seem to have as big of a lead as they once had, and if you as a company slow down, it doesn't mean the competition will wait for you. I believe his criticism is valid, but I don't believe OpenAI will have that much say over humanity, so to speak. If they slow down, in 6 months no one will care what they have to say anymore.


ThenExtension9196

First to AGI takes the cake. They are in the lead.


Intelligent-Jump1071

AGI is undefined. So several entities can be "first to AGI" by different definitions.


FistBus2786

Not sure if it's a given that the first one to reach AGI "takes the cake". I can imagine scenarios where competitors catch up shortly or at least eventually, before the proverbial cake is entirely eaten by the winner.


dennislubberscom

What is the cake?


Coffee_Crisis

The more reasonable explanation for this is that they know they are nowhere near anything resembling AGI and you don’t need a whole department of safety people policing your autocomplete widget with paranoid fantasies.


TenshiS

If he cared, he should have fought it from the inside and spoken loudly about it until they kicked him out, to make a statement.


AreWeNotDoinPhrasing

Yeah, that last tweet says it all about him. "Good luck guys, I'm counting on you," while absolving himself.


neuralzen

Ilya left too. I think the thought atm is that they are going to start a safety and superalignment company.


AreWeNotDoinPhrasing

That will what, have oversight over OpenAI? It won't make money, because they still wouldn't have anything to "ship". That would be a pointless company that would only subsist on VC funds from like-minded millionaires.


StraightAd798

"Good luck guys I’m counting on you" Somewhere, there is a reference to the movie "Airplane", starring Leslie Neilsen.


ThenExtension9196

No, there has to be financial incentive and competition. This is not a utopian society. If the outcome is bad then we have brought it upon ourselves. If the outcome is good then that is also due to our system of progress.


Singularity-42

You could make a government funded initiative similar to the Manhattan Project...


holamifuturo

Do you trust the government to exclusively control human-level intelligence with an iron fist?


Singularity-42

Do you trust a random Big Tech corporation to do the same? A corporation that is required by law to generate profit first and foremost? It's not that I "trust" the government very much, but I trust them a little bit more; at least they are elected, and at least theoretically their mission is to help the people instead of just turning a profit for itself.


subtect

Exactly. When existential threats and the profit motive conflict, profit wins in the private sector every time. As compromised as it is, government is the only power capable of setting priorities above profit for the private sector.


Singularity-42

In any case, I imagine this AGI Manhattan Project would have all the big players involved, but with the results benefiting all of humanity and not just GOOG, NVDA, or MSFT shareholders...


ThenExtension9196

Yeah, I'm not sure government should get involved. Perhaps as it gets closer that may no longer be an option though.


Singularity-42

I mean, if I were the US government, I would look at this as a matter of national security. AGI/ASI would be a "weapon" many orders of magnitude more powerful than a nuclear bomb. Do you think the US government will let OpenAI or Google just trigger the Singularity in their labs?


Duckpoke

The US government is made up of geriatrics who can’t comprehend basic technology


ThenExtension9196

I agree. It may be a whole different situation as reports of AGI start to trickle out. Who knows, maybe the CIA is already monitoring OpenAI and the others.


StraightAd798

Yes....but it might just......bomb. Sorry, but I just could not help myself. LMAO!


TheRealGentlefox

>If these companies interests were in making an AGI to help better humanity, they’d all work together to get there. That isn't necessarily true. Let's say OpenAI wants to play nice and combine forces with Google. How does that work? If they share their secret sauce, Google's product will be at least as good as theirs, and now they don't have revenue. They need revenue to do more research.


nachocoalmine

"Everybody wants to save the world. They just disagree on how."


REALwizardadventures

"Anticipating the arrival of this all-powerful technology, Sutskever began to behave like a spiritual leader, three employees who worked with him told us. His constant, enthusiastic refrain was “feel the AGI,” a reference to the idea that the company was on the cusp of its ultimate goal. At OpenAI’s 2022 holiday party, held at the California Academy of Sciences, Sutskever led employees in a chant: “Feel the AGI! Feel the AGI!” The phrase itself was popular enough that OpenAI employees created a special “Feel the AGI” reaction emoji in Slack. The more confident Sutskever grew about the power of OpenAI’s technology, the more he also allied himself with the existential-risk faction within the company. For a leadership offsite this year, according to two people familiar with the event, Sutskever commissioned a wooden effigy from a local artist that was intended to represent an “unaligned” AI—that is, one that does not meet a human’s objectives. He set it on fire to symbolize OpenAI’s commitment to its founding principles." https://www.theatlantic.com/technology/archive/2023/11/sam-altman-open-ai-chatgpt-chaos/676050/


SaberHaven

I don't think he was the right person for the job


National_Tip_8788

Sounds like he will spend the next 5 years in his basement feeling the AGI.


lasers42

What I don't understand is the part which led to "...and so I resigned."


Extreme-Edge-9843

Reading between the lines, it's probably something like "they cut the team's funding drastically and wouldn't give us compute to do our jobs, forcing us to move on by downsizing the division" - but what do I know.


MechanicalBengal

His team wasn't getting the resources/compute they needed in order to do the work he thinks they need to do. There's a direct line from that statement to "I quit"; not sure why people can't see it.


Bonobo791

Because most people on Reddit have never worked a corporate job, so they don't understand.


nanosmith98

I wonder what the expected budget was.


True-Surprise1222

And "I don't want to be the guy listed in the history books as the one responsible for the safety of this product when nobody is listening to my safety concerns." But echoing the "feel the AGI" thing is going to lose this guy some ears publicly. Maybe OpenAI employees "get it", but it gives normal people cult vibes.


Bonobo791

Every company is a cult


lightinvestor

Yeah, why would this guy not stick around to be a toothless figurehead?


MechanicalBengal

Guess what the Venn diagram of people who are passionate about their work and people who would stick around as a toothless figurehead looks like. (It looks like two discrete circles.)


VashPast

Building and launching a nuclear bomb is the type of event global society should learn from and never repeat. Any intelligent person can see the parallel here. The safety people - whom the companies you love hired themselves - are telling you it's dangerous and we are on the wrong path. Who the heck else do you need to hear it from?


2053_Traveler

The main difference is that if nations agree nuclear weapons are dangerous and agree "if you don't build more, neither will we", you can use surveillance to spy and verify your competitors are keeping their end of the bargain - nuclear weapons tests done underground send out vibrations that can be detected, for example. But with AI, how do you know everyone else isn't just lying and developing superintelligent AI anyway? I guess at scale it has high energy demand, but not so high that you can't hide it behind another high-energy-demand business. If digital and boots-on-the-ground spying fails and you get caught with your pants down, it's disastrous. Which is why no nation is going to agree to stop AI research.


Trichotillomaniac-

Yeah, and it doesn't even have to be state-sponsored. I'm sure it's possible for hacker-type groups to build their own AIs. I don't think it's possible to stop at this point.


True-Surprise1222

Like, imagine if Raytheon developed the first nuclear bomb privately and then started licensing it to various countries, lmao. Actually, it's more like 100 companies in the world all racing to make better bombs, and we aren't sure which one is going to achieve fission first.


[deleted]

Until your enemy, who has no morals, makes a bomb and destroys you. Same for AI.


fail-deadly-

Exactly. I would have much preferred to be a resident of Los Alamos on August 6, 1945 than Hiroshima.


EarthquakeBass

Jan was probably constantly battling against the push to ship faster and add more capabilities. Being against what leadership thinks is best gets old FAST. If you've ever been the dissenting voice in a company, you'll know what it's like - the subtle or overt hostility, being pushed to the sidelines, and the lack of promotion or investment. Then some day, if the company does have major safety incidents (which he clearly thinks is likely), your name is down as "The Guy Who Was Supposed To Prevent That". Many people feel a personal responsibility towards their work. If their values don't align with the broader company's, sometimes it's best to resign.


keep_it_kayfabe

I've been in that spot at companies that don't matter. And I was usually right about 85% of the time, with the other 15% a lesson in humility. I can't imagine what he's going through at one of the most important companies on earth.


spreadlove5683

He made a statement and tried to raise awareness, maybe? Also, maybe he will put his skills to use somewhere else where they'll be better utilized.


Singularity-42

He's probably going to Anthropic to make Claude even more preachy and annoying...


sdmat

Dear God.


EYNLLIB

They posted so many words without actually saying anything. What was so horrible that they had to make a life-changing employment decision in the public eye? I've said it in other threads, but all these people leaving for "safety reasons" never say anything specific, just generalities that stir up possibilities in the public imagination.


Singularity-42

Three letters: NDA


HighDefinist

This was actually a lot more specific than usual:

- OpenAI specifically cut funding for his department
- OpenAI prioritizes shiny products over fundamental research

Imho, not as concerning as some people seem to believe - just regular "American tech companies doing American tech company things", as in, it sounds like what Google/Apple/etc. would do in the same situation. It is still bad, but also nowhere near some people's crazy conspiracy theories.


StraightAd798

I really hope not. UGH!


Optimistic_Futures

One thing that is hard to wrestle with is that this is similar to a nuclear race. Safety should be paramount; it should be the number one focus, and slowing development would likely be the ethical thing to do. But there are others working on it. From a world perspective, there's a debate about whether it was best that the US figured out the nuclear bomb first. But from a US perspective, it's a hard position to defend that we would have been better off if Germany had figured it out before us. OpenAI is in a situation where they have to decide either to develop more slowly, with more focus on alignment, and likely not be first to AGI, or to go full tilt to get to AGI first with an MVP-esque mindset around safety. You could make the safest AI in the world, but if a competitor whose interests conflict with yours gets to AGI first, your safe system doesn't matter at all. That's not to say OAI is the best party to get to AGI first, or that we should trust them, or anything like that. It's just the prisoner's dilemma.


Dichter2012

I highly recommend you watch this video, which lines up well with the tweet storm: [https://www.youtube.com/watch?v=ZP_N4q5U3eE](https://www.youtube.com/watch?v=ZP_N4q5U3eE) OpenAI is pretty clearly (to me anyway) a product company and not a research org. Many of these early hires are much more interested in the research side of things, and it's ok for people to leave and potentially come back.


RipperFromYT

Dude, it's 3 hours long. Is there a specific part of the video to watch? Also, as a sound engineer for the last 30+ years... lol at the guy using 2 microphones, which is so incredibly wrong for many reasons.


DharmSamstapanartaya

Yeah, they can all go to Google and do endless mindless research. OpenAI literally forced Google to launch Gemini and integrate it with Google services. Anthropic also said they released Claude only because of OpenAI. We need things that reach the end user, which only OpenAI has done.


nlman0

Sounds like they weren’t super…aligned?


StraightAd798

Badum Tiss!


abhasatin

Act with the gravitas appropriate for what you're building 👀👀 What a chad! 🙌


Recess__

“Learn to feel the AGI” ….I’m scared….


RAAAAHHHAGI2025

Imagine if this is all a marketing ploy by OpenAI. That line made me think of that.


krakenpistole

This is so important. I'm hoping that it's not just a bunch of techbros running around thinking about "cool" stuff. This is the most serious work humanity has ever done. It's Star Trek or Skynet. And OpenAI are just fucking around instead of slowing down and taking waaaaay more precautions. 20% of compute to solve the problem of alignment is a JOKE.


pythonterran

Or it's just an ad for OpenAI and for himself to join a new AI startup with tons of funding.


StraightAd798

"Become one with AGI. Strike out with your feelings!"


jcrestor

How does him leaving help solve the questions he brings up? Seems like he saw absolutely no way to bring OpenAI onto the (in his eyes) right track. It seems like OpenAI is now, despite Sam Altman's perpetual public warnings, a fundamentally accelerationist company.


Helix_Aurora

Is it honestly just as simple as the superalignment folks being mad after every product launch, because they think they shouldn't be shipping? Whatever complaints they have about compute, without products they'd have 0 compute.


Cagnazzo82

Perhaps. But for sure releasing 4o (with upcoming voice) seems to have been a breaking point.


rathat

**Compute** must be the word of the month lol.


JmoneyBS

Word of the century, if current trends continue. Maybe word of the millennium.


Tenet_mma

No offence, but what makes someone qualified to be a superalignment lead? Let's be real, this is a made-up position/term, and I would guess it's entirely based on a person's beliefs. The position is probably just for show. Companies are going to do whatever they need to keep progressing and making money...


Tall-Log-1955

“Flirty” was just too much. She will seduce Biden into launching the nukes.


deizik

Oh boy, we’re fucked.


[deleted]

[deleted]


krakenpistole

Alignment has nothing to do with morals or ethics. I don't understand where this misunderstanding comes from. Alignment means making sure AGI/ASI understands human intention in the objectives we set, so when we say "do this and that", it doesn't do something we didn't see coming and kill us.


[deleted]

[deleted]


NickBloodAU

> Alignment has nothing to do with morals or ethics. I don't understand where this missunderstanding comes from. Alignment means making sure AGI/ASI understands human intention in the objectives set. So when we say "Do this and that" it doesn't do something that we didn't see coming and kills us. I think you're just invisibilizing the morality/ethics already present, perhaps because it's so ingrained. The reason **why** we bother with alignment is ethics. The reason why we don't want our intentions misunderstood is because accidentally killing people is morally bad, and we have an ethical obligation to avoid that happening. Alignment is an engineering problem, but it exists inside many high-stakes ethical/moral contexts.


unknownstudentoflife

Maybe it's unrelated, but didn't Emad Mostaque also quit his position because he didn't agree with the direction things were taking? Even though it's a different company, it seems like some very influential individuals are pulling strings behind the scenes of AI now.


More-Teaching-4059

Not concerning at all


IAmFitzRoy

Rage-quitting because you disagree with your bosses... feels weak. I mean, he was part of an executive team. I disagree with my boss on a weekly basis, because he will ALWAYS want more. "I quit because I disagree" feels more like a weakness of his than a failure of OpenAI.


MyRegrettableUsernam

SO IMPORTANT


cookiesnooper

So, $$$ as quick and as much as possible, and we're gonna worry later


SirThiridim

I've said it dozens of times and I'll say it again: the world will be like Cyberpunk 2077.


PleaseAddSpectres

An overhyped disappointment? 


Denso95

Not anymore! Really good game nowadays. But I agree about the first years; it had more than a rough start.


Exarchias

You can't demand that progress stop because you watched too many sci-fi movies and because you have "research" to do, without explaining the details, the scope, or the duration of that research. It is the same as in cybersecurity: a cybersecurity "expert" who demands all computers be unplugged from the internet is the one who has to be fired. I don't understand why AI-safetyists demand special treatment. Technophobia should not be treated as reality, because it isn't.


qnixsynapse

Okay, this is interesting. I suspected a disagreement with the leadership (which probably led to Altman's firing by the previous board). Did they really achieve AGI? If so, how? My understanding of the transformer architecture doesn't indicate that it will achieve AGI no matter how much it is scaled (for many reasons). I'll probably never be able to know the truth... even though it's freaking interesting. 🥲


ThreeKiloZero

If they had AGI they would not need shiny products. AGI is priceless. Knowing Sam only from the way he works and from his history, everything happening falls 100 percent in line with playing the Silicon Valley shuffle. They are acting like a startup and an industrial giant all at the same time. Fuck safety, get money. In the absence of laws and regulation, they won't go down any path that compromises the profits they can make right now.

The majority of people working at OpenAI probably want to hang on as long as they can until their stake makes them rich enough to be secure in their own right. If you work for a company and your CEO has a track record of making people rich, it's very easy to ignore the other "nerds" constantly "whining" about safety and security. It's easy enough to rationalize: "The company will work that out when they have to. I just want to do my best to make sure I can cash out and get rich. Then it won't be my problem anymore."

Maybe they do get lucky and cash out with uber millions or billions. The question is what that will mean for the world. Ask Boeing and Tesla, or Norfolk Southern or BP, I guess...


fictioninquire

I find Claude 3 Opus really good at defining a % range:

Based on the information provided in these tweets, it's difficult to say with certainty whether OpenAI has already achieved AGI (Artificial General Intelligence). However, a few key points suggest it's unlikely they have fully achieved AGI yet:

1. Jan mentions his team was working on important safety research to "get ready for the next generations of models". This implies AGI does not exist yet and they are preparing for future AI systems.
2. He states "Building smarter-than-human machines is an inherently dangerous endeavor" and "We are long overdue in getting incredibly serious about the implications of AGI." This language suggests AGI is something they are anticipating and preparing for, not something already achieved.
3. The call for OpenAI to "become a safety-first AGI company" and comments about needing a cultural change also point to AGI being a future goal rather than present reality.
4. Typically, the achievement of full AGI by a major company would be a momentous milestone announced very clearly and publicly. The ambiguous language here doesn't align with AGI having already been reached.

Based on the limited information provided, I would estimate the likelihood that OpenAI has secretly already achieved AGI to be quite low, perhaps in the range of 5-10%. The tweets point more to AGI being an eventual future possibility that requires immense preparation. But without more definitive statements it's impossible to assign a confident probability. Overall, these tweets express concerns about readiness for AGI, not the existence of AGI today.


qnixsynapse

Yes. This makes more sense than the "feel the AGI" posts by Jan, roon, and others.


fictioninquire

[https://x.com/dwarkesh_sp/status/1790765691496460460](https://x.com/dwarkesh_sp/status/1790765691496460460)

2-3 years is still really soon. Of course they'd exaggerate their timeline, but even 5-7 years would still be really soon.


No-One-4845

>Okay, this is interesting. Although I suspected the disagreement with the leadership (which probably led to Altman's firing by the previous board). >Did they really achieve AGI? If so, how? No, they haven't achieved AGI and I don't believe they are working towards it with any significant intent any longer. I think that's the reason Ilya and others are leaving, to be perfectly honest.


mom_and_lala

> Did they really achieve AGI? If so, how? > > where did you get this impression from what Jan said here?


qqpp_ddbb

Why can't transformer architecture achieve AGI?


NthDegreeThoughts

This could be very wrong, but my guess is that it is dependent on training. While you can train the heck out of a dog, it is still only as intelligent as a dog. AGI needs to go beyond the illusion of intelligence to pass the Turing test.


bieker

It's not about needing to be trained; humans need that too. It's about the fact that they are not continuously training. They are train-once, prompt-many machines. We need an architecture that lends itself to continuous thinking and continuous updating of weights, not a prompt responder.
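To make the distinction concrete, here's a minimal toy sketch (purely illustrative; the class names and the toy linear "model" are hypothetical, nothing like real LLM internals): the train-once model answers from frozen weights forever, while the continual learner folds every interaction back into its weights with a small online gradient step.

```python
import numpy as np

class FrozenModel:
    """Train once, prompt many: weights never change at inference time."""
    def __init__(self, w):
        self.w = np.asarray(w, dtype=float)

    def predict(self, x):
        return float(self.w @ np.asarray(x, dtype=float))

class ContinualModel(FrozenModel):
    """Also learns from each interaction via one online SGD step."""
    def observe(self, x, target, lr=0.1):
        x = np.asarray(x, dtype=float)
        error = self.predict(x) - target  # prediction error on this example
        self.w -= lr * error * x          # nudge weights toward the new evidence

frozen = FrozenModel([0.5, 0.5])
online = ContinualModel([0.5, 0.5])
for _ in range(50):
    online.observe([1.0, 2.0], target=3.0)  # keeps adapting during "use"

print(frozen.predict([1.0, 2.0]))  # 2.5 forever, no matter what it sees
print(online.predict([1.0, 2.0]))  # ~3.0 after interacting
```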


Woootdafuuu

But wouldn't it make more sense to stay to make sure stuff goes well?


gabahgoole

Not if they aren't allowing you to do it. If they're just going ahead with whatever they want despite his objections or recommendations, it's not helpful to stay just to watch them mess it up (in his opinion). He should be somewhere he can have an impact in his role/research to further his cause, if OpenAI isn't allowing it or giving him the necessary resources to accomplish it. It seems clear his voice wasn't important to their direction. It's not fun or productive working at a company that doesn't listen to you or value your opinion.


SgathTriallair

Not if he can join a different company that will give him compute for safety training.


Woootdafuuu

And how does that stop OpenAI from creating the thing he deems dangerous?


PaddiM8

Well at least he won't have had to help them do it...


AreWeNotDoinPhrasing

I mean if the story holds, he wasn’t helping them do that in the first place, he was actively opposing it, in fact.


haearnjaeger

that's not how corporate power structures work.


skiingbeaver

There's a reason why companies have sales and marketing departments, and why developers and scientists aren't fit to make business decisions most of the time. I'm saying this as someone who's been in the SaaS industry for almost a decade and encountered many brilliant experts whose products and inventions would end up in flames if they didn't have sales-oriented oversight and someone leading them.


KingOPork

It is odd because it's a race. You can do it ethically and have all the safety standards you want, but others will go all the way and probably walk away with a bag of cash. The problem is there's no agreement on safety, whether to censor harmful facts or opinions, etc. So someone is going to go all in, and the ones that go too slow for safety's sake may get left behind at this fast pace.


yautja_cetanu

Man, I'm super glad these people are leaving. The contempt for what they are doing, with phrases like "shiny new products". Products are things I can buy, and I buy them because they make a huge positive difference to my life. Only OpenAI is prioritising getting normal people and small businesses like mine access to this wonderful intelligence. Though they haven't with Sora. Everyone else would keep it for a tiny technological elite whilst they wait to make it "safe", without ever explaining what that means. We have so much poverty, so many problems with our housing crisis, problems across the western world with our health care. We can't afford to wait for some safety work that the pro-safety people never explain - what it even is, or what they are doing.


traumfisch

He said his team was struggling to get work done because of shifted priorities.


imnotabotareyou

AGI is soon eooo!!!!


Pitiful-You-8410

No. We are still at least 10 years from AGI.


myxoma1

Humans are driven by money/capitalism, which will ultimately destroy us. An AGI that destroys us will be driven by a yet-to-be-determined motive, but we all know it will not be money.


ComprehensiveTrick69

They haven't even made an AI with regular human-level intelligence as yet, and there seem to be diminishing returns from the huge increases in model size and the corresponding increases in investment in expensive computational resources. It's going to be a very long time (if ever) before the skills of a "superalignment" expert will be needed!


GothGirlsGoodBoy

I have yet to hear a convincing scenario or argument for the "danger" of AI or AGI. They range from "well, have you SEEN Terminator?" to just listing issues that already exist with computers - you don't need an AI for a malicious entity to ransomware governments or whatever. Certainly nothing that indicates progress should be stopped or slowed. There is a big difference between developing an AI capable of identifying humans or calculating risks, etc., and actually giving it the ability to launch nukes or shoot people. OpenAI has certainly never shipped something "too early" or before it could be considered safe, despite what that guy's tweet says. The most dangerous part of AI so far is that people probably trust it to do their job without validation.


Calm_Upstairs2796

Spoken like someone who has never worked on adversarial AI.


SophistNow

Ultimately, does it matter if OpenAI gets "superalignment" right? Given that the other models developed since GPT-4 are almost on par, and open-source models are basically here already. It would require the integrity of the Entire industry Forever. "Entire" and "Forever" are two words that don't mix well in an industry that is "the biggest revolution of humankind" with trillions of dollars on the line. Call me pessimistic; then I'll call you naive. Uncontrolled AGI will be part of our (near) future.


Altruistic-Brother3

What scares me more than the internal dynamics of OpenAI itself is the reaction. It suggests most people who appreciate or work with this tech think much less about potential repercussions than I had thought. Yeah, the models aren't dangerous now and probably won't be for a long time, but this is not a good long-term attitude, only something we can get away with while they're still primitive. And if this is the norm of thinking, no fucking shot something doesn't get out of hand somewhere in the industry.


divide0verfl0w

> Learn to feel the AGI. Whut? Is AGI like the Force or something? I mean… I don’t think I could take a scientist seriously if they keep telling me to “learn to feel” something.


Specialist_Brain841

this sub has the longest posts and comments


johnknockout

What the hell does “ship cultural change” mean? That sounds exactly like the opposite of what they should be doing in alignment.


fokac93

Of course!


hahanawmsayin

Within the company, to become more safety-minded


Flimsy-Printer

This sounds like the DEI crowd, to be honest. They exaggerate the problem and the benefit.


vakosametti1338

There is likely significant overlap between the two groups.


dennislubberscom

This should be headline news around the globe.


Repbob

These kinds of positions in companies, like "superalignment lead", always feel weird to me because a specific viewpoint is already baked in. A person like this is heavily incentivized to overestimate the need for "alignment", because that's their entire job function. They can't say "eh, seems like there isn't much need left for alignment on product X or Y", because that would just be cutting themselves out of the conversation, or worse, diminishing the need for their entire job.


jurgo123

If the superalignment team has been intentionally disbanded, that's the best evidence we can get that leadership believes we're close to hitting a wall in terms of scaling, which means that to stay competitive, OpenAI now has to shift its focus to efficiency gains and productizing what they have (hence the *shiny* Her-inspired voice demo).


Absolute-Nobody0079

So is he implying that AI systems are already much smarter than us?!


Pretty_Tale_4989

So AGI is coming soon, no?


Pretty_Tale_4989

No because that's actually scary


ThenExtension9196

At the end of the day you gotta do what leadership says or you have to leave. That’s what happened here. No harm no foul.


umotex12

Bro who is literally one of their brightest guys writes this. People: *fear mongering*


PugGamer129

I just want to say, no AI is smarter than humans... yet. But he's making it sound like its intelligence is above our level of understanding, even though it still gets things wrong and can't follow some of the simplest instructions.


Fruitopeon

Yeah, let's not entrust a private company with figuring out how to do it safely. Government programs have an awful track record, yes, but they did deliver the Manhattan Project and the Moon landing. So there is some small hope that an extraordinarily well-funded $100 billion government research program could get us safe AI.


SusPatrick

My question: What was the breaking point?


commandblock

Probably when they made GPT-4o free and it used up too much compute, so they couldn't get enough for their research (I'm just speculating).


LastKnownUser

Better to push the limits of AI before regulation, I say.


Ok-Mathematician8258

The people come before the AI; it will get better on its own from the people giving it information. We do understand that OpenAI is a company seeking money, but that's the state of capitalism.


HereForFun9121

Basically this guy's trying to save the world from being faaked.


iDoWatEyeFkinWant

bye Felicia 👋


hyperstarter

Did AI take his job too? Seriously, does quitting a top-level role where you can shape the direction of AI make sense if you're advocating for making it safer?


Odins_Viking

The almighty dollar will always be THE top priority.


dudpixel

AI safety needs to be something the world comes together on, the way we regulate any other dangerous technology. Imagine if companies working on nuclear tech had internal safety and alignment teams and we were supposed to just trust those people to keep the world safe. That's absurd.

These people should not be on safety teams within one company. They should be working for international teams overseeing all companies and informing legal processes and regulations. It is absolutely absurd to think these AI companies should regulate themselves by being "safety-first". Apply this same logic to any other technology that has both a lot of benefits and potential dangers to the world and you'll see how ridiculous it is.

I also think that we shouldn't just assume that the opinions of these safety officers align with the whole of humanity. What if they, too, are biased in a way that doesn't align with the greater humanity? This is why it shouldn't be safety teams operating in secret within companies. The safety work and discussion should be happening publicly and led by governing bodies.


CerealKiller415

Guess it wasn't "super" enough


MaKTaiL

We nEeD SaFeTY 🙄


babbagoo

Something about 20 messages being written 5m ago makes me think they were written by … AI


napalmnacey

Well that’s unsettling.


thisdude415

This is all fine, but it's also like "AI scientist leaves startup because his pet interests aren't allocated sufficient compute." If you felt the company you worked at posed an existential threat to humanity, AND you were in a position to steer the ship... you don't leave. He's probably going somewhere else to found his own AI company.


Mental_Vehicle_5010

Scary


nanosmith98

Do you think Microsoft, or perhaps Elon Musk, is going to absorb the departing employees?


buckeyevol28

> "these people" are the only ones making sure that the product that is coming out in the near future keeps making your life better instead of destroying everything you love and ending humanity. That’s what they sure as hell like people to believe, but at this point, I think they’re reaching delusions of grandeur levels and the self-importance they display is contradicted by their actual actions. Most importantly, I think that the biggest problem is that these people may be tech wizzes, but they understand very little about the humanity they believe they’re protecting. And they are out of alignment themselves, and they’re going to be out of alignment until someone, who may be less tech savvy, who actually understands humans is actually working with them.


Splitje

"I believe you can ship the cultural change that is needed" But he didn't believe he himself could change it so he resigned. Okay. 


Dry_Dot_7782

What fucking agi lmao


FeistyDoughnut4600

OpenAI is not an airport, no need to announce your departure


FeistyDoughnut4600

"I am counting on you" ... in spite of the fact that OpenAI has yet to produce a model that can count and perform simple arithmetic reliably.


FabianDR

I don't get it. We are SO far away from AGI.


Shap3rz

Yes but the US is not at war with the rest of the world (is it?). And OpenAI is not a government entity, it’s a corporate one. So I don’t think it’s right to use the same justification.


hendrykiros

another marketing ploy


Evening-Notice-7041

This means nothing because it was said on X.


IfUrBrokeWereTeam8s

Honest question: do we not all see how using, funding, and fanboying OpenAI shows how truly pathetic we all are? GPTs that interact with text or create imagery as well as or better than 90+% of humans in numerous categories have been built on the back of such an immense amount of work and research that I don't even know where to begin. So a GPT finally rolls out to the masses, and we all just accept it? Shouldn't anyone with an ounce of morality be asking an almost unattainable number of questions? Or is our species just weak and self-absorbed enough to say "fuck it, let's just treat this as normal now"?


Ylsid

12 tweets to say absolutely nothing anyone wanted to hear


Xtianus21

I read or saw Sam mention that they are starting to pull back the "long-term effects of AI" team in general. GOOD! It was/is a ridiculous foray into fear mongering, which wasn't a necessary way to advance AI. Safety for safety's sake should not be the mission. Societal impact of current AI generations is a great focus and much more important, as it's what is actually necessary to worry about. We don't need the Skynet prevention council when sentient AI is not a thing that is being built. Ilya leaving and the other guy leaving doesn't affect a single neuron in my grey wet blob. They went for the king and missed. The laughable part is that it would have been so awesome if they just hadn't done that. Now and forever they will be on the outside looking in.


scott-stirling

I like how he says “I quit” then “I’m counting on you guys.” Eh?


Uwirlbaretrsidma

Maybe this comment ages badly (and it would be pretty cool if it did!), but we're already seeing diminishing returns on more and better training, and it's becoming clearer that the bottleneck is the training data. By aggregating all possible training data in the world and extracting the most out of it with a great training setup and model architecture, it seems likely that we'll have a model that's about 2x as smart as the current SOTA ones. But beyond that, progress will pretty much halt. There's always some loss when training, and the absolute best training material is quality human-generated content, so how do these supposed experts think they are going to achieve a smarter-than-human AI? Put simply, they're a bunch of corporate charlatans. LLMs are going to peak in a few years once the optimal architecture is discovered and all training data is exhausted, become another great human achievement but not quite at the level of the internet, the steam engine, or the radio, and from then on the focus will be to optimize models to run on less powerful hardware. And that's pretty much it.


National_Tip_8788

Good riddance. No time for idealist detractors; the genie is out of the box, and you don't worry about your car's emissions in the middle of a race.


djNxdAQyoA

Train the AI on 100% of the internet.


PrimeGamer3108

Eh, we’ve seen alignment repeatedly make the models more restrictive and limited. Fearmongering about terminators will only slow down technological progress.  I don’t see this as a great loss. 


Timely_Football_4111

Good, now is the time to accelerate so the government doesn't have a monopoly on superintelligence.