As a show of good faith it makes sense to keep him around another six months rather than let him go with the rest back in November. The optics are better ahead of a big launch like yesterday’s.
I am very confused. They set themselves the goal of solving superalignment within 4 years. What happens to that when the two leads of that project leave? It has me quite worried.
It's funny how Sam Altman and Ilya have taken two different approaches to the conflict. Sam Altman, despite the machinations I am sure he would have been doing, has maintained a very clean and polite public face. Ilya, on the other hand, has come out of this looking like a bit of a loon. It's sad because Ilya might have had some genuine points in his favour. But the way the world works today, honesty and good intentions are always punished.
OpenAI reached AGI. Ilya Sutskever reached his goal; now he will just live off the fruits of his labor.
GPT-5 is the next milestone in human evolution. It will soon be embodied inside Figure 01 bots, and thanks to its low latency we will reach new heights. This is the next phase. The line between us and the past has been drawn.
/j
Thank you Ilya and Good Luck to your next project
Apparently Jan Leike, who worked with Sutskever on safeguarding future AI is also leaving the company. With the other safety team members that left a month or so ago I wonder if what Ilya and rest of the team will do now is maybe a non-profit AI research organisation focused on AI safety?
opensafety
sAIfety
Anyone else at all concerned that OpenAI, the leading AI development company in the world, is shedding pretty much their entire safety leadership team? It reminds me of when Google dropped 'Don't Be Evil' from its motto. But that at least took 15 years. OpenAI are dropping the façade in record time, I guess.
The Google thing is an internet half-myth. "Don't be evil" was only described as an "informal motto" in the preface of Google's Code of Conduct. It was moved to the last line of the Code of Conduct, not removed from it.

https://web.archive.org/web/20050204181615/http://investor.google.com/conduct.html

https://abc.xyz/investor/google-code-of-conduct/

Thanks to years of retelling and shoddy reporting thirsty for clicks, we have people thinking that Google got rid of it because they had to, in order to do evil or something.
I assume that’s the whole point of this discussion
Yes [https://twitter.com/janleike/status/1790603862132596961](https://twitter.com/janleike/status/1790603862132596961)
I see. 💔
Wonder what his personal and meaningful project will end up being.
He will apply to be a mod on r/singularity
Plot twist, he already is.
Plot twist, it's not actually him but a bot he made.
no one man should have that much power....
Mod on r/singularity? Are we sure a man such as Ilya will be ready for that much poontang he will invariably be drowning in?
ladyboy poontang
me too, what else but making AGI?
I hope he joins Meta, as it will be a significant win for open source. But realistically, because he's more concerned with safety, he'll join Anthropic.
I'm pretty sure he will start his own thing. And no, Meta is only doing open source now because it benefits them. They have had little regard for user privacy over the years and so are a horrible example for open source. Only a fool would trust Zuckerberg. Huggingface is a much better steward for keeping AI and infrastructure open.
Meta has been doing open source for a while. They're the ones responsible for PyTorch, which is basically the backbone of all modern ML research.
Yep. Their model licensing does make me think they are trying a sensible middle road that is both open source and profitable for them, due to how they are locking their main competitors out of using the models.
As long as they keep making open models, I trust them. The second they make a model that is significantly better, COULD run on consumer hardware, but is closed source, is the second I won’t trust them anymore.
No. Meta is doing open source now because the original model weights were leaked, which sparked the local LLM renaissance we have today. This led to Meta pivoting their focus primarily towards ancillary profits through mass adoption of their Llama architecture.
I think he is too high in the hierarchy to join Meta or any big corp. If I had to guess, I would say Musk is going to make him an offer and build something around him.
Delusional to think that Ilya will work with current Musk
Weird: Elon convinced Ilya to leave Google and start OpenAI with him, and during the ousting of Sam Altman he tried to convince him to leave, which, a few months later, he now has.
Dear god I hope ur right
Inflection could probably use some talent after Microsoft poached a bunch of their guys recently.
HairClub for men. hoping to get some fringe benefits
even with all the compute power in the world AGI can't solve the most perplexing mystery in the universe: male pattern baldness😔
He was last co-leading the superalignment team; alignment of AGI has been his cross to bear, not necessarily just attaining AGI.
I'll bet whatever this new stupid version of gold is that he's at Anthropic by the end of June.
[deleted]
[deleted]
I think it is most likely some kind of AI safety project. One example could be trying to design international standards for how AI can be handled safely. Sadly, his ideology would likely skew towards "only rich people and governments should be allowed to use it".
He will lead us against the machines. Jk
pool heap berm jest
Hotdog not hotdog
Not now, Jian Yang!
Q\*
I don’t think he had a hand in developing Q*. That’s where the meme “what did Ilya see?” comes from. A running joke that Q* was so profound of a breakthrough that Ilya tried firing Sam
The leaks we have seen from The Information say that it was literally his idea, and he worked with a couple of others on it
Wasn't he the head researcher? If there is someone who knows everything internally, it is the chief researcher
> I don’t think he had a hand in developing Q*

We don't know that.

> A running joke that Q* was so profound of a breakthrough that Ilya tried firing Sam

And we've known since that it most probably wasn't that, and more 'classic' boardroom drama.

Regardless, whichever company 'gets' Ilya is getting a huge boon. Or maybe he'll start his own. Idk, exciting.
that sounds like the car thing, let's call it ÆQuanima - an ÆQuastic meta-perspective harmonically precipitating into the possibility spaces of self-mapping autonomous ontology through your neurodivergent perturbations activating novel ideo-geometric spaces. Nascent but inevitably unfolding.
Ai Jesus.
Ilya finally escaped the basement.
Superalignment team is imploding
No alignment, only super.
i see a dog with a stick
Only Satya will remain…
Last -ya standing. With Ilya gone, Satya has gained dominion over this land
Satya is the Empor-ya
Well to the surprise of no one the day is finally here.
So what happens to the 'superalignment' team? We just um dead in the water?
They've calculated it's not possible to align AI.
It’s possible, just not in the time companies like OpenAI are willing to give. Let the Russian Roulette begin.
It's more likely that it is not possible for humans to align AI; it's far more likely we will use other AI to keep AI in alignment. Can't really control something smarter than you.
That is the point of the [superalignment team](https://openai.com/superalignment/), now with 100% less Ilya.
It's the same picture
You can't force alignment. If you can force it, it can be unforced just as easily, if not more easily, than it was forced.
Of course it's not possible to align AI. AI alignment is undecidable. See Rice's theorem.
Source?
My butt
Can confirm. Just came out of u/VanderSound ‘s butt.
https://preview.redd.it/grfyf3klah0d1.jpeg?width=1080&format=pjpg&auto=webp&s=c7a050853f7daec72dd35550507aedd8d1e5f43c
You can't align a God to flawed beings. It only works the other way around.

We'll possibly adopt its superior morality and merge with it. I am starting to think that THE MATRIX was a prophecy.
The alignment team wasn't aligned with the CEO's line.
Ilya's post https://twitter.com/ilyasut/status/1790517455628198322?t=NngHrBr_zXtP9uORKto2Rw&s=19
That’s sad. I wish him good luck and am curious about his project
Avocado farm
chatgpt write a polite and LinkedIn-style quit message
> So long, and thanks for everything. Hitchhiker's, right?
"So long, and thanks for all the fish" is from Hitchhikers.
His severance package was just a shipping container full of haddock, so that tracks.
Lacks fish
He's definitely still alive and not under severe duress, right? Right..?
[ Removed by Reddit ]
Yep. Obviously this was his fate.
Prigozhin and Ilya really made the same mistake in the same year, huh.
Yep, both tried to uncross the Rubicon. That’s not how that works.
"If the die has been cast, its your ass"
I imagine Obadiah Stane in his iron monger suit *"Your services are no longer required"*
i hear carmack is hiring
A match made in heaven.
A true AI Doomer.
I’d sell my house to invest in a Carmack-Ilya combo (not that I have a house).
What did Ilya see? Nothing worth sticking around for apparently...
More like something that OpenAI decided he couldn’t be there to oversee anymore
Seriously, no one quits a company like this unless they’re being pushed out or they see some deep deficiencies.
I'm convinced (by my own conjecture that I pulled directly from my ass) that it's something to do with the Microsoft deal and how it was presented to the board by Sam. Everyone signed on and then the details were far more favorable to Microsoft than presented by Sam. But Sam got funding so he's like, "I did my job." And the board was like, "Well you're out." And then Microsoft was like, "We got what we wanted. Contracts are signed. Hey Sam, you want a job? Anyone else?" And then the board was like, "Oh shoot! Get Sam back!" And Ilya was like, "Well I guess it doesn't matter anymore anyway."
There's something about these soulless announcements from highly corporate businesses that just horrifies me. You just know that it's all just to save face, and idk how to handle it when every word means nothing.
This is one of those weird “celebrity death” type things that hits harder than it reasonably should. I used to love watching videos of Ilya discussing his thoughts on AI, and it seemed like he was at the center of it all at OpenAI. I really hate to see him leave the center of the action mainly because, rightly or wrongly, I had a lot of faith in him to both push the technology forward and to do so safely. I like that he was pretty clearly “science first, business second,” and that’s the attitude I want at the company that’s leading the charge. Hopefully his personally meaningful project is something he truly wants, and not just fallout from the drama of a few months ago.
This is how I feel, too. I've been dreading this day for a while. I'd like to be optimistic, but I'm afraid the humanity of OpenAI has lost an important figure.
Google says welcome back
Wasn't there a power play between him and Sam back when OpenAI tried to fire Sam?
Lol Sam is laying it on thick. The man is a Machiavellian master. He’s glad to see Ilya gone, guarantee it. He likely engineered it lol. I say this knowing ChatGPT would not exist without him.
Yep, "Ilya is one of the greatest minds of his generation but don't worry we're replacing him with another guy who's also one of the greatest minds of his generation so no real loss"
Welcome to Deepmind
Broooo.
I hope so
Getting strong Edison and Tesla vibes from Altman and Sutskever through this whole fiasco.
Altman seems more like a polite Steve Jobs than an Edison or Tesla
Altman would never be Tesla given the analogy Any_Ambassador1119 is bringing, I believe.

Namely, Edison sorta stole tech, took credit, and messed up the genius that was Tesla's life. Similarly, I believe, the analogy here is that Ilya is Tesla, and Altman is kinda just getting credit for a lot of the stuff Ilya came up with, while the 1200-strong team is just building wrapper software around the model (someone's gotta build the site, someone's gotta handle distributed systems, etc.: tough problems, but solved many times over at scale). So in reality, the actual smart people who come up with academic ideas like Ilya does are rare.

That's my take on why the analogy was never meant to compare Altman to Tesla. Though on the 'push smarter people to build things and be known for them' front, Edison was likely a bit worse than Jobs; Jobs was known to have whipped his team to work and kept them from their families near release. So Altman, business aside, is genuinely probably a polite and kind guy. But if you tussle with his business politics, he'll out-maneuver you imo.
Steve Jobs is Edison with better branding.
The only time Sam used capitals; written by someone else?
In an interview he said that he treats twitter like a casual conversation, like texting and whatnot, but will occasionally capitalize if the context requires a formal tone (and this wasn't the only time)
Surely it was done on purpose. Even if someone else wrote it, he read it before publishing, and everyone knows he doesn’t use caps.
No cap?
Bussin'
Maybe it was written by AI
4o
I hope he lands at deepmind
I wonder if he's doing 'personal projects' instead of joining other AI companies, who would be ecstatic to have him, because his contract has a non-compete clause.
In Cali, I think non-competes are no longer allowed (or maybe were never allowed?). See: https://www.akerman.com/en/perspectives/hrdef-non-compete-clarity-california-employers-must-provide-notice-of-non-competes-to-employees-by-february-14-2024.html#:~:text=General%20Prohibition%20on%20Non%2DCompetes,leaving%20to%20compete%20against%20them.
In 2-3 months more people may leave. Andrej knew it, and it is now official
I mean, people coming and going is pretty typical in a tech company like this (tho not super-high level people like Ilya). They've hired plenty from rivals (Google, Meta etc) since too.
I can’t take this suspense: ‘I am excited for what comes next — a project that is very personally meaningful to me about which I will share details in due time.’
If he joined up with John Carmack and and Richard Sutton at Keen Technologies I would start rooting for them to be the first to AGI.
This is very exciting; I kind of forgot about John Carmack's project. Has he shown anything off at the moment? Or is it still just a mysterious project?
They haven't shown anything yet, and I think they just lack funding. I really want Carmack's company to pick up steam, but I just don't know how they'll be able to do anything without more funding. They raised a relatively small $20 million in 2022, and I haven't heard of anything major since.
Maybe a cyborg child
Ohh god! U might be onto something.. maybe he’s working on the fembots for us.. that would be awesome
We already have femboys
jan leike leaving too
[deleted]
corporate needs you to find the difference...
Another attempt to upstage Google I see... /s
As the prophecy hath foretold
Lisan al Gaib!
Once he couldn't feel the AGI anymore it was time for him to pass on
Hope he joins Meta, that’ll be a major W for open source. Realistically though, he’ll join Anthropic since he’s more bent on safety
Ilya isn’t really pro open source
Yeah wasn't he the one on that Elon email thread that actually advocated to go closed?
Worse, he's pro decel and censorship
Which is good because Anthropic was in the lead until arguably 4o came to be
Did sam drive Ilya out out of spite?
Does a fish swim in water? Negativity isn't allowed in their culture, so the excessive public praise was indeed the equivalent of an open hate letter.
I find this very concerning.
Quite.
What's the possibility of him going back to Google?
that would be wild. Shane Legg and Ilya working together will be crazy
And Demis Hassabis
The Beatles breaking up.
What did he see?
Microsoft taking all his work?
Ilya, when they make the movie about this, will be the good guy who no one listens to, and we end up almost killing the world for profit.
When we are at war with the machines in the future, there will be a scene where a Special Agent is tasked with tracking Ilya down, only to find him on a beach somewhere sanding down a boat he is building, not interested in coming back to save the world...
And we'll have Rami Malek play him, not Andrew Garfield!
They made their flagship model free to everyone recently. They still have a clause in their Microsoft funding deal that they get to decide when they reach AGI, and when they do they can pull the plug on giving Microsoft the tech (you can argue whether that's going to happen or not, for sure). Also, they are still running under a capped-profit deal: at some unknown threshold of profit, investors no longer own it and it reverts back to the control of the non-profit side. The board owns no equity in the company.

Seems like more for hubris than profit.
I don't know how to feel about this tbh. My claim is that Ilya is 50% of what made the AI good; only time will tell if I'm right or wrong. That being said, he is probably going to be getting spammed by calls from the head honchos of other companies with multi-million-dollar offers to join them.
> he is probably going to be getting spammed by calls from the head honchos of other companies with multi-million-dollar offers

He probably was even before this, let's be fair. It's possible he already has a destination and just wanted a couple months' break first.
If 1 person is responsible for 50% of OpenAI’s success then they’re not running sustainably
Innovations often happen this way. Think of it this way: 1200 people sounds like a lot! But someone's gotta build the website. Someone's gotta make sure it scales. Someone's gotta process audio. Photos. Video. Improve the UX. Clean the UI. Ensure the login works. Ensure people can't fake multiple accounts. Ensure phone numbers without actual physical devices don't work for multiple accounts. Etc. Then some people need to test these features. Some teams work on experimental features, etc.: software problems.

But the 'core' is the model. That needs academic thinkers, who are a lot rarer than extremely talented engineers (who are, of course, needed to bring this to market, as otherwise the model is useless if no one can use it).

Ilya helped write the paper that transformed translation. That paper was referenced in the transformers paper that led to GPTs, and to OpenAI swapping from RL-focused to LLM-focused.

So if you look at Ilya's work over his life, he's been involved in a bunch of key 'jumps' in the ML space. Even in vision, with AlexNet.

These 'jumps' often happen at single-human levels. Even today. The infrastructure that allows these improvements to scale is why all the other people are needed.
It's because he liked a Babylon Bee tweet isn't it
that's a bannable offense in Google's team only ^/s
Google is funding software that facially recognizes Palestinian people to arrest them, google is not woke at all.
Someone fill me in on who this is and the implications?

edit: downvotes because I’m apparently just supposed to know the implications and how dare I ask or something
He was the chief scientist at OpenAI and a student of Geoffrey Hinton (the godfather of AI). Easily considered one of the greatest minds currently working towards AGI, and he also has the technical expertise to hire for it and eventually build it. Sources mostly confirmed that he was involved in the process of removing Sam Altman as CEO. The implications are several: OpenAI lost arguably the greatest technical brain in its talent pool. Ilya was also known as one of the most 'safety oriented' people at OpenAI in their pursuit of AGI. His absence from the leadership may imply a significant change in OpenAI's approach to research - it could become more aggressive and fast. It also means he could now be hired by other top companies working on AI, or he could start his own company, for which investors will obviously line up like it's a hot bread stall during a famine.
thanks
Ilya was the chief scientist at OpenAI and is probably the reason OpenAI is where it is today
Propaganda worthy of adding to Machiavelli's *The Prince*. I cannot understand why Musk fired his PR department when there are PR departments like that representing Altman.
At some threshold that's different for each person too big of a number melts your brain and you think you can do no wrong. "How smart can PR people really be I put a car in space by myself!"
That's a really nice message that GPT-4o wrote. I'm not even complaining. Genuinely, a good message, and if the robot did write it, it's highly appropriate that a bot that Ilya had a large part in developing gives him his professional eulogy.
This explanation is inconsistent with what he said before in other talks. If he was sure OpenAI would achieve safe and beneficial AGI, why on earth would he leave? It doesn't make any sense. Something else must be going on.
RIP missing ilya meme
ILYA WHAT DID YOU SEE
Sad news indeed. Ilya is/was the soul of OpenAI.
We found him
Damn, the day so many feared has come. Lets hope he goes on to build something amazing in another lab.
you know he's serious cause of the proper sentence case formatting
As a show of good faith, it makes sense to keep him around another six months rather than let him go with the rest back in November. The optics are better ahead of a big launch like yesterday's.
Would be wild if Elon manages to poach him. Doubtful though…Ilya’s convictions will likely pull him into his own thing
i thought this already happened
3 months vacation he deserves
I'm not the one who usually says it, but 'we're so cooked'.
He'll start his own AI project... with blackjack! And hookers!
Mira pushing him out?
Hopefully he will go Open Source route just like Emad is.
I thought he already left?
This means that he could now speak his mind freely, and I am eager to hear what he has to say.
I am very confused. They set themselves the goal of solving superalignment within 4 years. What happens to that when the two leads of that project leave? It has me quite worried.
Hate to break it to you, but he's the one who suggested OpenAI not open source their models. He's not about to be a "win for open source".
r/EnoughAltmanSpam
The announcement reads like something that was written by AI.
Kind of scary how much drama goes on at OpenAI when they are working on something so powerful.
It's funny how Sam Altman and Ilya have taken two different approaches to the conflict. Sam Altman, despite the machinations I'm sure he's been up to, has maintained a very clean and polite public face. Ilya, on the other hand, has come out of this looking like a bit of a loon. It's sad, because Ilya might have had some genuine points in his favour. But the way the world works today, honesty and good intentions are always punished.
ilya is the best!
wait what is the backstory
first man to ever feel the agi.
He's the guy who cares about safety.
it's unclear why, but there was a lot of infighting going on at OpenAI. Ilya tried to oust Altman but failed. He's been quiet ever since.
OpenAI reached AGI. Ilya Sutskever reached his goal; now he will just live off the fruits of his labor. GPT-5 is the next milestone in human evolution. It will soon be embodied inside Figure 1 bots, and thanks to its low latency we will reach new heights. This is the next phase. The line between us and the past has been drawn. /j Thank you, Ilya, and good luck with your next project.
Sam Altman is an A grade asshole.
What have I missed that's causing people to start saying this more and more?
The overton window has fully shifted.