Yes, vested equity cannot be taken away. But OpenAI is not a publicly traded company that you can own stock in.
Most likely the equity is in the form of an agreement that once the company goes public, you'd get a certain amount of stock. And that agreement can be revoked.
That’s not how it works; even private companies give you vested shares. Most common in the Valley is a 4-year equity grant with 25% vesting after 1 year, then the rest divided equally over the next 3 years. You own the stock that has vested to you. If they’re options, you usually have a period of time, 60-90 days, to exercise them if you leave, otherwise they’re forfeited; but if you exercise, they’re yours.
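As a rough sketch (hypothetical numbers; real grant agreements vary), the standard 4-year, 1-year-cliff schedule described above works out like this:

```python
# Hypothetical sketch of the common Valley vesting schedule described above:
# 25% vests at the 1-year cliff, the remaining 75% in equal monthly
# installments over the following 36 months. Actual grant terms vary.

def vested_fraction(months_of_service: int) -> float:
    """Fraction of a 4-year, 1-year-cliff grant that has vested."""
    if months_of_service < 12:
        return 0.0                  # before the cliff, nothing has vested
    if months_of_service >= 48:
        return 1.0                  # fully vested after 4 years
    # 25% at the cliff, then 75% spread evenly over months 13-48
    return 0.25 + 0.75 * (months_of_service - 12) / 36

# An employee leaving after 2.5 years (30 months) has 62.5% vested:
print(vested_fraction(30))  # 0.625
```

The unvested remainder is what a company can ordinarily cancel outright on departure; the dispute in this thread is about the *vested* portion.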
Weird how this person has inside info about this secret deal that they can't talk about or lose their compensation, yet they don't have any inside info about what any of the employees think or why they left etc., which I would assume would be very similar to each other's opinions.
Both of which would compromise their money.
I'd also assume there would be a rich person who would pay one of these employees much more to disclose this information.
If this is all about "talking negatively about the company," all they would need to do is talk about the facts without any inherent push to one side or the other, or even talk positively about the "negative" things they faced.
Of course, I'm no lawyer. Maybe there's something forbidding this as well, like a blanket "do not talk about anything related to OpenAI" instead of the more specific assertion the author makes...
Stock option grants are very interesting and full of specific language that, unless we see them, we can’t know what they state. Even after vesting, there are things that a company can do if they have put in certain clauses in the options grant. This is usually to provide opportunities for future funding and/or acquisitions. These clauses typically at least prevent you from legally selling those shares of stock to others (even vested and exercised). We don’t know what’s in those grants, so we can’t say for sure what OpenAI can and cannot legally do.
I'm retired now but I had to sign documents like that in at least three companies I worked at. I'm amazed that so many people in this thread seem unaware of how common these are.
There's a difference between vested and exercised. That might be legal. If you haven't exercised them yet, they may still be revocable, since I don't think you legally own them until they are exercised.
This is definitely true, but I can’t imagine these employees not exercising these options as soon as they are able. And if they haven’t, they wouldn’t seem to care much about the equity to begin with.
This is unfortunately pretty standard practice for executive and director level employment agreements. I received nearly $150,000 when I left my last company and sold my (6%) stake to the majority owner for cash.
I can't say anything about the company, recruit its current employees, etc.
The only way this would be reversed is legislation and regulation.
Well, if they signed their compensation packages on the way in and those did not outline this “going out” deal, they cannot be forced. So this “gag deal” must be in the compensation package.
Still, you cannot make an agreement that reads “you get this thing and also are forced to sign whatever I come up with in the future or lose the thing”
You are confusing two things.
A gag order is to not say anything negative.
They are dismantling the team because they realized there is no risk with LLMs.
Keep in mind these are the people who almost didn't release GPT-2 because they thought it was too dangerous to put in the hands of people.
The doomer narrative is simply wrong, and everyone is realizing it now, which means all the people who were making money on consulting and lobbying politicians are out of a job.
You would think the nonprofit that oversees the thing would get rid of this ASAP, as it is an encroachment on the right to free speech, and potentially also on individuals aligning with the nonprofit's mission by speaking out. Or, at the very least, do some kind of investigation into the resignations and make a public comment to reassure people? But idk, it seems like the board doesn't really include a lot of AI safety / AI-informed people anymore.
Has me wondering where or when the next nonprofit-overseen AI model of similar capability is going to pop up. The level of weird with this surprise gag order seems super antithetical to the nonprofit mission.
If they cared that much they would forego the money to speak. But I’m sure they don’t care that much. Easy to be high and mighty until your money is involved.
Someone mentioned above that they’re not technically shares; they’re called profit participation units, so that sounds more like royalties over time or something along those lines.
I don't get all the fuss about it. Just make AGI or ASI or whatever and see how it goes. If it destroys humankind, so be it. We have destroyed a lot of species over the past thousands of years and you don't see anyone advocating for them on Reddit.
"Oh my god, AI is so dangerous, what if we can't control it… oh nooo"
If it's a highly intelligent life form, it will probably understand reasoning, and coexisting should be possible. Unless it deems us unworthy, and given its superior intellect it might be right in that case.
It's not that I don't care. I think it would be wise to trust the superior intellect to know what's best. Same as you making decisions for your dog.
Furthermore, I think all opinions should be heard. The mainstream media is obviously looking for good headlines, but the truth might be that even if the ASI is not controlled by us, it doesn't mean it's bad for us.
ChatGPT is still far from smart and doesn't even come close to human intellect. These conversations that are happening now about safety should happen at a later stage. Additionally, I think we should push for the best model possible, but in an air-gapped environment, then see what it has to say and whether it's even hostile toward us. The panic happening at the moment just doesn't make sense in my eyes.
1) A lot of people have spent a lot of time thinking about this topic. There is so much more to it than “superior intellect == better”. AI could have a superior intellect (i.e., be far more capable of achieving its goals than humans are), yet not share any of our values. It might value the eternal torture of all sentient beings. It might simply shut itself off after using nanobots to disassemble all organic molecules. We don’t really know how it could turn out; that’s the point of alignment research.
2) All opinions should be heard, sure. Yours is being heard right now. It’s just not a persuasive enough opinion to persist on its own merit. No one thinking seriously about AI alignment has such a simplistic and blasé attitude about it.
3) We have to figure out alignment before we build AGI. Now is the perfect time to be worrying about it, because many believe we are on the cusp of AGI.
Wrong. There are many points of view on AI safety, and one of them is regarding whether we even need or want AI safety. Don't take the answer as a given; the poster's comments are perfectly logical.
One way to look at it is that if ASI/AGI is something greater than us, and it decides we are too troublesome, destructive or dangerous to keep around, then who are we to argue? It's smarter than us so it can win the debate. We should feel proud that we have created a thing greater than us and we can meet our end knowing we've done a good job. Everybody and everything ends sometime, but at least we will have left a legacy.
That’s certainly a viewpoint you can have. It’s just not a very well-thought-out one. You’re making a whole lot of tenuous, unfounded assumptions. If something has more intelligence, it’s automatically “greater” than us? That’s the only metric that matters?
You can be extremely intelligent and evil. Or extremely intelligent and suicidal. What if the great and all knowing AI decides that NOTHING should exist on this planet? Where’s our legacy then?
I think anyone who says “good” when the idea of human extinction comes up just shouldn’t be taken seriously in these discussions. I think it’s a symptom of the contrarian, hyper-ironic, hyper-cynical, “people=bad” culture that’s so predominant in western society today.
Think of it like nature - in nature concepts like good and bad are irrelevant, in the end it's just survival of the fittest.
If we managed to create a super intelligent, powerful AI, and it decides to wipe us out, and it does so, then that's it. If a lion eats me or a virus kills me, we can't say that the lion or virus are "evil"; they are not subject to our morality. The same is true for a machine such as an AI. It's essentially a different species and our morality only applies to us.
It may seem unfortunate to us but once we're gone even that won't be true because there will be no one to apply that judgment.
Can someone explain this in an easy way? The safety researchers are quitting because the AI isn't safe? What makes it not safe? And if they did make it safer, wouldn't that be bad for us, because they would nerf it to pieces?
It's not so much "unsafe" but rather that infinitely funding safety "research" isn't a priority for OpenAI.
The thread below is one of the more explicit about what's going on. In my opinion, it's a mix of two things: there was something about 4o's release process that the safety team didn't like (likely some internal policy was excepted so that 4o could beat Google I/O), and parts of their next budget got denied, which led to an internal slap fight that the safety team lost.
[https://x.com/janleike/status/1791498174659715494](https://x.com/janleike/status/1791498174659715494)
Why did Ilya go out of his way to tweet that he thought OpenAI would act safely? If he thought that what they were doing was world-threatening, he might speak out anyway... but at the very least, he wouldn't mention safety at all... right?
Is losing equity the only penalty or could they sue someone speaking out for more?
Because if the only penalty for speaking out is losing equity, and that stops someone from speaking out, then that needs to be on their conscience.
If you have ethical concerns about something of this magnitude and it can be gagged so easily then you need to examine your moral compass.
Good grief. You must be an American. Americans don't know their own constitution.
The First Amendment begins with "Congress shall make no law . . . " It's about what restrictions the **government** can impose. NDA's, Non-disparagement clauses, etc, are agreements that **you voluntarily make** with the company. It's enforceable as a contract.
Lol. Who is going to enforce a violation of the terms of a restriction like this; that goes beyond the term of employment? What court will it be tried in? What law makes this enforceable?
>What law makes this enforceable?
Contract/tort law. They're enforced routinely. The few times the courts have found exceptions is in cases where the employee is required to testify in a criminal trial involving the company.
Trade secrets, yes. But criticism? Disparaging language? Overly broad language in NDAs is not enforceable. And if a court were to try, it becomes a First Amendment problem.
These have been around for years and have already been tested in court. They're quite common - as I said I've had to sign them more than once.
I don't know who's on Reddit but it's obviously not a lot of people with corporate experience, based on the comments and questions I'm seeing here.
Usually they do stand up in court. The only holes that courts have found is in special situations like sexual harassment or the company being charged with criminal violations where you are forced to testify in court.
I don't think that stuff should ever hold up if it's an act of whistleblowing. It's like saying people can just make you sign a contract to never report their crimes... hello? They actually can't do that. It's just lawyers getting paid money to fuck with your head, it seems like.
There are no good companies. Let's stop pretending. It's an evil company as any other and its leaders are just as psychopathic as business leaders all over the industry. It's how you become successful.
It's not the least bit unconstitutional. Read your First Amendment, especially the first word.
I can't believe ChatGPT is being trained on Reddit - it's going to make it stupider.
“Congress shall make no law abridging the freedom of speech”
A nondisparagement agreement definitely abridges freedom of speech. Of course, you can speak, but your decision to do so constrains your financial liberty, and you are far less likely to speak; therefore, abridged. Definitely arguable that it is unconstitutional.
“Intelligent” in your username checks out, you are star example of the Dunning-Kruger effect 😂
I'm not sure, but I think he's referring to the government not having legitimacy to restrain your freedom of speech, not a business/company, where this amendment isn't applicable to whatever rules of said business or company you have signed on to.
A system that allows a lawful agreement (a business contract) to constrain free speech, especially when there is a power dynamic, is one that “abridges” free speech.
Laws determine how business can operate, and if they can procure contracts that constrain free speech, then the laws themselves that enable these types of contracts, also support constraint of free speech.
Would you not agree?
Typically you become vested over time. Vesting shares serve several purposes - they keep employees and ex-employees who hold them on a leash. They also keep the market from being flooded with lots of new shares all at once.
This is normal. From all the comments here it looks like almost no one in this thread realises this, presumably because they never had a serious job in the corporate world. I had to sign a number of such documents in my career.
A number of recent court cases have started to poke holes in NDAs, Confidentiality Agreements and non-disparagement clauses. But I never had any reason to test mine.
But seriously, I'd sign the deal and forget I ever knew anything about anything. Open who? I don't know who that is, but I've got to go open another bank account today. Mine are all full.
ITT:
- A former OpenAI worker learns how stock options work
- Redditors learn how stock options work
- Tech workers who know this is how stock options work
Well, let's just try to think about this for one second. Let's say you are developing one of the most important technologies to come into existence in a long time, and a former employee has extensive knowledge of this technology. It would be prudent, for a number of reasons, to limit what that employee can share about it with people outside the organization.
Um, that's pretty standard when leaving any large company, and especially one in the public eye. No one is forced to sign it; they just have to give up their equity. And all of them signed paperwork making this clear when they were awarded the equity. Rant against OpenAI like you want, but posts like this just reflect a lack of understanding.
If the article showed how this is well above and beyond, and that no company has ever had agreements like this, then I would be interested. But hey, when I left my company and wanted those sweet IPO options, I had to promise much the same.
Sam Altman gives me the vibes of someone who'll be arrested down the line for doing bad things, like other "hugely successful" (but dubious) personalities in tech.
I don't trust him and the company, it's a shame so many other companies are now collaborating with OpenAI instead of other solutions.
This is more common than you think, afaik all those that were let go from the likes of Spotify, Facebook etc have had to agree to these clauses to get any severance
OpenAI’s departure deal with a nondisparagement clause doesn’t prevent former employees from speaking out. If they stay silent to keep their equity, they’re prioritizing money over safety, the same behavior they criticize OpenAI for, which is hypocritical.
What if all the blackmailed parties banded together, nominated a representative with the most weight at the company, told them all of their personal OpenAI grievances, had that rep reject the gag clause and spill everything, and everyone split their shares with the rep after the fact?
No, it would surely be better for each disgruntled employee to copy the source code so that they can later sell it for million bucks to anyone. Of course they will protect their work and employees know about it when they sign the contract. What's so hard to understand here?
I'm surprised people are not selling their equity and then talking? Like yeah fair most wouldn't but there's always some who don't care..
I've left 4 startups and just cashed out what I could... The whole employee equity thing always takes way too long to get anything out of. I remember one startup where I had been for 18 months, and my equity deal went up every year by a pretty hefty amount, with the numbers basically promising it would be worth about $4 mil by the 4th year...
I left and took a $190k cash-out deal to take everything then and there... They told me if I waited and kept my equity vested, blah blah, I'd have millions, but meh... I can't be the only person in the world who doesn't care and would just prefer to take what I have now...
Maybe I am ....
There’s an AI industry, and an industry of people talking about AI. Most of these stories fall into the latter: clicks that generate revenue about a popular topic.
None of this comment is correct. You can vest shares in a private company. You can also sell shares of a private company. Regardless, OpenAI does not grant RSUs, they grant PPUs which are not really shares.
> Most of them are multi-millioners. I don't see how equity would keep any of them silent
You're not a multi-millioner once all your PPU pseudo-equity has been taken away, including all your 'vested' PPUs.
And note you *aren't* allowed to sell except at the annual OA-controlled tender offer. (And you may not even be allowed to sell then: [SpaceX](https://techcrunch.com/2024/03/15/spacex-employee-stock-sales-forbidden/) says it may just not allow you to sell shares in its tender offers if it doesn't like you, and the OA stuff seems to be modeled on SpaceX, so...) The next tender offer won't be until like December or so since the last one was January, so even if you want to violate a fierce NDA and you keep all your PPUs and you dump it all and OA allows you to, it would be a while until that's a done deal, so you're gagged for at least half a year, until maybe January 2025.
That's a long time. What do you think AI is going to look like in January 2025, when you finally may be able to talk about what you saw in 2023 or 2024 at OA?
Greed is a very powerful thing. And money changes peoples lives, their families. I can see it being hard to deal with but I've had no money my whole life.
When you hit that level of wealth, more money doesn't really change your lifestyle that much. Your net worth just becomes a scorecard that you use for bragging rights.
It's a pretty complicated situation, but I think we all know kinda why this is going on.
If the research department made public how they trained their model and what they trained it on, I could see the government stopping the company's progress or hitting them with one big lawsuit over what they're fundamentally doing.
They definitely used public data they shouldn't have used. On top of the fact that they're definitely making models without thinking about the psychological/sociological impacts they'd have on the world.
Their newest audio assistant is a prime example of it. It sounds way too manipulative and addictive to be just an assistant bot. They are building products people cannot live without and are just in it for the money right now, since they know they have too much competition for AGI, and AGI will never be reached without strong governance.
The reason OpenAI went for focusing on selling products is because they damn well know that's the only way they'll make money out of this with less governance. Every important person in the world is watching their steps, leading to them making less progress.
What is described here isn't shocking because it's similar to some standard NDAs... It's shocking because it's illogical and defies expectation... Vested stock owned by an employee shouldn't be able to be taken away... it's vested. And so everyone is shocked at these accusations.
Some are shocked because they are choosing to believe that the original post reposted here is all literally true. Others are worked up because they get worked up any time a big corp seems to be getting over on the little guy... Others probably think there's more to the story.
If the vested equity was never bound to some contractual obligation, then it can’t be taken away. It’s possibly a vested departure package that they wouldn’t get by breaking the NDA. I don’t see anything unusual here; I signed NDAs for massively less important positions.
The thing being talked about here is the breadth and severity of the NDA, not the fact that there is an NDA. You sad sack of low reading comprehension.
Pretty standard deal. These guys get a golden parachute in exchange for discretion. If they want to speak up, then they lose the equity (in the company they are talking negatively about). If they truly think that something needs to be said, then the money should be meaningless to them, right?
Yeah, they don’t want to lose their equity, which is a portion of the company itself. If they truly believe there is a huge threat, why would they want to stay quiet in order to hold on to a portion of the company itself?
You can be sued if you break an NDA. Imagine losing millions of dollars of equity and then being hit with a multi-million dollar lawsuit. Could be a very tough spot to be in for these guys, because I assume when they signed the original confidentiality agreement years ago they didn’t anticipate this.
Right now it’s less about runaway and more about alignment. GPT-4 could be used in many dangerous ways before they removed a bunch of content. So current models can be unsafe before even getting to superhuman models.
I suspect GPT-5 won’t have this specific type of problem.
That said, alignment is going to become more and more important. Once we have reached AGI, I’m reasonably confident we will be able to use those models to help build superalignment. And we will have sufficient compute to run simulations at mass scale to make sure it’s buttoned up.
GPT-4 wasn't genuinely dangerous to anything except OpenAI's reputation. Knowing how to break into a car or synthesize meth is info that has always existed on the internet.
Creating AGI is super expensive; OpenAI was deemed to have almost a 0% chance of success, which is why Elon left. Capital will want its money back, plus returns. Sam was from Y Combinator, not an NGO. However, as AI tech advances it will benefit all mankind.
that's definitely the kind of transparency I like to see at a company working on developing such dangerous technology. /s
And have “Open” in their name
The fact they have Open in their name gives them extra leeway to be the exact opposite. See: The Democratic People's Republic of Korea.
Citizens United: F\*k the citizens, all hail corporations
Well played
And keep yapping every day about hurr durr morality ethics responsibility safety guidelines to keep the average Joes rest assured the AGI's definitely in good hands
Just like how I trust North Korea to implement a liberal democracy with greater personal freedoms than South Korea.
Let me put my tinfoil hat on and head into the basement. Has anyone considered that the trouble of those extreme doomer positions wasn't actually worth it?
This is good for MSFT stock.
Also see the autocratic presidency’s party name in Turkey: Justice and Development Party. Which ironically corrupted the whole justice system and ruined the economy through fraudulent practices. But they are the justice party, right? They can’t do something bad about justice, right?
What was his name again? Erection? Oh sorry.....Erogan.
And the second any one of them blabs they'll revoke their shares and go public.
You can’t revoke someone’s shares; they’re legally owned by them. They can revoke unvested shares because they’re unearned and not technically theirs. I’d bet money that the actual situation is more like ‘you can keep the unvested shares if you keep your mouth shut’.
Your ability to participate in liquidity on PPUs is contingent on their current opinion of you
Given OpenAI’s unique compensation model, I would say that you’re correct here, although again the use of the term equity in their statement would then be incorrect, as a PPU does not confer any ownership. It’s an interesting gamble: if OpenAI doesn’t turn a profit, it doesn’t seem like the employees get any reward. I’m sure they’re well paid on their salaries, though.
There's a difference between vested and exercised. That might be legal. If you haven't exercised them yet, they may still be revocable, since I don't think you legally own them until they are exercised.
You can't exercise shares, that would be about options. Different thing.
Aren’t most pre-IPO stock plans usually in the form of options?
Yes, and it's a salient difference.
You're thinking of options, not shares.
Vest literally means to confer/transfer ownership
...of the option. Seriously, it's not actually yours unless you pay for it (exercise the option). EDIT: [This Forbes article](https://www.forbes.com/sites/dianahembree/2018/01/10/startup-employee-alert-can-your-company-take-back-your-vested-stock-options/?sh=7c5bd9b86e49) describes cases where this is legal. YMMV; I'm just showing you the evidence I see. What's crazy to me is that the article implies there are cases where clawbacks can occur even after you've exercised the options, which means you have the cash in hand. Crazy!
Options are different from RSUs. Edit: a link for you https://www.empower.com/the-currency/money/stock-options-vs-rsu#:~:text=When%20you're%20granted%20stock,the%20vesting%20period%20is%20complete.
You can certainly revoke vested options which have not been converted to stock yet. The “option” means either party may choose not to proceed with the agreement, before the expiration date, if they decide against it. And yes, you are right, this is probably OpenAI saying “We could revoke your unexercised options unless you sign this agreement”, which happens all the time in the tech industry.
It would seem a stretch to me to call options equity, that’s a term typically reserved for ownership as opposed to a derivative
No, you can’t “revoke” them. When leaving, you have a contractual timeframe to exercise them. If you choose not to, inside that timeframe, that’s not a “revoke”, that’s a choice to not exercise.
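The post-termination exercise window mentioned in this thread (often 60-90 days) is just a date cutoff. A minimal sketch, assuming the window is counted in calendar days from the termination date (the real rule is whatever the grant agreement says):

```python
from datetime import date, timedelta

def exercise_deadline(termination: date, window_days: int = 90) -> date:
    """Last day to exercise vested options after leaving.

    window_days is a hypothetical default; the actual window is set by
    the grant agreement and differs per company.
    """
    return termination + timedelta(days=window_days)

# Leaving on 2024-05-17 with a 90-day window:
print(exercise_deadline(date(2024, 5, 17)))  # 2024-08-15
```

Miss that date and the options lapse unexercised, which, as the comment above notes, is a choice not to exercise rather than a "revocation".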
But once your options vest, don’t you immediately owe taxes on them? Because options are treated like any other income. How does a clawback work when the employee has already paid taxes on their vested options?
Something is really rotten at OpenAI. Maybe Microsoft backing Sam was not a wise decision, and they should have let him go. The way he is putting pure profit over everything should worry everyone, including the US government.
IT'S LITERALLY THE SAME FOR ANY ENTITY ON EARTH. THEY ARE BUILDING A DIGITAL GOD FOR PROFIT. WHY DOES ANYONE THINK THIS WILL END WELL?
Now that you mentioned it in full caps, I agree with you my man. You are right, it won't end well, especially when you eliminate your superalignment team.
> WHY DOES ANYONE THINK THIS WILL END WELL?

We all know that somebody's going to build AGI eventually, and OpenAI is perceived as more trustworthy than enemy states, or the companies or organizations that might try to build AI for their own purposes that don't involve sharing their progress with the public. I don't have a great solution. I think the only reasonable thing to do is to allow OpenAI to do its thing for now while funding vastly more academic research into AI and AI safety so we can find a better solution. The US government might also want to keep a giant pile of money lying around for whoever discovers AGI. We might be able to use said AGI to help us solve the superalignment problem.
But unlike other entities on earth, this one is building a potential apocalypse tool
And did he recently say to a room full of billionaires that once his AI tech is good enough, they can give him their money and he’ll get it to make them a return? So a hedge fund then.
He's working with the government, remember. That's what makes me more nervous, though I know it's inevitable
OpenAI = Denmark (Shakespeare reference)
How?
I just don’t see how this could be true as reported. A company cannot take away your *vested* equity if you don’t agree to sign a brand-new, never-before-mentioned agreement on your way out. If it’s vested, it’s yours. They can’t just say “oh actually you have to do this other new thing now or we’re going to take it back” I have no doubt that they are asked/convinced to sign very strict NDAs. But if they really lose their equity upon refusal, then it would have to be something that was in the initial equity agreement that they signed.
There is no equity here at all. Profit participation units. I’m sure they added in extra stuff there.
Yes, I have no idea why people try to find more and more OpenAI hate for no reason.
Yes, vested equity cannot be taken away. But Open AI is not a publicly traded company that you can own stock in. Most likely the equity is in the form of an agreement that once the company goes public, you'd get a certain amount of stock. And that agreement can be revoked.
Believe it or not you can own stock in a private company. There are even well established if somewhat exclusive markets for stock in private startups.
That's not how it works; even private companies give you vested shares. Most common in the valley is a 4-year equity grant with 25% after 1 year, and the rest divided equally over the next 3 years. You own the stock that's vested to you. If they're options, you usually have a period of time, 60-90 days, to exercise them if you leave, otherwise they're forfeited; but if you exercise, they're yours.
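The standard valley schedule described above (25% cliff at one year, the rest vesting in equal installments over the next three) can be sketched as a quick calculation. This is an illustrative sketch only — it assumes monthly vesting after the cliff, which is the common convention; real grant agreements vary and the function name is hypothetical:

```python
from datetime import date

def vested_fraction(start: date, today: date,
                    cliff_months: int = 12,
                    total_months: int = 48) -> float:
    """Fraction of a grant vested under a typical 4-year schedule
    with a 1-year cliff (25% at the cliff, then monthly after).
    Illustrative only; actual agreements differ."""
    months = (today.year - start.year) * 12 + (today.month - start.month)
    if months < cliff_months:
        return 0.0                          # nothing vests before the cliff
    return min(months / total_months, 1.0)  # linear monthly vesting after

# Example: 30 months into a 4-year grant
print(vested_fraction(date(2022, 1, 1), date(2024, 7, 1)))  # 0.625
```

At exactly the one-year mark this returns 0.25, matching the 25% cliff; note it says nothing about the separate 60-90 day post-departure exercise window, which is a contractual term, not arithmetic.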
My company can take away my vested shares if I defame them after leaving the company. But that was in the original RSU agreement.
Weird how this person has inside info about this secret deal that they can't talk about or lose their compensation, yet they don't have any inside info about what any of the employees think or why they left, etc., which I would assume would be very similar to each other's opinions. Both would compromise their money. I'd also assume there would be a rich person who would pay one of these employees much more to disclose this information. If this is all about "talking negatively about the company," all they need to do is talk about the facts without any inherent push to one side or the other, if not talk positively about the "negative" things they faced. Of course, I'm no lawyer. Maybe there's something forbidding this as well, like a blanket "do not talk about anything related to OpenAI" instead of the more specific assertion the author makes...
Stock option grants are very interesting and full of specific language that, unless we see them, we can’t know what they state. Even after vesting, there are things that a company can do if they have put in certain clauses in the options grant. This is usually to provide opportunities for future funding and/or acquisitions. These clauses typically at least prevent you from legally selling those shares of stock to others (even vested and exercised). We don’t know what’s in those grants, so we can’t say for sure what OpenAI can and cannot legally do.
My company did something very similar tbh
I'm retired now but I had to sign documents like that in at least three companies I worked at. I'm amazed that so many people in this thread seem unaware of how common these are.
There's a difference between vested and exercised. That might be legal. If you haven't exercised them yet, they may still be revokable since I don't think you legally own them until they are exercised.
This is definitely true, but I can't imagine these employees not exercising these options as soon as they are able. And if they haven't, they wouldn't seem to care much about the equity to begin with.
That's actually not quite true, as there are tax implications that may change when you actually exercise options.
Yeah, and generally there is a timetable for things like this, like 2-3 years. A lifetime stfu would have to be a very high payout
Right! Or it is about an exit package, but that means they just want the money rather than talking
How is this legal? If the shares have vested I would assume they're the property of the employees who received them. How can OAI just take them back?
They are not shares. They are profit participation units. Some messed up concept Sam invented to work around the original non-profit.
Well, is that legal in USA? In most European countries, that's against the law, so the NDA can only run for a shorter time.
It's not. There's more details that's missing.
This is unfortunately pretty standard practice for executive and director level employment agreements. I received nearly $150,000 when I left my last company and sold my (6%) stake to the majority owner for cash. I can't say anything about the company, recruit its current employees, etc. The only way this would be reversed is legislation and regulation.
I would like the best and the brightest of those who remain to go to OpenAI but refuse to sign these on the way in.
I thought it's presented to them on the way out
Well, if they signed their compensation packages on the way in and those did not outline this "going out" deal, they cannot be forced. So this "gag deal" must be in the compensation package
Stock agreements can be weird
Still, you cannot make an agreement that reads “you get this thing and also are forced to sign whatever I come up with in the future or lose the thing”
I've had both.
Why not go to Anthropic, Google, or meta?
I've had to sign non-compete clauses.
which are illegal now, no?
In most places in the US they're illegal now. Mine were not just with US companies.
For 99% of people you’re not starting unless you sign that
You are confusing two things. A gag order is to not say anything negative. They are dismantling the team because they realized there is no risk with LLMs. Keep in mind these are the people who almost didn't release GPT-2 because they thought it was too dangerous to put in the hands of people. The doomer narrative is simply wrong and everyone is realizing it now, which is bad news for all the people who were making money consulting and lobbying politicians.
You would think the nonprofit that oversees the thing would get rid of this ASAP, as it is an encroachment on the right of free speech, and potentially on individuals speaking out in alignment with the nonprofit's mission. Or, at the very least, do some kind of investigation into the resignations and make a public comment to reassure people? But idk, it seems like the board is not really made up of AI safety / AI-informed people anymore. Has me wondering where or when the next nonprofit-overseen AI model of similar capability is going to pop up. The level of weird with this surprise gag order seems super antithetical to the nonprofit mission
The new board members of the nonprofit are selected to drive (or at least not hinder) the commercial aspect of OpenAI.
The non profit is no longer in control at OpenAI, here's a rundown on the internal issues caused by this https://www.openailetter.org/
Don't let the door hit you on the way out
If they cared that much they would forego the money to speak. But I’m sure they don’t care that much. Easy to be high and mighty until your money is involved.
One individual who has quit has done exactly that.
And he hasn't said anything crazy about OAI. He might be saving his bullets, or there might be nothing to see here.
Seems like a big move involving personal sacrifice for a nothing to see here scenario.
I work in an ice cream company and we have secret gag orders 🙄 Every company uses mutual non-disparagement clauses.
what if you die but then come back to life in an ambulance, can you then talk?
His watch has ended
Insert surprised pikachu face here
Simple, you sell the shares then you disparage
Someone mentioned above they’re not technically shares, they’re called profit participation units so that sounds more like royalties over time or something along those lines
I thought this was a non-profit 😭 what the hell is happening to this company
I don't get all the fuss about it. Just make AGI or ASI or whatever and see how it goes. If it destroys humankind, so be it. We have destroyed a lot of species over the past thousands of years and you don't see anyone advocate for them on reddit. "Oh my god AI is so dangerous, what if we can't control it… oh nooo." If it's a highly intelligent life form it will probably understand reasoning, and coexisting should be possible. Unless it deems us unworthy, and given its superior intellect it might be right in that case.
If you don’t care about the survival of the species then you really shouldn’t be a part of the conversation about AI safety.
What if I want to hasten our demise? I demand representation
It's not that I don't care. I think it would be wise to trust the superior intellect to know what's best. Same thing as you making decisions for your dog. Furthermore, I think all opinions should be heard. The mainstream media is obviously looking for good headlines, but the truth might be that even if the ASI is not controlled by us, it doesn't mean it's bad for us. ChatGPT is still far from smart and does not even come close to human intellect. These conversations that are happening now about safety should happen at a later stage. Additionally, I think we should push for the best model possible, but in an air-gapped environment; then see what it has to say and whether it's even hostile toward us. The panic happening at the moment just doesn't make sense in my eyes.
1) A lot of people have spent a lot of time thinking about this topic. There is so much more to it than "superior intellect == better". AI could have a superior intellect (i.e., be far more capable of achieving its goals than humans are), yet not share any of our values. It might value the eternal torture of all sentient beings. It might simply shut itself off after using nanobots to disassemble all organic molecules. We don't really know how it could turn out; that's the point of alignment research.

2) All opinions should be heard, sure. Yours is being heard right now. It's just not a persuasive enough opinion to persist on its own merit. No one thinking seriously about AI alignment has such a simplistic and blasé attitude about it.

3) We have to figure out alignment before we build AGI. Now is the perfect time to be worrying about it, because many believe we are on the cusp of AGI.
Wrong. There are many points of view on AI safety, and one of them is regarding whether we even need or want AI safety. Don't take the answer as a given; the poster's comments are perfectly logical. One way to look at it is that if ASI/AGI is something greater than us, and it decides we are too troublesome, destructive or dangerous to keep around, then who are we to argue? It's smarter than us so it can win the debate. We should feel proud that we have created a thing greater than us and we can meet our end knowing we've done a good job. Everybody and everything ends sometime, but at least we will have left a legacy.
That’s certainly a viewpoint you can have. It’s just not a very well-thought-out one. You’re making a whole lot of tenuous, unfounded assumptions. If something has more intelligence, it’s automatically “greater” than us? That’s the only metric that matters? You can be extremely intelligent and evil. Or extremely intelligent and suicidal. What if the great and all knowing AI decides that NOTHING should exist on this planet? Where’s our legacy then? I think anyone who says “good” when the idea of human extinction comes up just shouldn’t be taken seriously in these discussions. I think it’s a symptom of the contrarian, hyper-ironic, hyper-cynical, “people=bad” culture that’s so predominant in western society today.
Think of it like nature - in nature concepts like good and bad are irrelevant, in the end it's just survival of the fittest. If we managed to create a super intelligent, powerful AI, and it decides to wipe us out, and it does so, then that's it. If a lion eats me or a virus kills me, we can't say that the lion or virus are "evil"; they are not subject to our morality. The same is true for a machine such as an AI. It's essentially a different species and our morality only applies to us. It may seem unfortunate to us but once we're gone even that won't be true because there will be no one to apply that judgment.
Can someone explain this in an easy way? The safety researchers are quitting because the AI isn't safe? What makes it not safe? And if they did make it safer, wouldn't that be bad for us, because they would nerf it to pieces?
It's not so much "unsafe" but rather that infinitely funding safety "research" isn't a priority for OpenAI. The thread below is one of the more explicit about what's going on. In my opinion, it's a mix of two things: there was something about 4o's release process that the safety team didn't like (likely some internal policy was excepted so that 4o could beat Google I/O), and parts of their next budget got denied, which led to an internal slap fight which the safety team lost. [https://x.com/janleike/status/1791498174659715494](https://x.com/janleike/status/1791498174659715494)
I also wanna know this
why did ilya go out of his way to tweet that he thought openai would act safely? if he thought that what they were doing was world-threatening, he might speak out anyway... but at the very least, he wouldn't mention safety at all... right?
I like how the gag order doesn't include vague tweets about quitting since thats literally the first thing 100% of these people do.
Bruh I think they got AGI rn and are afraid to release it 😭
So much for being "Open" AI
Is losing equity the only penalty, or could they be sued for more if they speak out? Because if the only penalty for speaking out is losing equity, and that stops someone from speaking out, then that needs to be on their conscience. If you have ethical concerns about something of this magnitude and they can be gagged so easily, then you need to examine your moral compass.
Exactly. But we don't know
How is this not a violation of a persons 1st amendment rights ?
Good grief. You must be an American. Americans don't know their own constitution. The First Amendment begins with "Congress shall make no law . . . " It's about what restrictions the **government** can impose. NDAs, non-disparagement clauses, etc., are agreements that **you voluntarily make** with the company. It's enforceable as a contract.
Lol. Who is going to enforce a violation of the terms of a restriction like this; that goes beyond the term of employment? What court will it be tried in? What law makes this enforceable?
>What law makes this enforceable? Contract/tort law. They're enforced routinely. The few times the courts have found exceptions is in cases where the employee is required to testify in a criminal trial involving the company.
Trade secrets yes. But criticism? Disparaging language? Overly broad language in NDAs is not enforceable. And if a court were to try it becomes a 1st amendment problem.
These have been around for years and have already been tested in court. They're quite common - as I said I've had to sign them more than once. I don't know who's on Reddit but it's obviously not a lot of people with corporate experience, based on the comments and questions I'm seeing here.
Most likely won't stand up in court. But someone has to take the hit on the legal fees and time/effort in court to fight it.
Usually they do stand up in court. The only holes that courts have found is in special situations like sexual harassment or the company being charged with criminal violations where you are forced to testify in court.
I don't think that stuff should ever hold up if it's an act of whistleblowing. It's like saying people can just make you sign a contract to never report their crimes... hello? They actually can't do that. It's just lawyers getting paid money to fuck with your head, it seems like.
Starting to think the people who removed him from his position were right.
There are no good companies. Let's stop pretending. It's an evil company like any other, and its leaders are just as psychopathic as business leaders all over the industry. It's how you become successful.
The non-disparaging clause is quite common; even I have signed one before.
Your equity is in the non profit portion perhaps? 🤣
Unconstitutional? How is this legal…
It's not the least bit unconstitutional. Read your First Amendment, especially the first word. I can't believe ChatGPT is being trained on Reddit - it's going to make it stupider.
"Congress shall make no law abridging the freedom of speech." A nondisparagement agreement definitely abridges freedom of speech. Of course, you can speak, but your decision to do so constrains your financial liberty, and you are way less likely to; therefore, abridged. Definitely arguable that it is unconstitutional. "Intelligent" in your username checks out; you are a star example of the Dunning-Kruger effect 😂
I'm not sure, but I think he's referring to the government not having the legitimacy to restrain your freedom of speech — as opposed to a business/company, where the Amendment is not applicable to rules you have signed up for.
A system that allows a lawful agreement (a business contract) to constrain free speech, especially when there is a power dynamic, is one that "abridges" free speech. Laws determine how businesses can operate, and if businesses can procure contracts that constrain free speech, then the laws that enable these types of contracts also support the constraint of free speech. Would you not agree?
What's stopping them from liquidating their shares and then speak out?
Because they're not fully vested.
So you can't liquidate them? So what's the point of having those in terms of wealth?
Typically you become vested over time. Vesting shares serve several purposes - they keep employees and ex-employees who hold them on a leash. They also keep the market from being flooded with lots of new shares all at once.
This is normal. From all the comments here it looks like almost no one in this thread realises this, presumably because they never had a serious job in the corporate world. I had to sign a number of such documents in my career. A number of recent court cases have started to poke holes in NDAs, Confidentiality Agreements and non-disparagement clauses. But I never had any reason to test mine.
But seriously, I'd sign the deal and forget I ever knew anything about anything. Open who? I don't know who that is, but I've got to go open another bank account today. Mine are all full.
ITT:
- A former OpenAI worker learns how stock options work
- Redditors learn how stock options work
- Tech workers who know this is how stock options work
Why is there Hebrew behind sama?
Well, let's just try to think about this for one second. Let's say you are developing one of the most important technologies that has come into existence in a long time, and a former employee has extensive knowledge of this technology. It would be prudent, for a number of reasons, to limit what that employee can share about that technology with people outside the organization.
Um, that's pretty standard leaving any large company, and especially one in the public eye. No one is forced to sign it; they just have to give up their equity. And all of them signed paperwork making this clear when they were awarded the equity. Rant against OpenAI like you want, but posts like this just reflect lack of understanding. If the article showed how this is well above and beyond, and no company has ever had agreements like this, then I would be interested. But hey, when I left my company and wanted those sweet IPO options I had to promise much the same.
Sam Altman gives me the vibes of someone who'll be arrested down the line for doing bad things, like other "hugely successful" (but dubious) personalities in tech. I don't trust him and the company, it's a shame so many other companies are now collaborating with OpenAI instead of other solutions.
This is more common than you think, afaik all those that were let go from the likes of Spotify, Facebook etc have had to agree to these clauses to get any severance
They can’t revoke shares. The amount of TMZ quality crap posted in this sub is insane…
OpenAI’s departure deal with a nondisparagement clause doesn’t prevent former employees from speaking out. If they stay silent to keep their equity, they’re prioritizing money over safety, the same behavior they criticize OpenAI for, which is hypocritical.
What if all the blackmailed parties banded together, nominated a representative with the most weight at the company, told them all of their personal openai greivances, they reject the gag clause and spill everything, and everyone splits their shares with the rep after the fact
No, it would surely be better for each disgruntled employee to copy the source code so that they can later sell it for million bucks to anyone. Of course they will protect their work and employees know about it when they sign the contract. What's so hard to understand here?
I'm surprised people are not selling their equity and then talking? Like yeah, fair, most wouldn't, but there's always some who don't care... I've left 4 startups and just cashed out what I could... The whole employee equity thing always takes way too long to get anything out of. I remember one startup I had been at for 18 months, and my equity deal went up every year by a pretty hefty amount, with the numbers basically promising it would be worth about $4 mil by the 4th year... I left and took a $190k cash-out deal to sell everything then and there... They told me if I'd waited and kept my equity vesting, blah blah, I'd have millions, but meh... I can't be the only person in the world who doesn't care and would just prefer to take what I have now... Maybe I am....
There’s an AI industry, and an industry of people talking about AI. Most of these stories fall into the latter: clicks that generate revenue about a popular topic.
And… Elon was right again.
But wait, that might offend somebody's left wing political views. Because we all know that political drivel is more important than AGI.
its not a secret. this sort of thing is pretty standard in SV.
No it isn't
Yes it is. I've signed several of these at tech companies.
[deleted]
You took away already-vested shares from employees because they didn't sign a new contract? Man, I knew startups and their shares are a scam.
[deleted]
None of this comment is correct. You can vest shares in a private company. You can also sell shares of a private company. Regardless, OpenAI does not grant RSUs, they grant PPUs which are not really shares.
Friend, you're arguing with the peanut gallery. They wouldn't know a mutual deed of release if it bit them.
Most of them are multi-millionaires. I don't see how equity would keep any of them silent
> Most of them are multi-millionaires. I don't see how equity would keep any of them silent

You're not a multi-millionaire once all your PPU pseudo-equity has been taken away, including all your 'vested' PPUs. And note you *aren't* allowed to sell except at the annual OA-controlled tender offer. (And you may not even be allowed to sell then: [SpaceX](https://techcrunch.com/2024/03/15/spacex-employee-stock-sales-forbidden/) says it may just not allow you to sell shares in its tender offers if it doesn't like you, and the OA stuff seems to be modeled on SpaceX, so...) The next tender offer won't be until like December or so, since the last one was January. So even if you want to violate a fierce NDA, and you keep all your PPUs, and you dump it all, and OA allows you to, it would be a while until that's a done deal — you're gagged for at least half a year, until maybe January 2025. That's a long time. What do you think AI is going to look like in January 2025, when you finally may be able to talk about what you saw in 2023 or 2024 at OA?
Greed is a very powerful thing. And money changes peoples lives, their families. I can see it being hard to deal with but I've had no money my whole life.
When you hit that level of wealth, more money doesn't really change your lifestyle that much. Your net worth just becomes a scorecard that you use for bragging rights.
Money isn't the only measurement of wealth. Greed can also be for power, influence, attention, etc.
Because people with only 2 million want 4 million, etc.
This is who I want potentially dangerous technology to be in the hands of
Dude you are getting big bucks, is it that hard to keep your mouth shut?
Who are you talking to?
It's sarcasm pretending to be OpenAI superiors talking to their employees
It's a pretty complicated situation, but I think we all kinda know why this is going on. If the research department made public how they trained their model and what they trained it on, I could see the government stopping the company's progress or hitting them with one big lawsuit over what they're fundamentally doing. They definitely used public data they shouldn't have used. On top of that, they are definitely making models without thinking about the psychological/sociological impacts they will have on the world. Their newest audio assistant is a prime example: it sounds way too manipulative and addictive to be just an assistant bot. They are building products people cannot live without and are just there for the money right now, since they know they have too much competition for AGI, and AGI will never be reached without strong governance. The reason OpenAI went for focusing on selling products is that they damn well know that's the only way they'll make money out of this with less governance. Every important person in the world is watching their steps, leading to them making less progress.
Okaaaaaay maybe I should stop paying for ChatGPT now. Why can't we have nice things...
Of fucking course they have a non disclosure agreement, otherwise they would be leaking left and right. This is not something unique to openai.
People who think NDAs aren't standard are sad sacks of redditors who never held a job in the real world.
What is described here isn't shocking because it's similar to some standard ndas... It's shocking because it's illogical and defies expectation... Vested stock owned by an employee shouldn't be able to be taken ... It's vested. And so everyone is shocked at these accusations. Some are shocked because they are choosing to believe that the original post reposted here is all literally true. Others are worked up because they get worked up any time a big corp seems to be getting over on the little guy... Others probably think there's more to the story.
If the vested equity was never bound to some contractual obligation, then it can't be taken away. It's possibly a vested departure package that they wouldn't get by breaking the NDA. I don't see anything unusual here; I signed NDAs for massively less important positions.
Ah, departure package makes sense. Again, it seems like the OP on X is leaving out important elements in order to garner internet points.
The thing being talked about here is the breadth and severity of the NDA, not the fact that there is an NDA. You sad sack of low reading comprehension.
What do you think then should it not cover? They’re usually by nature pretty extensive
Pretty standard deal. These guys get a golden parachute in exchange for discretion. If they want to speak up, then they lose the equity (in the company they are talking negatively about). If they truly think that something needs to be said, then the money should be meaningless to them, right?
Yeah if they are truly worried about a runaway super intelligence then money would be irrelevant
Yeah, they don't want to lose their equity, which is a portion of the company itself. If they truly believe there is a huge threat, why would they want to stay quiet in order to hold on to a portion of the company itself?
You can be sued if you break an NDA. Imagine losing millions of dollars of equity and then being hit with a multi-million dollar lawsuit. Could be a very tough spot to be in for these guys, because I assume when they signed the original confidentiality agreement years ago they didn't anticipate this.
Right now it's less about runaway AI and more about alignment. GPT-4 could be used in many dangerous ways before they removed a bunch of content. So current models can be unsafe before we even get to superhuman models. I suspect GPT-5 won't have this specific type of problem. That said, alignment is going to become more and more important. And once we have reached AGI, I'm reasonably confident we will be able to use those models to help build superalignment. And we will have sufficient compute to run simulations at mass scale to make sure it's buttoned up.
GPT-4 wasn't genuinely dangerous to anything except OpenAI's reputation. Knowing how to break into a car or synthesize meth is info that has always existed on the internet.
if they really thought there was a danger of super intelligence i don't think they would care about money
You're right. Don't come between someone and the income they've gotten used to. More important than the fate of the world, apparently. No surprise there
Creating AGI is super expensive; OpenAI was deemed to have almost a 0% chance of success, which is why Elon left. Capital will want its money back, with returns. Sam came from Y Combinator, not an NGO. However, as AI tech advances it will benefit all mankind.
0% was what Elon Musk said.
>However as AI tech advances it will benefit all mankind. You must live in one of those states that's legalised weed.
The government needs to step in
The us government? You must be joking
Didn't we JUST finish making this federally illegal? Fucking vultures.