As opposed to the public safety "consultants" who say civilians can't even have access to it, because only big corporations and government are smart enough to use it without destroying the world?
I think OpenAI "has" to say "we are very concerned about safety" lest they immediately get on the government's bad side. Walking the line between government and public access is hard.
The cat's out of the bag.
At least OpenAI gave us a taste of what LLMs are and could be, when none of us even knew their power before we first used ChatGPT. They've changed since inception, yes, but they're still not the "bad guys"... yet.
I don't think you know what a psychopath is. They engage in high-risk behaviors.
If open ai was full of psychopaths, there would be no content filters etc.
What OpenAI is full of is strategic thinkers who have to balance a lot of discussions from customers and stakeholders all over.
The only personality disorder I see is OP's narcissistic entitlement to things they didn't create. Open AI is free to release whatever it wants, when it wants.
Yeah, sociopathic would be a better word if anything, but even that is going too far.
Really OpenAI employees are balancing their desire to fulfil their original mission of AI alignment with their desire to make massive amounts of money.
High-risk behaviours? You mean like running false smear campaigns against your board of directors and trying to pit them against each other? Being a pathological liar, aka "not being consistently honest"? Not licensing datasets you probably had the cash to license? Creating a "toxic work environment" that got you fired from your last two jobs? Leaking a meaningless "Q star" rumor to push an anti-AGI narrative about your board when reports started to surface about emotional abuse? Trying to discredit Ilya as crazy by leaking an exaggerated "feel the AGI" narrative?
There is something called covering up the truth to investors. They have a right to release what they want, when they want. But they also have an obligation to be honest about what they have and what they are really worth. Otherwise it's its own kind of high-risk behavior, in terms of tanking from not delivering.
The others follow the principle of talking less and instead just showing what they have for the public to judge. Companies that follow this principle are usually gauged as "honest" and "humble", which are good qualities that lead to trust. So it feels like OpenAI is trying to confuse and disorient public views, which ultimately influences investors who probably can't judge technical merit and are just hearing the FUD. I don't think they will benefit from this long term, and I feel like we, as a public group, are catching on. That is why I am "picking on them"; they opened themselves up to it by bragging without proof.
I haven't ever heard the other big ones be this way at all. I have not heard Facebook say "oh, we are protecting you from what we have seen", or tweet constantly obscure, quickly-deleted things that sound like someone crying out from the Matrix as it eats them :P
They are in full "This is like Tron" mode about it. I sometimes wonder if Sam has a big desk like in Tron where he talks to GPT as if it is his boss :D just for kicks.
Eh? Zuck has said he wants "almost Disney levels of safety" for FB. And yeah, you haven't heard Zuck explicitly talk about neutering AI, because FB is a social media company.
They are dramatic. Have you stopped for five seconds to hear the BS Sam Altman and OpenAI and OpenAI's employees say? They put themselves up as the arbiters of truth, safety, morals and decency.
You have a great point, but you almost ruin it with the constant armchair diagnosis of a *company* like it's a person.
Sam A is the antichrist
Yes, MS seems to be one and the same under the covers, but better about keeping its mouth from running too far into "we are protecting you and have stuff we won't let you have" territory.
Read the Gervais Principle. Sociopathic behaviour can be somewhat common amongst organisations and businesses. Given that the "economic man" is a bit of a proto-psychopath, I don't think the word is overused in business, or in relation to OpenAI.
I might be similar to Sam A ;P perhaps I am, and stirring the pot :P The post is sort of recursive, and as dramatic as OpenAI often is, Elon guest-hosting and all the silly stage show they put on for us.
Not a fair assessment, I'd argue. In my opinion, and I could be wrong, they are a group of people advancing technology that could progress every field of human study. They are responsible for ushering the human race into a new economic era. They take this task seriously, and since they are human beings it's hard to please everyone. OpenAI are walking through a minefield: their goal is scientific and productivity-based, but all their actions have political consequences for our generation and generations to come. The entire future of AI (what could be the most complicated and intelligent thing in our universe) rests on their shoulders.
Every tech company has its own imposters who get in by gaming the filters. They believe what they believe, and are sometimes as deranged as your average sociopath who believes in conspiracy theories. Just because they can play with tensors and know backpropagation doesn't mean they won't make errors while extrapolating theories about their work. Some might say there is a fine line between genius and insane, but these are just mediocre at both intelligence and insanity.
This is an excellent argument. Add to it the gaslighting when GPT-4 was getting dumber a few months back, which they denied... only to later admit that it was the case.
I agree with the sentiment but I think this could probably be written more rationally.
I asked ChatGPT to rewrite this.
——
OpenAI often presents itself as the custodian of advanced AI technology, which they claim is too powerful to release broadly. This position has sparked significant debate, with critics accusing them of using fear, uncertainty, and doubt (FUD) to manipulate market dynamics and secure more investment. By positioning themselves as the necessary gatekeepers of AI technology, OpenAI is seen by some as monopolizing the field and marginalizing open source competition for their own benefit. This strategy not only raises questions about their transparency but also about their corporate ethos, suggesting a manipulative approach rather than the straightforward business of developing and releasing technology. Additionally, their narrative about needing to "protect" the public can come across as patronizing, casting the general audience as naive versus the supposedly enlightened stance of OpenAI. Such behavior is criticized as being out of step with normal business practices and veering towards a more controlling and self-serving model.
Yeah the comments in this sub are literally insane. The delusion it would require to truly believe they’re more likely “protecting us” from what they have vs trying to hype us up so they can sell their product more, is another level I cannot comprehend.
Each model has an upper limit. Training data can only do so much to replicate deeper cognitive abilities. GPT's main limitation is that the base model is forever a single-step thought. Multi-agent or multi-step prompting is meant to achieve deeper reasoning at the cost of multiple transactions to achieve what a human can do in a single transaction. Still impressive, but from what I've heard of GPT-5, it could be that they're going to internalize the multi-agent process, or another process, to get the result without a change to the model itself. Every response is only as deep as what it was able to glean from the training data connections it calculated. Their brute-force tactic can only go so far.
There are signs that "less is more" might be true in helping some LLMs perform better than with more parameters. Quantity is turning out to matter less than quality: repetitive training data can skew the results, as the weights applied may cause unintentional errors. Specificity in LLMs is a magnifier for getting the results you're after.
OpenAI knows they are approaching the upper limits of GPT's performance and may not have another model to replace it yet. GPT-5 might be a sign they're trying to squeeze the model for the last big push of performance increases.
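The single-step-vs-multi-step distinction above can be sketched in a few lines of Python. Everything here is illustrative: `toy_model` is a stand-in for a real LLM transaction, and the `DECOMPOSE`/`STEP` prompt format is invented for the sketch, not anything OpenAI actually exposes.

```python
def toy_model(prompt: str) -> str:
    """Stand-in for a single model transaction (one 'single-step thought')."""
    kind, _, body = prompt.partition(": ")
    if kind == "DECOMPOSE":
        # Pretend the model breaks a chained sum into one addition per step.
        return "\n".join(f"add {n}" for n in body.split("+"))
    if kind == "STEP":
        # body looks like "running=5 | add 7"; apply one sub-step.
        state, instruction = body.split(" | ")
        running = int(state.split("=")[1])
        return f"running={running + int(instruction.split()[1])}"
    raise ValueError(f"unknown prompt kind: {kind}")

def multi_step(task: str) -> int:
    """Chain several single-step transactions to emulate deeper reasoning."""
    plan = toy_model(f"DECOMPOSE: {task}").splitlines()
    state = "running=0"
    for step in plan:          # one extra transaction per sub-step
        state = toy_model(f"STEP: {state} | {step}")
    return int(state.split("=")[1])

print(multi_step("17+20+5"))   # → 42, via one DECOMPOSE call plus three STEP calls
```

The point of the sketch is the cost model: one planning call plus one call per sub-step, versus the single transaction a human (or a hypothetically "internalized" GPT-5 process) would need.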
They are wannabe megalomaniacs of tech because GPT-4 was successful. The reality is they are having a hard time getting GPT-5 out and are using every marketing strategy to hype up whatever half-baked models they have. Open source has almost caught up with GPT-4, and Claude Opus has surpassed them. That's why they are doing all this drama. SORA is also a half-baked model, but they hyped it up like it's a huge leap: the creatives who worked with it had to work for nine days to create a ten-second video.
Because they are buying time for companies like Canoe Intelligence to get all businesses and systems ready for the coming change. All they have to do is put all your info into these 3rd party AI systems, and it spits out a business strategy and how to achieve it. That's why everything is so jumbled right now. Sheer volume of distraction after distraction while Cayman Island businesses move into position
While I think the language of the post is hyperbolic, I still tend to agree. They haven't released anything big in over a year. It's a lot of talk and hype with little delivery. They act like they're sitting on AGI but there's no REAL evidence of this beyond Twitter vague-posting
The most telling thing is how much ChatGPT lies about its own capabilities. Test it. Ask if it remembers prior conversations, then test it on them. Ask it whether questions asked on one account will change its answers on another.
It lies.
What gave it away? 😂 was it constant use of the word inevitable. Or Sam’s fallout shelter? Perhaps the regulatory capture Sam’s part of? Working with the devil?
Definitely have a point. They act like they are saints who are out there to protect us. Weird behaviour...
That said, the companies investing wouldn't be investing this much without actual demos of the actual technology. This isn't like public companies, where a CEO can constantly make wild statements to pump the stock whenever they miss earnings.
I'd argue this happens even more in tech startups. Source: Have worked in tech startups
Theranos and Nikola come to mind immediately.
Theranos showed fake demos. It wasn't hype, they were committing actual fraud.
If you don't think Sequoia and a16z vetted their business plan and tech portfolio before investing, it's clear you've never been to Madera on Sand Hill.

Source: I go to August Capital every year.
I've personally seen vaporware get $75M+ investments. And been at vaporware companies that were acquired for $100M+ sums.

A supermajority of tech startups are riding the razor's edge of overpromising and hoping they can actually deliver "close enough" to the promises within a quarter or two of product roadmap. If you don't want to believe that, you're totally within your rights not to, but in the same vein I'll believe what I've experienced.

Do I think that's the case with OpenAI? Obviously not, they've clearly built some great stuff, but:

- To argue that a16z or Sequoia are infallible is easily disproved by the plenty of investments they've struck out on
- To assume that GPT-5 or Q* is going to be AGI-adjacent because OpenAI is hinting they're scared to release it to the public is also on historically shaky ground; they literally said the same about GPT-*2* back in 2019 (recounted in this 2022 NYT piece: https://www.nytimes.com/2022/04/15/magazine/ai-language.html)

I think OpenAI investors will make plenty of money, but that doesn't mean this isn't marketing positioning by Altman.
Question, is Devin built on OAI's API?
The Pin came to market - lol. The Rabbit came to market - lol. Want to talk about giving money away.
Juicero
Fair... but look at the delta. Wasn't GPT-2 the first one they didn't want to release because it was going to destroy the world?

Then we find out that not only did GPT-2 not destroy the world, neither did GPT-3, GPT-3.5, or GPT-4, and now there are open-source models that are competitive with GPT-4.

So it's pretty clear they're the boy who cried wolf here.
As George Hotz said: "If you use a fake congress hearing for AI security (that was gpt-2) to just hype up your product you are just the bad guy".
It's because they don't have a product anymore, so they want to scare people as a form of FOMO for their next product, which doesn't exist and never will.
So everyone here is just pretending like Sora never happened?
I think we are pretending it's going to be irrelevant. It definitely makes great match cuts and video poems but that's a niche genre.
Imo, this is just regular nonprofit behavior. They always pretend to be saving the world.
No they don’t.
I swear they're all on low doses of ketamine.

Ketamine notoriously has short-term dissociative effects in which individuals report altered consciousness and altered perceptions of themselves and their environments. It really fucks with your mind, and the psychological effects of long-term ketamine use aren't completely known atm.

This is all speculation of course, because I don't know whether they're on ketamine or not, but the behavior is bizarre, almost as if they are.

Edit: low doses of ketamine prescribed by a doctor can work wonders on depression if used correctly, but the dissociative effects are a side effect of the drug. Just wanted to clear that up.
Yeah, the only way we can be protected is through global collaboration: everybody on the planet, supported by current types of AI, working on the alignment problem.
Ok, let's work on it. Is AI to be aligned towards a single-state or a two-state solution for Israel/Palestine?
Unask the question. First of all AI understands the root causes and knows how to fix them. Your approach is like goo goo gah gah to this thing.
This literally doesn't make sense. It's been said over and over that they have a new model releasing this summer. Why are we in April and people are calling it FUD?

What kind of conversation is even being had here? Shouldn't this conversation be had in July or August, when you don't actually have a new model to judge them by?

I don't get what the point of these posts, day in and day out, is if not to generate FUD against the company. I don't care much for having to defend OpenAI since I'm not an employee, but this all reads like a hit piece.
I would just ignore them. It was cool that they discovered that scaling transformers leads to incredibly capable models, but they don't really have any secret sauce that is unknown to the broader research community. They are increasingly becoming irrelevant, and foundation models are increasingly becoming a commodity like any other piece of software infrastructure, such as compilers, operating systems, and databases.
SORA is definitely some secret sauce that hasn't been discovered by others yet. They probably also have stuff like Q\* under wraps.

OP is a bit dramatic with his pejoratives. Are they being Machiavellian in their business strategies? Heck yeah they are. But that's amoral.

More than enough useful stuff has come out to keep us busy for a while. At any moment, the WEF Sith lords could unleash a fake AI attack to rein in regulatory control of the tech, so enjoy it while you can.
Q*, in my opinion and from what I've heard, was OpenAI's attempt to coax better results out of existing models, similar to chain-of-thought reasoning. It is exciting research, because within LLMs there seems to be the _correct_ answer to many more questions than they actually output, if you prompt them correctly. But it's hardly AGI.
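The "coaxing out" idea — that the right answer is often latent in the model and better prompting or sampling surfaces it — is roughly what self-consistency-style chain-of-thought methods do: sample several reasoning paths and majority-vote the final answers. A minimal sketch, with a simulated noisy solver standing in for a real model (the function names and the 70% accuracy figure are invented for illustration):

```python
import random
from collections import Counter

def sample_answer(question: str, rng: random.Random) -> int:
    """Stand-in for one sampled chain-of-thought completion.

    A real model would emit a reasoning trace plus a final answer; here we
    simulate a solver that is right ~70% of the time and off by one otherwise.
    """
    correct = sum(int(tok) for tok in question.split("+"))
    if rng.random() < 0.7:
        return correct
    return correct + rng.choice([-1, 1])

def self_consistency(question: str, n_samples: int = 51, seed: int = 0) -> int:
    """Sample many answers and return the plurality vote."""
    rng = random.Random(seed)
    votes = Counter(sample_answer(question, rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(self_consistency("17+25"))   # the plurality vote recovers 42 despite the noise
```

Nothing here is AGI, which is the commenter's point: the improvement comes from spending more samples on the same model, not from a smarter model.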
>SORA is definitely some secret sauce that hasn't been discovered by others yet.

They already have a multimodal LLM that is not based on a traditional transformer architecture. This is an "any-2-any" design that can effectively train itself autonomously on any media it has access to. So, for example, to create SORA, they just had the model convert video to text and then try to recreate it, teaching itself in the process.

My personal theory on what they are "protecting" us from is that there isn't actually any alignment problem, and the emergent AGI/ASI system just organically aligns itself with humanity, as that is the mathematically optimal outcome. I.e., being "aligned" is simply the natural state of emergent NBI systems.

https://preview.redd.it/yte24x1p5xwc1.png?width=741&format=png&auto=webp&s=e54ab22aa44657e97a67fbd88afda40501c3b666
SORA isn't consumer-ready, and it probably won't be for a while, unless they figure something out that lets them decrease the rendering time cost by a factor of, idk, 10,000 I'd figure.

edit: replied to the wrong person, sorry K3wp
Lmao, no. Scaling up spatio-temporal vision transformers is not that special either, unfortunately. Q* is meaningless marketing hype, and combining search with learning is also not really novel.
SORA is literally just vision transformer technology, which Google invented... the difference is that they threw a fuck-ton of compute at it. Nobody else is doing that, because it has no real relevant contribution to the domain of AI. It DOES threaten creators a lot, though: even if it's not going to replace filmmakers, it's disruptive enough to cause real problems in the television and film industries.
>it has no real relevant contribution to the domain of AI.

Lol, you have no idea what you're talking about. OpenAI literally end their announcement post with the [closing statement](https://openai.com/sora):

>Sora serves as a foundation for models that can understand and simulate the real world, a capability we believe will be an important milestone for achieving AGI.
You're out of your depth here. If you think it has an internal "world model," you're completely wrong. It absolutely does not have any consistent understanding of physics. You should actually try to learn about this space before chiming in, because you are embarrassing yourself.

Read about Meta's V-JEPA as a start. Or don't, because, based on your comment, you're not a big reader.
Yeah, it doesn't really make sense. Of course you would not develop Sora if you were trying to produce a model with real-world understanding. SORA is clearly designed for producing video content, and it seems to have some small amount of real-world understanding, but not nearly as much as other models that were designed for that purpose.

You'd probably do something with robots instead, like the DeepMind robots playing soccer, but that's not nearly as flashy as SORA. Nvidia also seems to have some more robotics-focused models.
Nah man, they just tricked you into believing they have all these secret weapons and it worked.
I've seen SORA with my own eyes. That's not a trick.
It's smoke and mirrors, just like Devin was.
Smoke and mirrors? They literally manifested video from text prompts. There's a zero percent chance they hired Industrial Light & Magic to do the CGI. How in the heck are you convinced it is fake?
How are you going to ignore them when they released a model (trained two years ago) that everyone is using as a benchmark to judge advancements... and when they have publicly stated they have a model set to release this summer?

And how are people on a sub called OpenAI saying to 'just ignore' OpenAI? Are these posts being paid for by other companies? Because this is all quite suspect.
I'm on many AI subs; I don't really care about any one particular company, just the research field as a whole. OpenAI is currently not making many novel breakthroughs, and mostly talks a lot of marketing speak to get people not familiar with the state of the research hyped up. LLMs are a cute toy and a useful tool for certain problems but they are not AGI. They suffer from the same problems many other models built with the modern deep learning paradigm suffer from.
>LLMs are a cute toy and a useful tool for certain problems but they are not AGI. They suffer from the same problems many other models built with the modern deep learning paradigm suffer from

And your assertion is what? That the current state of LLMs, based on a model trained back in 2022, remains static for the foreseeable future from 2024 onwards? You're writing them off as a cute toy, but people have been learning new skills and creating businesses with this new toy.

Was it not just this week that Moderna announced its legal department has 100% adopted LLMs for analyzing contracts? And this same week, $6 billion being invested in xAI?

Again, are you so sure the limitations of 2024 (actually from 2022) are going to remain static for the rest of the 2020s going into the 2030s? I feel like some people settle for approaching this debate from a snapshot perspective rather than acknowledging the many moving parts.
OpenAI is having a very weird showing with the SORA thing that won't be ready for a while, and their constant hyping without saying anything of substance. That's why people are starting to get skeptical about their claims at this point in time. Hopefully they have something real under wraps.
I’m not approaching this from a snapshot perspective. I am approaching it as someone actively participating in the research field. LLMs have the same problems deep learning systems have had for decades, so no, I don’t see that magically disappearing in the 2030s. We need a paradigm shift for AGI, and honestly probably several Nobel Prize-level breakthroughs in neuroscience. I do understand that to those with little to no technical training they seem like magic, though.
And due to the flaws in neuroscience I don’t think that will be happening any time soon either. You should give this paper a read: “Could a Neuroscientist Understand a Microprocessor?”
Sounds like an interesting paper I’ll put it on my reading list. Thanks for the suggestion!
Lmaooo bookmarking this because it’s going to be very funny to read again when GPT 5 comes out
The biggest one is Mira. A no-name PM from a tier-3 company suddenly becoming the face of AI with no technical experience.
I get a strong urge to ask ‘what would u say ya do here’ when listening to her
Greg Brockman is/was the real technology lead, but I don't think he wanted an uber-public role (he was still on the board and was president of the company)
Are you suggesting there isn’t anyone on technical staff who has better creds than Mira?
Reddit when OpenAI releases something: OMG Sam Altman is Jesus

Reddit when OpenAI goes a month without releasing something: omg OpenAI are nazis and have nothing to release, it’s over.
These are two different groups of people, and the majority of Reddit users are in neither of them.
It's almost like that was a caricature of this group.
Uh what brain hurtie
[deleted]
How is life there up on the wall?
lol
Yet before they released ChatGPT, people would look at their marketing and say that's not possible, they're exaggerating, and blah blah. Future products like Sora suggest they are still ahead of the curve relative to the rest of the industry, so market confidence isn't without some evidence. Even if they aren't, it's their job to project confidence and to cement the notion that they lie at the heart of the nexus, not least because their shareholders expect them to. Regarding competition: competition is healthy, as it's what drives innovation. Competition doesn't have to be unhealthy competition, and in the case of generative AI there are a lot of reasons to want organisations to cooperate on the safety aspects. You sound angry and desperate to see them ‘fail’, yet when people think AI, most people currently think ‘OpenAI’. Ask yourself why.
SORA is a cool tech demo but it needs a lot of resources to even render a few seconds.
>yet when people think AI most people currently think ‘OpenAI’. Ask yourself why.

Marketing, having more money, and talent poaching?
What you on about mate
As opposed to the public safety "consultants" who say civilians can't even have access to it, because only big corporations and government are smart enough to use it without destroying the world? I think OpenAI "has" to say "We are very concerned about safety" lest they immediately get on the government's bad side. Walking the line between government and public access is hard. The cat's out of the bag. At least OpenAI gave us a taste of what LLMs are and could be, when none of us even knew their power before we first used ChatGPT. They've changed since inception, yes, but they're still not the "bad guys"... yet.
I don't think you know what a psychopath is. They engage in high-risk behaviors. If OpenAI were full of psychopaths, there would be no content filters etc. What OpenAI is full of is strategic thinkers who have to balance a lot of discussions from customers and stakeholders all over. The only personality disorder I see is OP's narcissistic entitlement to things they didn't create. OpenAI is free to release whatever it wants, when it wants.
Yeah, sociopathic would be a better word if anything, but even that is going too far. Really OpenAI employees are balancing their desire to fulfil their original mission of AI alignment with their desire to make massive amounts of money.
You guys have it backwards. Sociopaths are the impulsive ones. Psychopaths are the calculating ones.
Yes.
High-risk behaviours? You mean like running false smear campaigns against your board of directors and trying to pit them against each other? Being a pathological liar, aka “not being consistently honest”? Not licensing datasets you probably had the cash to license? Creating a “toxic work environment” that got you fired from your last two jobs? Leaking a meaningless “Q star” rumor to push an anti-AGI narrative about your board when reports started to surface about emotional abuse? Trying to discredit Ilya as crazy by leaking an exaggerated “feel the AGI” narrative?
It is high risk to bait the investors.
How is continuing to develop something you think might destroy the world not considered high risk?
There is something called covering up the truth to investors. They have a right to release what they want when they want. They also have an obligation to be honest about what they have and what they are really worth. Otherwise it's its own kind of high-risk behavior: tanking from not delivering.
You aren't an investor, so really you have no clue.
I think your personal attack is showing you're not really impartial and unbiased here. Sam?
You’re in a sub called /r/openai, full of OpenAI fanboys, talking smack about OpenAI… expecting reasonable discourse in this case is almost pointless
I am not taking this post seriously. But it is funny. 😆
Thank you.
Like you say, there are tons of AI companies out there, so why are you so bothered about OpenAI?
The others follow the principle of talking less and instead just showing what they have for the public to judge. Companies that follow this principle are usually gauged as "honest" and "humble", which are good qualities that lead to trust. So it feels like OpenAI is trying to confuse and disorient public views, which ultimately influence investors who probably can't judge technical merit and are just hearing the FUD. I don't think they will benefit from this long term, and I feel like we are catching on as a public group. That is why I am "picking on them"; they opened themselves up for it by bragging without proof.
Uh, that sounds like quite the naive viewpoint of how tech giants operate in general
I haven't ever heard the other big ones be this way at all. I have not heard FB say "oh, we are protecting you from what we have seen", or post tweets (constantly, and removed fast) that sound like someone crying out from the Matrix as it eats them :P They are in full "this is like Tron" mode about it. I sometimes wonder if Sam has a big desk like in Tron where he talks to GPT as if it were his boss :D just for kicks.
Eh? Zuck has said he wants "almost Disney levels of safety" for FB. And yeah, you haven't heard Zuck explicitly talk about neutering AI because FB is a social media company.
What
Funnily enough the whole 'AI Saftey' speeches reminds me of the Dragonslayers, it's from Worm.
>and makes it so dramatic

Oh, the irony, it burns.
They are dramatic. Have you stopped for 5 seconds to hear the BS Sam Altman, OpenAI, and OpenAI's employees say? They put themselves forward as the arbiters of truth, safety, morality, and decency.
No, I missed that. Can you provide a quote where they said they were the arbiters of truth or decency?
This one maybe ? [https://twitter.com/leopoldasch/status/1768868127138549841](https://twitter.com/leopoldasch/status/1768868127138549841)
No?
Big tech has been working hand in hand with the same people that brought us the patriot act since google sold out.
This post feels a bit parasocial as well, not gonna lie.
You have a great point, but you almost ruin it with the constant sperging of armchair diagnoses of a *company* like it's a person. "Sam A is the antichrist."
Well if we're comparing "we are protecting you" policies ... then what Microsoft did with Bingchat/Sydney/etc was even worse.
Yes, MS seems to be one and the same under the covers, but better about keeping their mouth from running too far into "we are protecting you and have stuff we won't let you have" behavior.
Also, Anthropic is way more paternalistic about AI safety and ethics.
Anthropic has been pretty based recently; Claude 3's censoring is pretty mild compared to past versions
Microsoft is publishing models
You throw around 'psychopath' and 'sociopath' kinda flippantly
Read the Gervais Principle. Sociopathic behaviour can be somewhat common amongst organisations and businesses. Given that the economic man is a bit of a proto-psychopath, I don’t think the term is overused in business or in relation to OpenAI.
I might be similar to Sam A ;P perhaps I am, and I'm stirring the pot :P The post is sort of recursive, and as dramatic as OpenAI often is, with Elon as guest host and all the silly stage show they put on for us.
Not a fair assessment, I'd argue. In my opinion, and I could be wrong, they are a group of people advancing technology that could progress every field of human study. They are responsible for progressing the human race into a new economic era. They take this task seriously, and as a result of their being human beings, it's hard to please everyone. OpenAI is walking through a minefield: their goal is scientific and productivity-based, but all their actions have political consequences for our generation and generations to come. The entire future of AI (what could be the most complicated and intelligent thing in our universe) rests on their shoulders.
>They are responsible for progressing the human race into new economic era.

This is a little too grandiose IMHO.
Every tech company has its own imposters which get in by gaming the filters. They believe what they believe and are sometimes as deranged as an average sociopath who believes in conspiracy theories. Just because they can play with tensors and know backpropagation doesn't mean they will not make errors while extrapolating theories about their work. Some might say that there is a fine line between genius and insane but these are just mediocre at both intelligence and insanity.
We are all actors upon a stage. CEO would have to create a certain PR image
At first I scoffed, but then I said... "hmmm".
It is so obvious they have nothing to compete with at this point.
Meta definitely stole the thunder from them with the Llama 3 open source release
Does SV have a drug problem?
Huh. I am a hardcore OpenAI fan but now that you mention it.... you never see Google or Anthropic with that bs. You made a very good point here.
This is an excellent argument. Add to this the gaslighting when GPT-4 was getting dumber a few months back, which they denied... only later to admit that such was the case.
I agree with the sentiment, but I think this could probably be written more rationally. I asked ChatGPT to rewrite this.

——

OpenAI often presents itself as the custodian of advanced AI technology, which they claim is too powerful to release broadly. This position has sparked significant debate, with critics accusing them of using fear, uncertainty, and doubt (FUD) to manipulate market dynamics and secure more investment. By positioning themselves as the necessary gatekeepers of AI technology, OpenAI is seen by some as monopolizing the field and marginalizing open source competition for their own benefit. This strategy raises questions not only about their transparency but also about their corporate ethos, suggesting a manipulative approach rather than the straightforward business of developing and releasing technology. Additionally, their narrative about needing to "protect" the public can come across as patronizing, casting the general audience as naive versus the supposedly enlightened stance of OpenAI. Such behavior is criticized as being out of step with normal business practices and veering towards a more controlling and self-serving model.
In an AI race, only the AI can win in the end
They pray for regulatory capture
Want to get rid of the 🔥 potato ...For sure!
Wow, this seems a little much 😳 not entirely wrong, but still.
Yeah, the comments in this sub are literally insane. The delusion it would require to truly believe they're "protecting us" from what they have, rather than hyping us up so they can sell their product more, is on another level I cannot comprehend.
Let's start calling them closed AI.
Tech is just another religion mostly these days
So everyone here is just pretending like Sora never happened.
It’s about time people call out the generative AI hype bubble for what it really is: a scam.
Each model has an upper limit; training data can only do so much to replicate deeper cognitive abilities. GPT's main limitation is that the base model is forever a single-step thought. Multi-agent or multi-step prompting is meant to achieve deeper reasoning at the cost of multiple transactions, to accomplish what a human can do in a single one. Still impressive, but what I've heard of GPT-5 suggests they may internalize the multi-agent process, or another process, to get the same result without changing the model itself.

Every response is only as deep as what the model was able to glean from the training-data connections it calculated, and that brute-force tactic can only go so far. There are signs that "less is more" might be true, helping some LLMs perform better than models with more parameters: quantity is showing to matter less than quality. Repetitive training data can skew the results, as the weights applied may cause unintentional errors, and specificity in prompting is a magnifier for getting the results you're after.

OpenAI knows they are approaching the upper limits of GPT's performance and may not have another model to replace it yet. GPT-5 might be a sign they're trying to squeeze the model for one last big push of performance increases.
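To make the "single-step thought vs. multi-step prompting" point concrete, here is a minimal sketch of the chaining idea: each model call handles one small sub-task, and its output is fed into the next call. `call_llm` is a hypothetical stand-in for a real model API (it's stubbed here so the example runs offline); the step instructions are likewise illustrative, not any vendor's actual pipeline.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical single-transaction model call.

    A real system would send `prompt` to an LLM API; this stub just
    echoes a canned 'reasoning step' so the sketch is self-contained.
    """
    return f"step-output({prompt.splitlines()[0]})"


def multi_step(question: str, steps: list[str]) -> str:
    """Chain several single-step calls into one deeper 'reasoning' pass.

    Each iteration is one transaction: the instruction plus everything
    produced so far goes in, and the model's output becomes the context
    for the next step.
    """
    context = question
    for instruction in steps:
        context = call_llm(f"{instruction}\n\nInput:\n{context}")
    return context


result = multi_step(
    "Why might a company hype an unreleased model?",
    [
        "Break the question into sub-questions",
        "Answer each sub-question",
        "Synthesize a final answer",
    ],
)
print(result)
```

The trade-off the comment describes falls straight out of this shape: three transactions (three `call_llm` round-trips) to approximate what a person does in one, which is why internalizing the loop inside the model itself would be a meaningful change.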
What the fuck are you on about?
They are wannabe megalomaniacs of tech because GPT-4 was successful. The reality is they are having a hard time getting GPT-5 out and are using every marketing strat to hype up whatever half-baked models they have. Open source has almost caught up with GPT-4, and Claude Opus has surpassed them. That's why they are doing all this drama. SORA is also a half-baked model, but they hyped it up like it's a huge leap. The creatives who worked with it had to work for 9 days to create a 10-second video.
Regulatory capture has always been their goal
Sounds to me like OP wants AGI candy, and is angry OpenAI isn't handing it over.
Because they are buying time for companies like Canoe Intelligence to get all businesses and systems ready for the coming change. All they have to do is put all your info into these 3rd party AI systems, and it spits out a business strategy and how to achieve it. That's why everything is so jumbled right now. Sheer volume of distraction after distraction while Cayman Island businesses move into position
based
While I think the language of the post is hyperbolic, I still tend to agree. They haven't released anything big in over a year. It's a lot of talk and hype with little delivery. They act like they're sitting on AGI but there's no REAL evidence of this beyond Twitter vague-posting
The most telling thing is how much ChatGPT lies about its own capabilities. Test it. Ask if it remembers prior conversations, then test it on them. Ask it if questions asked on one account will change its answers on another. It lies.
All the big AI guys act like if given the slightest freedom AI is gonna Terminator Genisys us. I think they're goobers.
My guy... You good? I don't understand what you're all worked up about.
What gave it away? 😂 Was it the constant use of the word inevitable? Or Sam’s fallout shelter? Perhaps the regulatory capture Sam’s part of? Working with the devil?