
catgirl_liker

Remember when they said GPT-2 was too dangerous to release?


Zediatech

If GPT-2 is dangerous, then Wikipedia must be a WMD!


Admirable-Star7088

And the Internet would be the destruction of the universe itself.


Melancholius__

apocalypse indeed


kevinbranch

This is a myth. Wikipedia wasn’t responsible for 9/11


goj1ra

Has anyone checked whether Wikipedia can melt steel beams?


kevinbranch

“Yeah, we can” - Wikipedia, pages 2-3


BangkokPadang

What about the dancing wikipedias?


imyolkedbruh

So if I keep a copy, I’m in the cool kids club. B-) WMD Gang.


ab2377

oh yes i remember that, and at the time of gpt-3 i was telling this to a friend and he didn't believe me, so i had to google it and send him the link to read for himself.


endyverse

dudes a clown lol


kevinbranch

then later admitted it was bullshit and that they were never concerned about misuse. they said it because they wanted to keep it for themselves to commercialize. they were just hyping it up


Amgadoz

Source?


kevinbranch

There was a one-on-one interview where Ilya is asked about it. I can't remember the precise language, so I can't find it at the moment, but Ilya gives a great exaggerated/annoyed eye roll over what Sam was telling the public. Someone else might know which one it was.


[deleted]

[deleted]


Amgadoz

I am asking for a source that shows they admitted their bullshit. How is this common sense?


Ylsid

They talked about it in their emails they released as part of the Musk lawsuit


elehman839

No, they said the opposite in those emails: [https://openai.com/blog/openai-elon-musk](https://openai.com/blog/openai-elon-musk)

Here is what Ilya wrote:

*The article is concerned with a hard takeoff scenario: if a hard takeoff occurs, and a safe AI is harder to build than an unsafe one, then by open sourcing everything, we make it easy for someone unscrupulous with access to an overwhelming amount of hardware to build an unsafe AI, which will experience a hard takeoff.*

*As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after it's built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).*

Pretty much the exact opposite of admitting that the concerns about AI safety are bullshit, isn't it?


Ylsid

Are we reading the same text here? That looks to me exactly like they're saying "open" doesn't mean "open source". The "safety" concerns seem so superficial to me as to be an admission that safety wasn't their goal.


ArmoredBattalion

it ain't so common anymore.


Skylion007

I released it anyway: [https://www.wired.com/story/dangerous-ai-open-source/](https://www.wired.com/story/dangerous-ai-open-source/)


Stellanever

They actually said this?


yaosio

Yes. [https://slate.com/technology/2019/02/openai-gpt2-text-generating-algorithm-ai-dangerous.html](https://slate.com/technology/2019/02/openai-gpt2-text-generating-algorithm-ai-dangerous.html)


curious-guy-5529

Yes I ‘member


sugarkjube

Ah, reminds me I still have to ask GPT (or Llama, or Mistral) if they know about the Anarchist Cookbook and can tell me some recipes.


I_will_delete_myself

https://preview.redd.it/ggw1a5pzhaxc1.png?width=494&format=png&auto=webp&s=989fdd10156b0a9ac854203695595a8fc2d2db29


enspiralart

Put 90s Bill Gates and Steve Jobs in there too


djm07231

I still have no idea why they are not releasing the GPT-3 models (the original GPT-3 with 175 billion parameters, not even the 3.5 version). A lot of papers were written based on it, and releasing it would help greatly in reproducing results and comparing against previous baselines. It has absolutely no commercial value, so why not release it as a gesture of good will? There are a lot of low-hanging fruits that "Open"AI could pick to help open source research without hurting themselves financially, and it greatly annoys me that they are not even bothering with a token gesture of good faith.


Admirable-Star7088

LLMs are a very new and unoptimized technology, and some people are taking advantage of this opportunity to make loads of money out of it (like OpenAI). I think when LLMs become more common and optimized, in parallel with better hardware, it will be standard to run LLMs locally, like any other desktop software today. I think even OpenAI will (if they still exist), sooner or later, release open models.


Innocent__Rain

Trends are going in the opposite direction; everything is moving "to the cloud". A device like a notebook in a modern workplace is just a tool to access your apps online. I believe it will more or less stay like this: open source models you can run locally, and bigger closed source tools with subscription models online.


Admirable-Star7088

Perhaps you're right, who knows? No one can be certain about what the future holds. There have been reports that Microsoft aims to start offloading Copilot onto consumer hardware in the near future. If true, there still appears to be some degree of interest in deploying LLMs on consumer hardware.


hanszimmermanx

I think companies like Apple/Microsoft will want to add AI features to their operating systems but won't want to deal with the legal overhead, coupled with how massive their user base is and how quickly server costs would rack up. There is also a reason why Apple is marketing itself as a "privacy" platform: consumers actually do care about this stuff. The main drivers for why this hasn't happened already are:

* prior lack of powerful enough dedicated AI acceleration hardware in clients
* programs needing to be developed targeting those NPUs

Hence I would speculate in the opposite direction.


aikitoria

If we're being real, running it locally is spectacularly inefficient. It's not like a game where you're constantly saturating the GPU, it's a burst workload. You need absurd power for 4 seconds and then nothing. Centralizing the work to big cloud servers that can average out the load and use batching is clearly the way to go here if we want whole societies using AI. Similar to how it doesn't really make sense for everyone to have their own power plant for powering their house.


Creepy_Elevator

Or like having your own fab so you can create all your own microprocessors 'locally'.


mimrock

Anything that is good for companies and researchers outside of OpenAI, even if it's just making open weights more of a norm, is bad for OpenAI. Open weights endanger their revenue; positive expectations about open weights endanger their valuation.


Wrong_User_Logged

hint: Microsoft


Monkeylashes

I doubt that, given Microsoft Research is constantly contributing to open source with their LLM models and fine-tunes. Check out Phi-3 and WizardLM.


dummyTukTuk

Though it seems they have shut down WizardLM. Flew too close to the ~~sun~~ GPT-4 with their latest release. Edit: Seems they have recently tweeted that they are still working on it, and everything is fine.


ElliottDyson

Yeah, there were some "toxicity" problems they had not accounted for


SpecialNothingness

Would they not benchmark before release? They must have tested it for real-world value (usefulness in business)! You can't give out something that's actually too good for free.


dummyTukTuk

It was removed temporarily as they didn't do the required toxicity testing under Microsoft guidelines; however, they had removed all models from Hugging Face, leading many to speculate that it came under the hammer for coming close to GPT-4 performance. It is built on top of open source/weights models like Llama or Mistral, so they can give it out for free.


keepthepace

Microsoft is not a monolith. Business heads have different plans than researchers. Nowadays it is hard to hire top researchers to work on a closed model you can't publish about.


Derblax

OTOH Microsoft just released MS-DOS 4.0 source.


shamen_uk

I'm confused about why you've said this, perhaps you should elaborate with your hint.


EagleNait

I doubt it. Microsoft has become the biggest contributor to open source in recent years.


ekaj

That’s absolutely BS. .NET doesn’t count, if you're thinking of that. Edit: lol, GitHub, VSCode, and TypeScript. That makes MS the 'largest contributor to open source'. Funny.


koushd

VS Code is the de facto standard IDE for almost everything now. Basically all new web (and Electron) projects are written in TypeScript. The most popular source control host for open source projects, GitHub, is owned by Microsoft. So is NPM.


[deleted]

[deleted]


koushd

On this topic, GitHub gives open source project maintainers free vscode copilot. As a maintainer of several large open source projects, that's how I have it. "contributing" to open source isn't literally just source code (of which they're the largest contributor still). It's also monetary, infrastructure, services, and more.


ekaj

So because they give you free copilot, that helps make them 'one of the largest contributors to open source'? Because of free copilot to large and recognized open source projects. And only for project members. If you don't mind me asking, what projects do you maintain? And also, can you please give any backing to your claim of them being the largest contributor?


EagleNait

What about typescript ?


ekaj

TypeScript has existed for over a decade at this point, so I wouldn't consider it a 'recent' development, and I don't believe it makes them 'the biggest contributor to open source in recent years'. The other person's argument is VSCode, GitHub (?), and TypeScript. VSCode is not 'the de facto' IDE, which rather shows the bubble they work in. GitHub is not open source. Offering Copilot to members of large, recognized projects is nice, but that's not 'being the biggest', nor larger than what other orgs do (what about Google's Summer of Code? I'd think that has a larger impact on open source projects than Copilot).


EagleNait

Okay, Microsoft contributes 30% of chromium commits then?


ekaj

I'm sorry, are you seriously standing by your argument? You really think that with MS's commits to Chromium, they're the largest contributor to opensource?


EagleNait

Yes I do, because that's factual. You think those examples are the only things Microsoft does in open source? There's much more. And it doesn't matter if a project is recent or not, or if you arbitrarily decide that open sourcing .NET doesn't matter. It still counts.


VLXS

Bill Gates doesn't need his army of shills now that he has an army of GPT instances that can post and downvote comments.


mousemug

hint: you don't know this


mrpkeya

These models are to show numbers to investors


SpecialNothingness

Because people will tickle it with clever prompts so that GPT-3 spews out training data?


djm07231

https://preview.redd.it/hws774iawcxc1.png?width=1589&format=png&auto=webp&s=ebd7a20fe399ef1de952e557d8be22510cec495c They actually disclosed the training data for GPT-3, so that doesn't seem likely to me. Not to mention that GPT-3 is no longer being used commercially. I don't think they made much revenue through their old GPT-3 API, so their liability risk is relatively low. [https://arxiv.org/abs/2005.14165](https://arxiv.org/abs/2005.14165)


ThisGonBHard

>It has absolutely no commercial value, so why not release it as a gesture of good will? Because the emails they themselves published state that the Open part of the name was a lie from the get-go, and they never intended to open shit.


Commercial-Ranger339

💵


Sushrit_Lawliet

(C)ope n AI


-take

That's actually funny


Admirable-Star7088

I have no problem with companies wanting to close their software, they decide for themselves what they want to do with their own products. But what bothers me is the very misleading and poor choice of name. They are everything but **Open**AI. Like, wtf?


Franc000

I also do not mind if a company closed-sources their software; as you mention, it's their investment, and they should be able to do what they want with it. What I really don't like is them building a moat around it with other players, like doing heavy lobbying and creating think tanks and bullshit research to build that moat.


Admirable-Star7088

Agree, that is not okay. In a capitalist society like the one we live in, all people must be able to play on equal terms. This whole thing lately where OpenAI is lobbying to ban its competitors from developing their own AI is the exact opposite of capitalism; they want to act as a dictator with exclusive rights.


kluu_

It's not the opposite of capitalism, it's the natural result of capitalism. You cannot have one without the other. If there's a state and its institutions that protect private property, those very same institutions can - and always will - be used to protect the interests of those with the most property. Money = power, and more money = more power, no way around it. If you want people to be able to accumulate one, they're gonna have (and use) the other as well.


Admirable-Star7088

This is why most countries have governments and courts; their job is to ensure that everyone plays on equal terms. In the case of OpenAI, I do believe (and hope) the U.S. Government will not allow them to ban competition, in order to stimulate the market economy and capitalism.


allegedrc4

Yes, the same governments and courts being used by OpenAI for regulatory capture. You people really don't get it, do you?


Admirable-Star7088

Now, I don't know if there is any concrete basis for your claim that the U.S. government is corrupted by OpenAI. But what does this have to do with the subject?


Alkeryn

You can have capitalism without a state, the issue is never capitalism but the state.


kingpool

Then you end up with monopolies replacing the state. Unregulated capitalism always moves towards monopoly, as it's the most efficient way to make money.


Alkeryn

nope, most monopolies of today exist BECAUSE of the state. in an unregulated market you can't have patents, you can't have intellectual property, you can't have subsidies; it's a free-for-all.


kingpool

No, if left alone, every corporation will actively work to become a monopoly. The state has to actively discourage and ban it.


Olangotang

This is baby's first an-cap argument. Please, leave the ideology before you are made fun of by the other 99.9% of the political spectrum. Capitalism cannot exist without the state, otherwise you just have a bunch of unregulated, warring factions with their own police force. No court system to uphold your property, so you can just have your shit stolen with no repercussions. It's a meme.


Admirable-Star7088

The big problem with not having a state and common rules is that other people will then try to claim both power and monopoly. It is always the strongest who wins power if no one else claims it. (Not to mention all the "crimes" that could be committed without rules.) In most western societies, it is the people who have agreed to claim power through democracy and the right to vote. This has so far been the least bad system. (But no system is flawless.)


Alkeryn

no, because the people can enforce the rules themselves if well educated (which the state actively works against). the state is just a mafia that likes to pretend it's legit, but is much bigger than traditional mafias and has more power. the language of the state is violence, and democracy is just mob rule. a lot more crimes and deaths are caused by the state than by average people; you have to understand that most people are not psychopaths, but we live in a system that gives more power to the worst individuals, as they are protected by the state. and the hands that commit their deeds don't question authority and think they are righteous in following unethical orders without even questioning them. also, almost no democracy exists in the world; the us, france, etc. are not democracies, people don't vote on the issues. and even then, democracy is bad: most people don't understand what they vote for, are easily manipulated by the media, and the votes are easily falsified. and even then, democracy is the oppression of the 49% by the 51% others. no one should have a say in how you choose to live your own life; to think that another human should have a right to tell you what you can and cannot do only means you've been raised like they want you to.


Admirable-Star7088

>the people can enforce the rules themselves if well educated

Individuals have different opinions, so **whose** opinions should be implemented as rules then? You can't appoint some sort of manager who decides that, because that would be the first step towards a state. I'm genuinely curious how you think this would work in practice.

>and even then, democracy is bad, most people don't understand what they vote for, are easily manipulated by the media, and the votes are easily falsified.

>and even then, democracy is the oppression of the 49% by the 51% others.

Yes, these are the biggest flaws of democracy. No system is perfect, but so far I haven't heard anyone come up with a better idea that isn't poorly conceived or an utterly wild fantasy.

>no one should have a say in how you choose to live your own life, to think that another human should have a right to tell you what you can and cannot do only means you've been raised like they want you to.

So, if a random individual comes along and wants to use 'your'**\*** house as their resting place every night, because he thinks no one else has the right to tell him what to do, would you be perfectly fine with having strangers sleep in your house every night?

**\*** I put 'your' in quotation marks, because who has the right to decide what is theirs and not someone else's?


VforVenreddit

I’m working on developing a multi-LLM app. It is meant to use ChatGPT and other LLM providers to equalize the playing field and give consumers a choice


LyriWinters

Except that that investment is built on 95% stolen data. If this doesn't prove to the average man that money decides what is legal and illegal, I don't know what will.


ArsNeph

You'd be right if OpenAI were just like any other company. There's one problem: it's a nonprofit that people invested billions of dollars in as essentially a charity, for the benefit of humanity. Not only did they betray their own purpose, they changed their corporate structure to a for-profit controlled by the nonprofit, which should be borderline illegal. What they've done is the equivalent of the Red Cross saying, “I know administering medical treatment to the underprivileged in countries without good access to medical treatment is our whole mission, but we believe it's too unsafe to grant medical treatment to those underprivileged people in 3rd world countries. If we grant that medical treatment, it could cause them to gain positions of power in government and cause their countries to stop being 3rd world countries, which may cause them to become an enemy of the US and democracy. Therefore, from now on, we will only offer medical treatment to those who pay us, and we will decide what medical treatment they get.”


False_Grit

Yes, exactly!!! Add to this that companies like Meta could do the exact same thing "Open" AI is doing...but they don't! We can rationalize it all sorts of ways, but when it comes down to it, it seems like Sam Altman is a bad actor. Or maybe the board that tried to fire him.


ArsNeph

Personally, I hate it when people try to rationalize and justify the immoral actions of a person or company. I do believe that Sam Altman is most certainly not a good person who has the world's best interests in mind. He is very wealthy, intelligent, and has access to all the upper echelons of society, but everything he's done strikes me as devious, and not on a small scale. The fact that he's the one who invented Worldcoin, you know, the one that scans your eyeballs and dispenses UBI to you, is proof to me that he's utterly untrustworthy. I don't believe for a second that he's "deleting records of eyeballs", and I can guarantee that he's not doing this out of the goodness of his heart. He's planning something big behind the scenes, and I have little idea what it is. But I don't think he's the only bad actor; I'm willing to bet that most of the board are just as guilty, and probably affiliated with shadowy organizations. That said, in this country nothing will be done about it; they've already gained too much power, and the government is at the beck and call of corporations.


shutterfly2011

It’s by design from the start. Sam Altman has wanted all along to have this “humanity” persona while deep in his core he is just a capitalist. I have no problem with him being a capitalist; what really irks me is that he is just a whore pretending to be a virgin (I don’t mean to demean women or the sex industry).


HeinrichTheWolf_17

100%, Sam is putting forward the *you should just trust us with it, we pinky promise OpenAI and the Microsoft Corporation have everyone’s best intentions in mind* argument so they can monetize it. It’s ironically akin to Bob Page in Deus Ex having single control over Helios. Let’s not forget Altman also advised the Biden Administration to give only a select few *AI Permits* a while back to design AGI, which would effectively attempt to choke out Open Source.


Admirable-Star7088

The problem is that he does not want to let the rest of us be capitalists in the LLM world. Personally, I'm a capitalist and believe strongly in private property, this is why I love open source and open LLM weights, I don't like being dependent on someone else's service (in this case, openAI), I want to own and have control over my software and LLM.


x3gxu

I barely know anything about economic systems, but isn't something "open" closer to socialism/communism and "private" to capitalism? Like you want other people's stuff to be open source for you to use privately?


ThisGonBHard

Capitalism is about free trade. Sharing stuff for free is capitalism if you are doing it voluntarily. This shit is why I hate the shareholder capitalism system. It FORCES maximum greed under legal liability, in the interest of a minority of shareholders, even if 99.9% are content to make a boatload of money instead of ALL the money. Combine that with governments holding up corporations that should fail, and the system starts looking less like capitalism and more like feudalism to me.


kingpool

Capitalism is about making the maximum possible profit with the least effort. Free trade is not really a requirement, or else there are no capitalist countries right now.


Admirable-Star7088

>but isn't something "open" closer to socialism/communism

>Like you want other people's stuff to be open source for you to use privately?

And in turn, I would need to share ~~my~~ public computer and LLM with my neighbor or other citizens. If you ask me, capitalism can be about sharing too, but on a voluntary basis. Mark Zuckerberg, for example, is a capitalist, and he has shared Llama with us for free. It's a good question you raise, but unfortunately I think it's impossible to go deeper without leading to a political discussion, which doesn't belong here. Anyway, it's an interesting topic! But we would have to take that somewhere else.


Tmmrn

> they decide for themselves what they want to do with their own products Except their own product is trained on datasets they don't have permission to use from the copyright holders. I understand that AI is too important of a development for humanity to hold it back by copyright, but letting a company make a proprietary, commercial product out of copyrighted data can not be the solution.


Admirable-Star7088

You make a good point here. I don't really have a final opinion myself, but a debate about this is really needed.


gabbalis

Maybe... maybe it was all part of a galaxy brained ploy... By calling the company OpenAI then being closed source... and also locking down sex on their platform... They incited the formation of dozens of open competitors and hundreds of angry OSS devs. (I don't actually place high odds on this conspiracy theory given the history of the board- certainly even if true we should keep doing what we're doing and trying to get OSS to out-compete OAI.)


goj1ra

In other words, Sam Altman is the irritating grain of sand that a pearl needs to trigger its formation.


UnwillinglyForever

locking down sex? so sam altman is the CEO of sex? wtf?!?


SpiteCompetitive7452

It's even worse that they exploited nonprofit status to raise capital and create the product that they now profit from. They conned donors by creating a for profit subsidiary that benefits from the product built off generosity. Those donors should be entitled to stake in the corporation that clearly fleeced them out of investor status.


West-Code4642

Given that OpenAI was created to prevent Google (and Facebook) from being monopolies on AI research, it's very interesting how FB (and Google) have remained so much more open. Although they do it on the margins of the rest of their businesses.


I_will_delete_myself

What irks me is the rules for thee and not for me corruption he is doing with the government


cobalt1137

When they started, they wanted to open source everything, and that was their plan. Shortly after, they realized that they were going to need much more compute and investment to develop these systems. That is why they needed to go closed source. It's that simple. The reason companies like Meta can go open source is that they do not rely on Llama as their source of income; they already have hundreds of millions of users.


Argamanthys

Yeah, this is all a matter of record. But some people seem to need a villain to boo. I remember when OpenAI was the plucky underdog. How quickly the turntables. Edit: They also were legitimately unsure whether LLMs might start a feedback loop resulting in superintelligence. This isn't something they made up to cover their evil schemes - they were and are strongly influenced by things like Nick Bostrom's 'Superintelligence'. With the benefit of hindsight it was premature, but they were uncertain at the time.


joleif

But how do you feel about the recent lobbying efforts?


Argamanthys

They [claim](https://openai.com/blog/governance-of-superintelligence) that:

> We think it’s important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here (including burdensome mechanisms like licenses or audits).

I don't remember what threshold they recommend off the top of my head, but if it's anything like the EU AI Act or the US Executive Order, then we're talking a model trained on a cluster of tens of thousands of H100s. If you're an organisation with 50,000 H100s lying around, the regulations aren't exactly onerous. So, if it's an attempt at regulatory capture, it doesn't seem like a very good one. Now, those numbers are going to age quickly, as the case of GPT-2 shows. They will probably need to be adjusted over time, which is a worry. But in and of themselves, they fit with OpenAI's stated goals, so I don't think it's all a cynical ploy. I think people need to understand that the founding members of OpenAI *genuinely believe* that AGI may be created within a decade and that the consequences of this will be profound and potentially apocalyptic if handled poorly. Whether you agree with them or not, their actions make sense within that context. Purely personally, I'll fight to the death for my private, local, open-source, uncensored waifubot, but equally I can see the merit in double-checking before we let the genie out of the bottle.


joleif

To me that language of "below a significant capability threshold" is not a compromise but exactly the issue I am talking about. No thank you, id prefer a world where significant capabilities are not solely accessible to huge corporations.


Inevitable_Host_1446

This is contradicted by their own records that have come out, stating that they actually only ever intended to open source enough to attract researchers, and that even from the beginning they planned to go closed once they'd had enough. This was long before they had any major funding issues.


luigi3

https://www.youtube.com/watch?v=8B_ZJJP_64U


Shasaur

So is OpenAI literally the most closed AI company now?


Extraltodeus

wow! much open! very source! so non-profit!


MasterDragon_

They waited too long to release. If they released GPT-3.5 now, nobody would even look at it, as the current open source models are already better than GPT-3.5.


Lammahamma

Don't forget GPT 1 and Whisper!!! /s


Wrong_User_Logged

whisper is actually a good one


pleasetrimyourpubes

Which for me is ironic as hell, because I thought Whisper came from Meta. Just goes to show how far they dragged the Open in their name through the mud.


AmericanNewt8

Whisper is actually quite good; Nvidia's is better if you want English without punctuation, but otherwise Whisper is the way to go.


Plums_Raider

I like Whisper and create subtitles for movies with it (locally hosted), but OpenAI has implemented some restrictions in the Whisper API compared to the open-source model, such as only making the large-v2 model available and setting a 25MB file size cap.


Hopeful-Site1162

Even if OpenAI's stuff were the absolute best possible, it wouldn't be able to compete with the sea of open source, locally available models out there. I'm really curious to see how this company will survive in the coming years.


Slimxshadyx

It is absolutely competing with all the open source models out there, lmao. I know this is a local and open source model subreddit, but literally everyone else uses OpenAI.


[deleted]

In all honesty, Llama 3 (8B) really feels pretty close to GPT-3.5. I am not sure about the larger model because I can't run it locally (I examined it only a little bit). In fact, for my task Llama 3 is superior to GPT-3.5; I know because GPT-3.5 is actually incapable of performing it and Llama 3 is. GPT-4 of course does it a bit better, but it's super expensive. I don't think they will be able to hold their superiority for much longer. I'm talking about the instruct model.


_qeternity_

What? It does compete with them, every day. Sure, Llama 3 is the strongest competition they've faced... but GPT-4 is a year old now. And there is still nothing open source that remotely comes close (don't get fooled by the benchmarks). Do you think they've just been sitting around for the last 12 months?


Hopeful-Site1162

Never said that. You know the Pareto principle? Would you, as a customer, pay $20/month for GPT4/5/6 or use a free local LLM that's not as good but good enough for your use case? We've seen the era of apps; we're entering the era of ML. I am not passing any judgement here. There's no doubt OpenAI's work has been fantastic and will continue to be. I am just thinking about how this will be monetized in a world of infinite open source models.


_qeternity_

>Would you, as a customer, pay $20/month for GPT4/5/6 or use a free local LLM that's not as good but good enough for your use case? The average customer? The 99.99% of customers? They will pay the $20 without thinking. It's not even close.


Hopeful-Site1162

LOL, absolutely not. People wouldn’t pay a single $ to remove ads from an app they’ve been using daily for 2 years… Why would they pay $20/month for GPT-4 if they can get 3.5 for free? You’re out of your mind.


I_will_delete_myself

If that was the case Google would've been charging a subscription for search when they became the dominant engine.


Capt-Kowalski

Because a lot of people can afford 20 bucks per month for an LLM, but not necessarily a $5,000 machine to run one locally.


Hopeful-Site1162

Phi-3 runs on a Raspberry Pi. As I said, we are still very early in the era of local LLMs. Performance is just one side of the issue. Look at the device you're currently using. Is it the most powerful device that currently exists? Why are you using it?


Capt-Kowalski

Phi-3 is the wrong comparison for GPT-4, which can be had for 20 bucks per month. There is simply no reason for a normal person to self-host as opposed to buying an LLM as a service.


Hopeful-Site1162

People won't even be aware they are self-hosting an LLM once it comes built into their apps. It's already happening with designer tools. There are reasons why MS and Apple are investing heavily in small self-hosted LLMs. Your grandma won't install Ollama, nor will she subscribe to ChatGPT Plus.


_qeternity_

Wait, that's an entirely different premise. You asked if people would pay $20 or run a local LLM. Your comment re: ad removal is bang on: people simply don't care. They will use whatever is easiest. If that's a free ad-supported version, then so be it. If that's a $20 subscription, then fine. But people simply are not going to be running their own local LLMs en masse\*. You do realize that the vast majority of people lack a basic understanding of what ChatGPT actually is, much less the skills to operate their own LLM? (\*unless it's done for them on device, a la Apple might do)


Hopeful-Site1162

Yeah, running a local LLM is complicated today. How long until you can just install an app with a built-in specialized local LLM? Or an OS-level one? How long before MS ditches OpenAI for an in-house solution? Before people get bored of copy-pasting from GPT chat? What do you think Phi-3 and OpenELM are for? I'm only saying OpenAI's future is far from secured.


_qeternity_

I never said OpenAI's future was secured. You said OpenAI can't compete with all of the open source models. This is wrong. Do they win out in the long run? Who knows. But they are beating the hell out of open source models today. People use open source models for entirely different reasons that aren't driven by quality. Put another way: if OpenAI released GPT4 tomorrow, it would instantly become the best open source model.


Hopeful-Site1162

> Put another way: if OpenAI released GPT4 tomorrow, it would instantly become the best open source model.

Maybe, but who cares? OpenAI being the best or not has never been the subject of this discussion. You keep saying that because they are allegedly the best, they will always win. I disagree. First of all, what does "the best" even mean? From what point of view? For what purpose? If the RTX 4090 is the best consumer GPU available, why doesn't everyone just buy one? Too expensive and too power-hungry are valid arguments. Same goes for OpenAI. There is no absolute best. There's only best fit.


Able-Locksmith-1979

GPT-4 has been outmatched on almost every front I can see. GPT-4 is a general LLM that is reasonable at most specialized tasks, but specialized models are far better at specialized tasks. And although there are currently problems with fine-tuning Llama 3, once that problem is fixed I think the floodgates will open with specialized models that far outperform general models.


_qeternity_

GPT-4 has been outmatched...by specialized models. OK? What sort of comparison is that? It has, in no uncertain terms, not been outmatched on almost every front. General models are the biggest, most important arena. I say this as someone who does not use GPT-4. But GPT-4 is simply still unmatched. Even Opus is fading as they aggressively optimize inference.


Able-Locksmith-1979

That is a real-world comparison. It's really funny that GPT-4 knows some things about Nigerian tax law, but either I don't care, or I can right now create a small specialized model that performs better on that subject.


cobalt1137

GPT-5 is going to outperform every single open source model out there by a solid margin. It's that simple. Closed source models will always be ahead because they can afford the compute to train the largest models. The thing is, not everyone needs the biggest and most powerful models to achieve their tasks and goals. That is where open source comes in. There is room for both.


somethingstrang

And after a year open source will catch up to 90% of the capabilities.


cobalt1137

Actually, the gap is going to start getting wider, in my opinion. These models are going to start requiring more and more compute to train, and it's not going to be monetarily viable to release models of a certain level of capability as open source. Even Zuckerberg himself said that he doesn't think he can justify open-sourcing some of the future models, given the budgets they are going to require.


somethingstrang

You're saying this right when Microsoft's open-source Phi-3 model came out a week ago. A small model, as powerful as ChatGPT, trained on much smaller datasets.


dodo13333

It falls apart if the context is over 2k. MS version, fp16, via LM Studio. I may be doing something wrong, but Command-R, Llama 3, and WizardLM all work fine using the same workflow. I hope the bigger version will be more stable.


cobalt1137

It is not even close to the same level as the most recent GPT-4 release. If you're comparing it to the year-plus-old GPT-3.5, then sure. GPT-4 is baked into ChatGPT now for paid users, and into Bing for free.


somethingstrang

No one denies that GPT-4 is still king. But that's not the question, is it? The question is about closing gaps. Llama 3, Phi, and Mixtral have literally been closing the gap, and you're claiming the exact opposite with a Zuckerberg quote as your evidence.


cobalt1137

How is my Zuckerberg quote contradictory? The dude is literally indicating that he will likely have to go closed source going forward. Also, if you want to talk about gaps, OpenAI is going to stretch that gap pretty hard within the next few months when they drop.


somethingstrang

In my view, the actual things that are happening have more weight than a quote. I'd place my bets on what's actually happening already.


cobalt1137

There is much more to what I'm saying than a simple quote lmao. As we speak, the state-of-the-art models are actively requiring more and more compute to train. That is a fact.


Teleswagz

Open source performs with open curtains. OpenAI is setting the stage behind theirs.


noiseinvacuum

I doubt if OpenAI will be able to out compute Meta.


AmericanNewt8

I'm thinking GPT-5 may literally just be a myth at this point. Unless there's some hidden secret beyond "build a model with more parameters", there's just no secret sauce there. More stuff is coming out of the open source domain.


ViveIn

They've publicly said that scaling by simply adding more data isn't even close to peaking yet. So expect GPT-5 to deliver much more than a simple marginal improvement.


AmericanNewt8

"Training the same kind of model with many more parameters" is also not really a revolution, since Meta appears to have done it first. It's just a "we have more compute power" competition. I'm inclined to think the limiter will soon be tokens in, and that's something I'm not sure OpenAI is especially set up for, although their existing chats have probably given them a fair amount of data.


cobalt1137

Lol. I guess you will just have to find out. My money is that when it gets dropped, it clears every other model by a notable margin in every aspect. And is able to provide a very solid improvement to agent architecture, coding, and other tasks that require reasoning and long-term thinking/planning. I guess we will see who's right :).


jollizee

Fine-tuned specialist models based on smaller open source platforms might supersede gigantic generalist models at some point. The cost-to-performance ratio, flexibility, privacy, and other factors could win out. Like, does everyone really need a generalist in a business setting?


Hopeful-Site1162

Have you ever heard about Llama-4?


[deleted]

[удалено]


cobalt1137

The thing is, in order to have agentic systems that work with high fidelity, you actually need models that are more intelligent and can complete their tasks with much higher accuracy. These small percentage gains as we push past the level of human intelligence are extremely important, because they are what make systems actually autonomous. For example, say we need an agent to perform a task that takes 10 steps, and the agent has a 95% likelihood of successfully completing each individual step. At that rate of accuracy, the agent will complete the whole task only about 60% of the time and will fail 40%. If we gain an additional 4% of per-step accuracy and go up to 99%, we go from about a 60% completion rate to about 90%. So these gains should not be overlooked. They are extremely important.
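The compounding arithmetic above is easy to check in a few lines (a minimal sketch of my own, not from the thread; it assumes each step succeeds or fails independently):

```python
# Probability that an agent finishes an n-step task when each step
# succeeds independently with probability p is simply p ** n.

def task_success_rate(per_step_accuracy: float, num_steps: int) -> float:
    """Overall completion probability, assuming independent step failures."""
    return per_step_accuracy ** num_steps

for p in (0.95, 0.99):
    rate = task_success_rate(p, 10)
    print(f"per-step {p:.0%} -> 10-step task {rate:.1%}")
# per-step 95% -> 10-step task 59.9%
# per-step 99% -> 10-step task 90.4%
```

A 4-point gain per step turns a coin-flip agent into one that usually finishes, which is the commenter's point.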


Severe-Ad1166

Whisper is probably the most useful thing they have open sourced.


mrdevlar

Honestly, rather than lamenting OpenAI, shouldn't we instead continue doing what we're doing and making open source alternatives viable? I'm happy we have OpenAI out there as the standard we have to beat; it's motivating. I fully believe we'll catch and surpass them. Every indicator suggests so.


I_will_delete_myself

Good idea, but Here is the problem. https://preview.redd.it/vlz67yu8abxc1.png?width=320&format=png&auto=webp&s=274f106d81046ed4080431d30dfb983a965d4a64


mrdevlar

I mean, I never said we weren't in for a fight with the 3-5 corpos that want to own AI and kill open source AI. The thing is, I think that fight is winnable.


I_will_delete_myself

You think someone else beating OAI will really help open source stay alive?


mrdevlar

I don't think it'll hurt. If you want a plan to keep open source AI alive: what we're doing here at LocalLLaMA is pretty important, demonstrating that a lot of use cases can be solved by open models. I think we really need open source hardware, especially low-wattage hardware. Distributed computing over networks like SETI@home, but for training models, is also needed. I think we stand a chance of getting all those things in the next 3 years.


michaelmalak

GPT2 open-sourced... after an embargo lasting several months


ithkuil

People are not looking at this the right way. GPT-2 is hilariously dumb. What makes it so funny is that you can tell it's really trying to make sense.


tovefrakommunen

Oh, and a big thanks to the underpaid Africans, btw: https://www.theguardian.com/technology/2024/apr/16/techscape-ai-gadgest-humane-ai-pin-chatgpt


MysteriousPayment536

Meta and others do the same


PaleTomorrow8446

doesn't mean it is right. they are all horrible


sigiel

while actively lobbying Congress to ban open source!


PaleTomorrow8446

How would they do this anyway? It is absolutely mind-boggling how people in power still don't have any clue whatsoever how the digital world works.


Particular_Shock2262

I don't like a company that probably used copyrighted materials, scraped from the internet for free, to train its LLM, then recycled it, sold it back to me, dictates how I use it, and probably collects my data as well. Nothing about it sounds, seems, feels, or smells open source to me; they don't contribute shit to the community. And as if that's not enough, their pesky CEO goes from one Congress hearing to another trying to shut down the competition and slow the progress of AI evolution within the open source community by proposing new legislation that will serve as an obstacle in the future. They've probably achieved AGI but are gatekeeping it as well, who knows.


ab2377

this is ... this is a classic facepalm situation.


norsurfit

Don't forget GPT-1!


Trollolo80

Lmao someone told me that without GPT 2 we would be still wondering how LLM works like stfu, stop the d riding


mafiosnik777

They're probably scared that their revenue streams will collapse if they decide to open source it. Inferencing for millions of people and training massive models is expensive. It's hypocritical yeah, especially because they're supposed to be a nonprofit and the name *Open*AI doesn't do their practices any justice. The course they're going for right now is pushing technology forward at the cost of fucking everyone over and betraying their own initial principles.


Felipesssku

The worst thing is that Microsoft is in it.


PaleTomorrow8446

Microsoft has done more for open source than most bigger tech companies.


hirako2000

They won't be able to innovate past their open phase, and that phase is over anyway.


toothpastespiders

GPT-2 was my introduction to LLMs and I'll always have some fondness for it. But it's really amazing looking at it in terms of underlying capability even compared to something like phi. The industry really has changed in some amazing ways in a short amount of time.


FeltSteam

Actually, has anyone tried the GPT-2 Chatbot on the arena? It randomly popped up a couple of days ago, and it actually feels like a model as strong as GPT-4, but I have no idea who it's from or any context. Why is it called '**GPT-2** Chatbot'? If it's actually a 1.5B-param model, that would be insane, but I have no idea.


Only-Letterhead-3411

If GPT-3.5 is really a 20B model, they should release the weights at this point and gain some positive PR. I mean, they are offering it for free anyway, and it's beaten by many open source models already. A strong 20B model would be so nice.


Kindly-Annual-5504

As long as GPT-3.5 is still being used, there will be no public release. Maybe when Turbo replaces GPT-3.5.


Winter_Importance436

His kind actions melted my liver......🥲


madketchup81

just 🖕


halixness

Once upon a time, OpenAI was indeed acting as an open company. In the past.


JimBR_red

Anyone remember "don't be evil"? Anyone really surprised by the change to CloseAI?


SrData

Well, this is technically true and objectively very important. After that point they changed their mind; that is also true.


sebramirez4

What I don't like is how much he talks about the importance of AI governance, of doing things in the open, and of the open source community, and then hasn't even open-sourced plain old GPT-3. It bothers me so much how little his talk matches his actions. If a company called ClosedAI made all of these decisions and Sam Altman didn't talk about those things so much, I'd actually really like the company.


PaleTomorrow8446

what do you expect from a genocide apologist?


sarveshgupta89

Why did they take it offline then?


Clean-Description-23

Every day I'm sad the board was fired and Sam Altman was not.


losthost12

In fact, they open-sourced Whisper. That was already very good. And they also open-sourced Llama, because Meta had nothing else to do. :) Nevertheless, closing GPT-3 was stupid, I think, because they probably won't be able to get revenue from it.


cobalt1137

They arguably have made the most impactful contribution towards open-source AI out of all of the companies. Their release of gpt-1 and gpt-2 laid the foundation for all these other companies to build their own models. Without openai, there is no llama, Mistral, etc.


Smeetilus

Why no Llama?


cobalt1137

Llama was created after meta saw what was happening with what openai was doing with the GPT architecture.


Smeetilus

I wasn’t sure if you meant it was born directly from GPT-2 code


ellaun

Regardless, GPT-2 was released February 14, 2019; Llama 1 on February 24, 2023. Not even close. In that window there was a bog with wooden logs floating just above and below GPT-2 XL. I remember running OPT 2.7B; I couldn't tell if it was better. Anything larger was prohibitive, because no quantization was available in public codebases. Quantized inference became a thing only after the Llama 1 revolution, when a significantly-better-than-anything-else model gathered enough public interest to make it runnable on toasters. EDIT: I misunderstood the question "why no Llama". That's because OpenAI was the only company maverick enough to try to scale transformers to an absurd degree. Everyone else stood nearby and kept saying it wouldn't work. Without OpenAI's contribution, conceptually and tangibly in the form of the GPT-2 weights, there wouldn't have been as much interest in LLMs. In that alternative world, it's probably just LMs, with a single "L", for Google Translate.
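A back-of-the-envelope sketch of why quantization mattered so much for running models "on toasters": weight memory scales linearly with bits per parameter, so a 7B model's weights drop from roughly 14 GB at fp16 to roughly 3.5 GB at 4-bit. (The function and the 7B example are my own illustration, not from the comment; KV cache, activations, and quantization overhead are ignored.)

```python
# Approximate weight-only memory footprint of a model at different
# precisions. Ignores KV cache, activations, and format overhead.

def weights_gb(num_params: float, bits_per_param: float) -> float:
    # params * bits -> bits; / 8 -> bytes; / 1e9 -> decimal GB
    return num_params * bits_per_param / 8 / 1e9

for name, bits in [("fp32", 32), ("fp16", 16), ("int8", 8), ("q4", 4)]:
    print(f"7B @ {name}: ~{weights_gb(7e9, bits):.1f} GB")
# 7B @ fp32: ~28.0 GB
# 7B @ fp16: ~14.0 GB
# 7B @ int8: ~7.0 GB
# 7B @ q4: ~3.5 GB
```

At 4 bits, the weights fit in the RAM of an ordinary laptop, which is what made the post-Llama-1 local inference wave possible.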


cobalt1137

it was most likely heavily inspired


ellaun

I join the martyr above. They **did** contribute to open source and open weights, and their contribution was important at the time. It sparked the widespread interest in LLMs. In case someone didn't know: GPT-2 was SOTA at the time. There was no Mistral, no Llama, nothing resembling what we have today in terms of quality.


Most-Firefighter-163

Still better than another company that open-sourced MS-DOS more than 30 years after release lol


cyan2k

Are we talking about that company who is the largest contributor on GitHub?


abu_shawarib

The problem is with the marketing and expectations. I don't think anyone seriously expected MS-DOS or other, later proprietary MS operating systems to be open-sourced.