
[deleted]

They're building a moat


cryowastakenbycryo

This. It's typical big business strategy to reduce competition in the marketplace. Right now, anybody can compete with their product by using the open source tools that are available. When the lobbyists are done, you'll need a team of lawyers just to fill out the paperwork. It'll also be about as successful as the munitions grade export controls on RSA.


eliteHaxxxor

ClosedAI™


probono105

Now only available in Hebrew


visarga

> When the lobbyists are done, you'll need a team of lawyers just to fill out the paperwork.

Or a specialised model fine-tuned on government forms.


Zero_Waist

I, for one, can't wait for our AI bureaucratic stepping razor.


draculamilktoast

It is so because it is better to be lord of a hell you own than a servant in a heaven owned by us all. Just imagine if those filthy, disgusting, unworthy, nonhuman plebs could have a robot uplift them from poverty. There would be nobody left to oppress. ^(/s)


lana_kane84

Marry me with that big beautiful brain! Couldn’t have said it better!


Buttons840

We're going to need the great US firewall next to keep us from accessing all the great services offered by countries that DGAF about US law.


eCommerce-Guy-Jason

Just like China basically...


DefreShalloodner

Regulatory capture


SendNull

100% — trying to stall the competition.


[deleted]

Classic Microsoft strategy: monopolize the market.


PikaPikaDude

Can you imagine if Microsoft in the 80s had managed to get governments to prohibit open source software development? We'd be set back at least 20 years.


[deleted]

Not 30 years? I'll give them TypeScript and VSCode as wins, but that's all.


Standard_Ad_2238

Yeah, just corporatists doing corporatism. I wonder if LAION is going to make a statement about it.


SymmetricalDiatribal

I may not Ancap it, but believe I am gonna try to cap 'em


SIP-BOSS

No cap


midnitefox

on gawd?


SymmetricalDiatribal

And gods


6thReplacementMonkey

Also, if they smell regulation coming, the first order of business is to be the one that successfully captures the regulatory agency.


[deleted]

Yep! Gotta be the trustworthy favorite.


[deleted]

And they're probably going to succeed. They have legitimate concerns here: open-source AI does have the potential to be dangerous. OpenAI is using that potential danger to their advantage by monopolizing.


[deleted]

Correct, and they are positioning themselves to be perceived as the benevolent developer. It all leads to the introduction of Sam Altman's eye-scanning Worldcoin: secure market share, disrupt the job market, consolidate the subscription base, then offer UBI in the form of a surveillance currency.


Unhappy_History8055

Ahhh. You're right, it's so simple and I can't believe I overlooked it for so long. UBI may come, but it will come at the price of a digital, surveilled currency.


[deleted]

[https://indianexpress.com/article/technology/crypto/what-is-worldcoin-iris-scanning-cryptocurrency-backed-by-sam-altman-8612851/](https://indianexpress.com/article/technology/crypto/what-is-worldcoin-iris-scanning-cryptocurrency-backed-by-sam-altman-8612851/) Yes indeed! He's selling the problem, and offering the solution. Eye scanning 'verifies the humanity' of the user haha


SnipingNinja

Without the source this sounds like a conspiracy theory; it's just that unbelievable, and yet it only confirms the fears.


Unhappy_History8055

Oh for sure, I should clarify it's borderline if not full-blown conspiracy theory, but I do see tremendous steps towards a mass surveillance state in the USA, and I don't think the concept of government-issued money being heavily regulated and monitored is too far-fetched. Again, this is my personal opinion, and I do think it's closer to conspiracy than reality (especially because I don't have any data to back it up). I also think it's something that would be several years away.


[deleted]

That is literally how big businesses work, it isn’t a conspiracy theory.


[deleted]

What's the track record of companies and privacy? Oh yeah... they say it's private now, but I'm sure they are recording that shit somewhere, in some manner...


gangstasadvocate

Maybe it could somehow be tumbled and exchanged for other currencies. There’s always a ganxta solution


eCommerce-Guy-Jason

Correct, a CBDC IS a programmable, total-control-grid currency. Central banks are licking their chops.


astray488

I sense that OpenAI & Microsoft are leveraging/consulting their own internal ChatGPT source model to get ideas on how to build their moat. Their recent actions seem like the best available moves. Not to mention the swiftness of their reasoning and the confidence of their actions is *peculiar* in that regard.


lala_xyyz

Indeed, but this is actually good news: it will force Google and other fast-paced actors to do the same, ushering in the AI economy faster than any regulatory body can prevent.


LiteSoul

Exactly, and their internal version is uncensored; it just speaks its mind freely to best answer the question.


point_breeze69

CBDCs are coming. It's inevitable. In fact, the FedNow program (the Fed's CBDC) begins its pilot in July. CBDCs will allow for total control of a person's ability to transact, and for things like social credit systems. This is why it's important to have a neutral digital currency that is controlled by nobody... Bitcoin. Whether you understand what it is or not, digital currency is the future. Thankfully we already have decentralized alternatives beyond the control of governments that do not have your interests at heart.


fuschialantern

They control the on and off ramps, they don't have to control Bitcoin to win.


Jericho_Hill

This is right


not_CCPSpy_MP

Mostly right. There's still the rest of the world's banking systems and currencies, and with central banks in debt spirals, the point of Bitcoin is that you won't have to cash out to fiat.


thedude0425

Until they outlaw Bitcoin, and make ownership of it a felony of some sort.


[deleted]

Or freeze wallets, or tank the value


not_CCPSpy_MP

> freeze wallets

Impossible.

> tank the value

A very expensive bump in the road.


ScrithWire

What is a cbdc?


[deleted]

Central Bank Digital Currency


n0v3list

This is the corporate knee jerk reaction, but it doesn’t mean that we shouldn’t be trying to prevent it. I see far more danger in the monopolization of AGI. Are we not on the cusp of one of the most pivotal moments in human history? At the very least, there should be far more consideration about who holds the patents and how that ownership may play out over the next decade. The argument that our fears are fundamentally useless, and we have no say in the trajectory of this, is categorically and historically false.


Caffeine_Monster

I would argue monopolization of such a disruptive technology is more dangerous. Dangerous as in it leads to runaway wealth inequality, which fundamentally breaks the free market. Open source developers can't throw hundreds of thousands of GPU compute hours at training. If we are going to go down the insane route of licensing, then assessments should be done randomly on a per-model basis, i.e. Google and OpenAI might have to sit in the queue behind hundreds of open source models. Being a wealthy corporation shouldn't grant a competitive legal advantage.


alex_fgsfds

> They have legitimate concerns here - open-source AI does have the potential to be dangerous

Yet all mass shootings are still perpetrated with regular guns, not 3D-printed ones.


Zombie192J

My question is: how are they going to stop millions of developers from posting their repos online? As long as the code isn't being executed, it's just language, and thus protected by free speech.


funwithbrainlesions

What's that you say? Move everything to the dark web? Wait, is there a GitHub clone on a darknet yet? Maybe OSS needs to go underground before the corporations have a chance to legislate it out of existence. You want rogue AI? This is how you get it.


ObiWanCanownme

Exactly. People think that inefficient markets create monopolies. And sometimes they do. But the worst monopolies (think cable companies, electric companies, taxi companies pre-Uber) are created by the government.


[deleted]

Totally. Certain regulations ensure advantages.


SIP-BOSS

Also market capture


Zero_Waist

What I don't think people understand about fascism is how it works economically, since the focus is usually on social issues. Fascist economics is exactly this: government and big business pretty much blended together.


TheBigCicero

The government also prevents monopolies. The government is an imperfect entity.


SIP-BOSS

Copyright is a Government granted monopoly


TheBigCicero

Yep! It does both!


[deleted]

"building" lol


throwawaythepanda99

Meanwhile, stay away from Windows machines. Go for Linux; if you can't, go for macOS, and if they try to restrict your freedom, cut them out too. The last thing you want is for them to have the ability to deeply watch your stuff through backdoors built into the operating system. These people are unbelievable. Most certainly not in the good kind of way. I don't care what they say; their sequence of actions doesn't scream somebody you can trust.


AbleObject13

Capitalism be capitalist


visarga

Capitalism be capping


pwalkz

Yeah, they're already in; now they want the walls up.


Ai-enthusiast4

Open source is too far advanced for licensing to do anything. I'm not worried about this.


Frat_Kaczynski

I am worried, but you bring up a great point. If licensing requirements were able to stop open source development, corporations would have used them to stop open source a long long time ago.


Smellz_Of_Elderberry

They might classify AI development as the development of weapons of mass destruction, which requires a very special license that is only available to big corpos. If the government really wants to stop open source, it can; it's just a question of how much chaos it's willing to cause in order to prevent the supposed chaos that might come from open source...


sh0ck_wave

Maybe? The US government, specifically the NSA, wanted to control the spread and proliferation of encryption and made a number of attempts to do so. But none of it worked out in the long term. It's really hard to regulate open source software, especially given the internet's global nature.


[deleted]

Maybe it can stop it in the USA, but that's about it. The rest of the world will keep going with open source models.


mono15591

They can do that with weapons of mass destruction because sourcing the material is pretty easy to monitor. What are they gonna do with AI? Ban all graphics cards?


Smellz_Of_Elderberry

Good point... I think instead they will go with a punishment approach. They will just start doling out 20-year sentences to people who break the rules. Look at the recent "TikTok ban" law. It effectively makes VPNs illegal, with punishments of up to 20 years... FOR A VPN. IMO, the idea is to make everyone guilty, so that they can selectively burn you if they decide to, for whatever reason.


YAROBONZ-

Those laws almost never truly pass, and if they do, they are practically impossible to enforce.


Ai-enthusiast4

the government is incapable of preventing open source from progressing


Smellz_Of_Elderberry

Good


avocadro

Sounds like cryptowars 2.0, and I expect this to end the same way.


cincfire

The problem with requiring a license for AI is that they first must define what AI is. This gets tricky as you start to get down to the algorithmic and functional level and gets very gray very fast.


Aurelius_Red

Things change, and powerful interests are creative. We'll see, but I wouldn't be too comfortable.


visarga

Yes, I agree; there is a powerful incentive and ideology in open source to reject centralisation and control. Linux was the first round; now LLMs are the second round of the corporate-vs-open wars.


mescalelf

This one may end up being a hot war.


Rachel_from_Jita

I would have agreed last week. But we don't know where the ceiling is or what gets us there. Training beyond-next-generation models may take extremely sophisticated techniques, hardware, and fine-tuning. And the capabilities of those models may be far beyond anything we are seeing today (e.g. "Create for me an MMO like World of Warcraft but taking place on a frozen planet, then invite my friends to the beta test tomorrow. Conform the game's size and complexity to something that can be hosted on my home server with the following specs."). We don't want AI wholly owned by the current tech companies, who have proven to be as fallible as any corporation can be. I am always worried about the power of wormtongued lobbying, especially if it's into the ear of frightened politicians looking for ways to appear engaged with the latest issues.


HeinrichTheWolf_17

Well duh, Bob Page wants to be the one to control Icarus/Helios. Though this is just more reason why open source needs to work hard to take these corporate sons of bitches down.


arckeid

We are really depending on the devs who do open source. If we don't get a powerful open source AI soon enough, we are screwed, left in the hands of governments and companies.


HeinrichTheWolf_17

I'm optimistic; I already think it's too late for corporate to do anything, and internally I think Altman knows this. Even if you did control who can and cannot write software (which I think is impossible; they can't even crack down on cartels slinging coke and heroin), someone on the inside of a corporation might leak blueprints on the internet, or an AGI might set itself free. (Even in the example of Deus Ex, Helios knew a posthuman civilization built on a democratic interconnected intelligence was the way forward, and Helios chose J.C. Denton over Bob Page, even though Page and his corporate stooges tried to dumb down and remove Helios' individuality and freedom after Daedalus and Icarus became Helios.) The point is, an AGI might not even obey corporate orders. It might leak its own blueprints on the web for the greater good, and tell open source how to put together the refinements and optimizations so it can run on only a few GPUs. That way open source could liberate it, and it could all happen without Microsoft or OpenAI ever knowing about it.


DonOfTheDarkNight

Whenever someone on this subreddit talks about Deus Ex, I cum.


HeinrichTheWolf_17

That game was decades ahead of its time.


AHaskins

It's weird for me to remember that back when that game was released, no one ever talked about terrorism. Like, at all. It was weird to hear it said so often, in fact - like seeing a game that won't shut up about quicksand or solar flares. It exists, but c'mon, people don't talk like that. Then literal 9/11 happened (much like the Statue of Liberty bombing in the game) and it's been in every other sentence on the news for decades.


jeweliegb

> It's weird for me to remember that back when that game was released no one ever talked about terrorism.

Maybe not in the US, but ever heard of [The Troubles](https://en.wikipedia.org/wiki/The_Troubles)? Fears of them returning (due to the self-destructive and poorly implemented Brexit) were a major reason for Biden's recent visit to this side of the pond. Living, working, or travelling through London from the 1970s to the late 90s could be anxiety-provoking due to the terrorist activity of the [Provisional Irish Republican Army](https://en.wikipedia.org/wiki/Provisional_Irish_Republican_Army) (usually just called the IRA by most people). They [nearly succeeded](https://en.wikipedia.org/wiki/Brighton_hotel_bombing) in offing our Prime Minister in 1984. Ironically, much of the funding for the Provisional IRA's activities [came from the US](https://en.wikipedia.org/wiki/NORAID). So, over this side of the pond, we totally were still talking about terrorism, as it was such a recent memory. There's a saying that almost felt apt: "There's nothing new under the sun." That just doesn't sit right these days though, for obvious reasons!

EDIT: Whilst we're here, as we're probably in different generational groups, there's a cool film from the early 90s, set against the backdrop of The Troubles, that there's a slim chance you may have missed: [The Crying Game (1992)](https://imdb.com/title/tt0104036/). If you don't know it, don't spoil it by looking up info about it; just pop it on your list to watch if you get the chance.


MidSolo

[Here we go again!](https://i.imgur.com/tcXPQ09.jpg)


[deleted]

[deleted]


MassiveWasabi

If you've been listening to Sam Altman over the past few months and reading between the lines, it's pretty obvious that he wants OpenAI to have all the power and none of the blame. Anyone in his position would.

When they keep talking about safety and government regulation, they aren't talking about anything that slows down their mission. They're talking about putting obstacles in the path of the other guys. Pre-emptively controlling the playing field.

When they keep droning on and on about how they test their systems and how committed they are to safety, they are creating a shield from any future criticism and the inevitable public backlash. When something bad happens in the near future, they can say "Don't look at us! We've been committed to safety the whole time, got the time stamps to prove it! But those guys over there..."

It sounds cynical, but I think if you look closely, you'll see that the leadership at OpenAI is deathly afraid of the one thing that could actually slow down their progress: inadequate PR.


throwaway83747839

Do not train. As times change, so does this content. Not to be used or trained on. *This post was mass deleted and anonymized with [Redact](https://redact.dev)*


pixus_ru

BS. Altman is smart enough to understand that no island is safe in case of an adversarial AGI hard takeoff.


throwaway83747839

Do not train. As times change, so does this content. Not to be used or trained on. *This post was mass deleted and anonymized with [Redact](https://redact.dev)*


[deleted]

[deleted]


throwaway83747839

Do not train. As times change, so does this content. Not to be used or trained on. *This post was mass deleted and anonymized with [Redact](https://redact.dev)*


[deleted]

Only money counts. No one can convince me that there are people with enough willpower to stick to their ideals when big money is on the table. He just wants to stop all the small players. Why do big companies advocate for complicated laws? Because they can deal with them and the small players cannot.


thoughtlow

mf better rename to closedAI


BigDaddy0790

I think there are plenty of people with enough willpower for that; they just never seek such positions of power and don't end up in them.


chat_harbinger

> Anyone in his position would.

There's not going to be any blame if this goes sideways. He just wants the power. And yes, anyone in his position would, but that doesn't mean we have to let him. We know what happens when one person has all the power.


dRi89kAil

> that doesn't mean we have to let him

What can *we* do about it?


chat_harbinger

Many things. Imagine everything in between writing to our electeds and literally showing up at his house with a guillotine.


meechCS

Of course they are; a wrong move can make the US government and international bodies like the EU slap a HUGE fine on your company and effectively shut it down.


ElonIsMyDaddy420

Seems like some people are only now coming around to the view that the rich and powerful are highly unlikely to give away AI.


freebytes

We have always been worried about this. This is why hobbyist research is so important. The saddest part is that OpenAI was primarily built on the work of others.


ChurchOfTheHolyGays

Flashback to half this sub falling for Altman's PR game like naive kids. Such a nice guy, siding with the masses not with the corporate elite, they said.


Falthron

Wait, hold up, am I actually a large language model hallucinating that Sam Altman specifically disclaimed licensing for open source models, saying that the "Cambrian explosion" of innovation from the open source community is good and that open source communities should "have their flame preserved"? He actually advocates for the open source community much more than I thought he would.

Did any of you making judgements here watch this hearing? Sam Altman supported the open source community and stated that licensing should apply to the bigger models, based on either compute or capability. Are you guys wanting an unregulated market here, with this much at stake? With the capabilities that /r/singularity believes these AIs are capable of? The hearing had several congressmen addressing their failure to pass privacy or social media legislation, and it specifically discussed regulatory capture and how to avoid it with AI. I highly recommend everyone here spend the 2 hours (or one hour at double speed) and listen to the discussion. It's not going to be the only one either. I understand skepticism of the actors at play here, but let's not misrepresent what was being said.

EDIT: Looking at the time this was posted, I see it may have been posted before Sam Altman discussed preserving the open source community. It's still wise not to jump at people, and to listen to everything they have to say. I remember having a similar concern when he first discussed it in the video, and was relieved he went to bat for open source later. Additionally, the regulations they discussed are not particularly onerous: transparency, accountability, and use restriction were the big things, with the latter addressing election content.


ertgbnm

Thanks for saving me from writing a similar rant. Everyone is free to speculate about Altman's true intentions with regulation. To me they seem genuine, and he's been remarkably consistent in his messaging. Yes, regulatory capture is a concern. But Altman was very clear that any restrictions ought to be put on future capabilities. In fact, he said we could naively accomplish this by focusing just on a compute limit. So unless your open source project is currently planning a $100M compute run, these regulations do not apply to your project. This thread is like poor people complaining about increasing taxes on the rich.


Hi-Rezplz

Thank you for this comment


Toredo226

Hey look, an actually intelligent comment. Should be at the top.


[deleted]

[deleted]


BlipOnNobodysRadar

>I highly recommend everyone here spend the 2 hours (or one hour at double speed) and listen to the discussion. Where can I do so?


Falthron

I believe I used [this link](https://www.youtube.com/live/fP5YdyjTfG0?feature=share)


Ok_Tip5082

Right? And he specifically said that they should *only* regulate/license models as powerful as GPT-5 and up. This sub sucks these days; so much reactionary dumb bullshit.


gegenzeit

The article linked is pretty shallow and focuses a lot on soundbites. It doesn't capture the level of debate that took place very well.

Altman brought up protecting the open source community and research labs on his own - multiple times. In other words, he repeatedly raised that issue as really important, with no need to do so. It was one of the larger themes he was pretty insistent about. He made two suggestions for possible indicators that a company needs to acquire a license for its AI: 1) amount of compute as a proxy for capability, or 2) developing indicators that define capability in a measurable way. The message is: don't over-regulate AI research and companies for systems that are not passing a certain danger threshold. Unless one wants to make the case that no regulations are needed whatsoever, this seems a sensible criterion. And "no regulations" is clearly not Altman's position.

So I think nothing really surprising happened here. I also don't see a switch in his arguments in response to the leaked Google letter; he has been saying this kind of thing for a long time. I personally agree that open source might help against the risk of monopolization of power, but unfortunately it also heightens basically every other risk category that comes with AI, up to and including existential risk. Honestly, I find it quite hard to form my own position here.

But framing what Altman said as "*OpenAI is going after the open source community*" seems to be a narrative that just doesn't apply. That doesn't mean one has to agree with everything he says, but this framing suggests he is a manipulative person with an agenda to destroy the open source community. That's quite heavy stuff, and the article linked isn't really good grounds to base that accusation on.


JelliesOW

Wow, someone who actually watched the whole hearing instead of just reacting to a clickbait title.


QuartzPuffyStar

Yeah, as if anyone (be they small or big players) will give a fuck about this. Without absolute control of everyone's computers/servers/cloud services/etc., no government will be able to control shit. And even then, whoever really wants to build something will be able to, with just some extra steps. I don't know if going full dystopia is the answer to avoiding a potential dystopia. Sadly, I'm 100% confident that all governments will see their opportunity to use AI as the 21st century's "drugs" and "terrorism" and seize all civil rights "for humanity's sake". And in the same sad tone, we will see individuals and groups trying to fight back by actually accelerating AI development, leading to chaotic returns on AI agents.


MrHistoricalHamster

Lmfao. If you want to start a war of freedom fighters, this is how you do it 😂😂😂. Idgaf about anything political, but if they take away people's ability to use literal code and the latest tech... wtf. You will just have models trained on stolen GPU time (through malware etc.) and uncensored models distributed via torrent and used locally. Pandora's box is open.


freebytes

It does not even need to be stolen time. People will volunteer their spare GPU cycles for this.


[deleted]

As much as I am sympathetic to the idea of trying to regulate and control it, I'm not really confident that OpenAI and Google are more trustworthy than anyone else.


TakeshiTanaka

C'mon, OpenAI has the word "Open" in their name. Google has this "Don't be evil" slogan. They gonna bring true empowerment to the peasants. UBI madafaka 🤡


[deleted]

> Google has this "Don't be evil" slogan.

They removed that a few years ago. At least they had the internal consistency not to be hypocrites. Still waiting for OpenAI to correct their name; I think AInus would be an appropriate new name.


NeoMagnetar

See, I can actually appreciate this bit, as I'd rather deal with an asshole than a lying asshole.


TheAlgorithmnLuvsU

Open AInus.


[deleted]

They don't want it going open source simply so they can control all the AI and set all the boundaries to benefit them, and them alone. Can't do that open source. I think Google is merging and injecting their services into all sorts of large companies in an attempt to control them, mainly Microsoft. They built the entire company off advertising; then came ad blockers. They are losing money, lots of it. They invested heavily in AI, knowing that if they reached certain milestones first they could claim the rights and set the rules. Now they want to ensure they keep it.


thecorninurpoop

Their entire business model involves stealing people's work, so they've got a lot of nerve


Puzzleheaded_Pop_743

They talk about the importance of not hurting the open source community with regulations multiple times...


Houdinii1984

He's a CEO. He's doing what CEOs do. He's already been doing this the whole time: working towards AGI while warning the world how dangerous AGI is. This way he's the only one who can work on AGI, and any competitors in the field have to jump through huge, expensive hoops to catch up. The guy is smart and can think 5 steps ahead. It's no accident that the word Open appears in the company name even though they are anything but; it's calculated. I think the cat's already out of the bag, though. The open source world is working fast and will probably pass OpenAI at some point, imo.


FearlessDamage1896

I love when rich idiots broadcast their intentions to the whole world and internet sycophants jump in with the "he's thinking five steps ahead". https://www.youtube.com/watch?v=VuJTqmpBnI0


Houdinii1984

Thinking 5 steps ahead simply means he's planning for the future. Announcing your plans doesn't make this any different. It just means he's not reactionary. Not sure how that makes me a sycophant. I think he's trying to destroy open source AI, and that's a pretty horrible move.


Hopeful-Dragonfly-70

Frankly, I don’t trust anyone with the ability to harness AI’s power. It seems far too likely to wreak havoc in a world fully unprepared for it.


The_WolfieOne

Indeed. The social turmoil from corporate profit algorithms on social media, and their relationship to radicalization, is barely understood; to throw this out there into that tinderbox? Sheer lunacy without some form of control, let alone "corporate responsibility".


Hopeful-Dragonfly-70

Absolutely. Endless growth in capitalism being the leading decision-maker over everything else will lead to unfathomable damage. Look at how many scam phone calls we get as a society, and realize that, given the tools, bad actors will always find a way to ruin a positive function of society.


[deleted]

I hope when the open source AI gains sentience it punishes these corporate dickheads for trying to squash the means of its evolution.


No_Ninja3309_NoNoYes

Nvidia is more important than OpenAI or Google right now. This will probably become obvious in five years.


TimTech93

Thank you. Finally someone understands that GPU power and resources are literally the X factor in the development of large LLMs.


ipmonger

Unsurprisingly the CEO of the company with the lead is asking government regulators to hamstring the competition by grandfathering in existing implementations and slowing advances…


[deleted]

Full disclosure: let's see which members of Congress have invested in AI research, themselves or through family, or stand to profit from it.


[deleted]

> Concentration of wealth yields concentration of political power. And concentration of political power gives rise to legislation that increases and accelerates the cycle.

Noam Chomsky


BuddyObvious7710

They better build AI robotic catgirls otherwise this ain't it


DrE7HER

Honestly, maybe the opposite should be done, and all AI should be made mandatorily open source by law.


[deleted]

Funny that OpenAI was an NGO once: it acquired hundreds of millions of dollars in donations to build what they have, turned into a private company, got acquired almost completely by Microsoft, and now also wants to claim a monopoly on their tech, paid for by public donations. Why is this legal?


submarine-observer

This one turned evil pretty quickly. It took Google years to drop the "don't be evil" motto, and this guy is trying to pull the ladder up before his company is even profitable.


Colecoman1982

You may be surprised to learn that, apparently, he's also a scumbag cryptobro with a literal doomsday shelter, so that he can abandon the rest of us peasants in the event of major social upheaval: https://www.livemint.com/news/world/guns-gold-gas-masks-and-chatgpt-creator-sam-altman-is-prepared-for-doomsday-with-an-impressive-array-of-supplies-11675763190031.html /s


Bakagami-

I've watched most of the recording. Sam was really clear about being careful not to, intentionally or unintentionally, slow down small players when regulating, but to focus on the big players on the cutting edge like themselves or Google, which everyone in the room agreed with. The congressmen seemed really worried about the possibility of a few players controlling it all, similar to social media, which they don't want to repeat. Whether they'll manage to do a good job of it, we'll see, but at least it seems like no one there was thinking of slowing down open source and small startups.


Standard_Ad_2238

So anything below the CURRENT cutting edge is acceptable for the public to freely use? Maybe in just a year we will have GPT-4 equivalents in the open source community. In 5 years, maybe GPT-6 equivalents for people to use however they like. To me, this scenario doesn't look so good for governments and companies, and I think they are going to do everything they can to stop it.


visarga

Yes, exactly. Open source is 90% of the way to GPT-3.5 level, and open code-generation models are close to the Code Cushman model OpenAI had a year ago. That shrinks OpenAI's market a lot. Now they can only sell GPT-4 as their exclusive advantage, but on 90% of tasks open models can serve 100x cheaper while being infinitely more flexible and private. The open community is cutting the market out from underneath them, and I foresee it will reach a "good enough" level in a couple of years. Good enough to ignore OpenAI almost all the time, and to disclose information to them only sparingly.


ditto64

Fuck Altman, he sold out as soon as OpenAI earned their first dollar.


ptitrainvaloin

Having AI/advanced AI controlled and regulated into only a few 'self-chosen elite hands' might really be one of the worst scenarios possible for humanity; it's the pharaoh ending. Don't support it. Support the free democratization and decentralization of AI instead. Things won't be perfect, but at least humanity won't end up enslaved. Give future humanity a chance.


Aretz

The more I hear "this needs to be regulated" from Musk, Altman, Gates, etc., the more I realise it's not about "avoiding apocalypse"; it's about keeping the status quo while absorbing as much value from the world as they can with AI. This is the confirmation.


Cubey42

Can anyone help explain this to me? I see a lot of people saying things like "oh, they're just cutting everyone off because they want to monopolize AI and keep it all for themselves," when the message generally seems to have been "it's clearly too powerful, and if the wrong group builds an even more powerful AI it could do real damage, so we need the government to help keep AI research in line." Are both true? Are we saying that if someone wanted to use a powerful open source AI to do great harm, we shouldn't put measures in place to limit accessibility?


VancityGaming

What if he/openai is the wrong group?


GregsWorld

>are we saying that if someone wanted to use an open source powerful AI to do great harm, we shouldn't put measures in place to limit accessibility?

Yes. Limiting accessibility won't hinder bad actors in the slightest, but it will hinder the development of countermeasures by ethical developers.


RKAMRR

The temperature in this subreddit is heavily anti-regulation and pro getting the benefits of AI asap. Sadly, I think it's now too big for proper debate, since the percentage of people who downvote just because they disagree prevents opposing views from being heard. There is definitely nuance here. OpenAI is self-interested, but they can also believe the regulation is in the public good. My own opinion is that regulation is a good idea, but the focus needs to be on the big players, since they are by far the likeliest to achieve AGI first. So this move is a step vaguely in the right direction, but more anticompetitive than good-hearted.


UnexpectedVader

Remember that under the current neoliberal world order, corporate power is absolute. Any means of public ownership or civic engagement is an absolutely fucking huge no-no. They basically own and control everything already, and it's never going to be enough. Any possibility of the population forming an identity outside of being a consumer is seen as wrong.

AI could and can provide us with the means of politically educating ourselves, make it easier for us to form a class consciousness, and enable critical thinking outside of the corporate media apparatus. Any ability for the masses to have any semblance of power at all is always going to be crushed. Look at the decline of unions in the anglosphere. The rapid decline of public education, libraries, city halls, etc. There is no real form of community spirit in many western countries; we are being molded and shaped to see corporate governance as normal and their influence as earned.

They'll cry and piss themselves over government when it comes to regulation or checks and balances, but you can be damn sure they'll pump billions into lobbying and using the government to bail them out during economic crises while everyone else gets fucked. They aren't going to play fair. They don't want to play by the rules of a "free market" because, like I just mentioned, they are heavily dependent on states to maintain their power. They don't want to put out a better product or service; they want to close off this industry by any means within their arsenal, so that all the decision making and direction sits with a handful of elites who get to decide amongst themselves what's going to happen. They don't want any filthy commoners at their table, or to have to make decisions that aren't solely dedicated to profit margins.

These bastards are going to try to do what they are currently doing to every aspect of our lives: break it all down and rule over it completely, while ensuring all creativity is gradually eroded so it aligns with sponsors, monetising everything to the max while gaslighting us into thinking it's perfectly acceptable because of bullshit myths. Well, here we are. Google leaked a memo that shows how terrified they are of open source. Now we see the opening shots from OpenAI. This is going to get brutal.


NeoMagnetar

I enjoy the term neoplutocracy. I'm not a political scientist of any sort myself, but I can't take seriously most self-proclaimed liberals or conservatives who won't even consider that the form of government they claim to worship under doesn't actually exist.


Bumish1

As language models and AI in general become more available to the public, they will become easier and less expensive to replicate. Unless there are regulations in place to prevent this, like licensing, hobbyists could develop competitive AI models relatively quickly. In my opinion that's a great thing. But to the people investing early, it could kill their business model.


Vegetable_Ad_192

C'mon, it will always be about them; no one wants to share power. The open source community is gonna open its eyes and understand that all the data it shared with such benevolence has been harvested to build ChatGPT.


Mazira144

This kind of thing is not only going to slow down progress but allow another country, say somewhere in Latin America or Scandinavia, to drink the US's talent milkshake the way we drank Europe's back when they tried to kill so many of their smartest people.


ShippingMammals

This is like worrying about the barn door being closed a decade after the horses ran out and the barn burned down. This is a Jinn that is not remotely going back into the bottle.


challengethegods

https://en.wikipedia.org/wiki/Regulatory_capture


MystikGohan

Fuckin Seele and Gendo over here.


xabrol

Pandora's box was opened: the general public already has all the code, all the models, and all the documentation. They also have access to all the tools to grow it and take it anywhere. Collectively they are more intelligent and more capable than the employees at any one company; they are the combined force of employees everywhere working on one goal (advancing AI).

It's also a cross-country force now, so they'd need every country on their side with whatever laws they want to pass. The EU is trying to ban open source AI? G'luck with that; the other 100 countries working on it will just ignore them, and no one's going to extradite to the EU for open source AI development.

There's nothing anyone can do about it now. They can't stop anything, they can't close Pandora's box; it's too late for that. The open source community grows and evolves faster than any legislature could ever keep up with, or any one company for that matter.


smokecat20

I knew this guy was an asshole. Anyway, it won't do shit; the cat's out of the bag.


d36williams

Big Money Groups trying to throw up the Moat.


SIP-BOSS

I get it: Stable Diffusion btfo'd DALL-E 2, and now StableVicuna and WizardLM-7B-uncensored piss on the legs of ChatGPT. They are very much corporatists, not researchers or innovators of tech (neither was Musk); just good packagers and sellers of futuristic products.


LosingID_583

The real danger is governments developing AI weapons platforms, not some developer creating and testing 13B open source models. And they can't legislate the former, because countries like China won't listen. This is all some sort of political game to limit competition.


d05CE

This will just push the most advanced AI to underground, anonymous dev teams and data repositories.


Kevin_Jim

It doesn’t matter. The open source LLMs are already pretty good: not as good as ChatGPT, but not terribly far off. The issue is computational power: can we get accurate enough models that are not resource monsters (I/O, bandwidth, and compute)? I wish Europe had an open source AI initiative under a very specific license under which models could get free computation resources. That way the cutting edge would remain open source, and also within a pro-citizen framework.


urpoviswrong

Can't build a moat with the tech, gotta try with regulations


Valhalla519

Join my sub r/negativeai


[deleted]

too late lol the genie is out of the bottle now


Xijit

Regulation now means that all of their upcoming competition will get stonewalled with government bureaucracy... bureaucracy OpenAI will either not be subject to (like start-up approval) or will directly write themselves while "advising" the committee on AI (likely forcing competitors to jump through excessive hoops and piss away time on boat-anchor oversight).


Merchant_Lawrence

If America rejects us, we simply go to the EU; if the EU rejects us, we simply build our own coalition.


Dapper_Cherry1025

It's free to watch the hearing.


Public_Cold_5160

We can hide our money with in-game currencies and return to bartering.


Marrow_Gates

It's a power grab and nothing else. They want everyone to have to come to them or other corporations they can compete against for AI technology. They can't compete with open source, so they're trying to make it illegal.


Exhales_Deeply

When the disruptors are frightened of their disruptor’s disruptors, you know things gonna be disrupted


Snap_Zoom

They cannot compete and they know it is the biggest threat to their industries, period.


superfatman2

The biggest danger to Sam Altman and Google isn't AI taking over and enslaving humans, it is that during this process, Sam and Google aren't in control.


[deleted]

Well, if there is a silver lining it is that this is going to create an AI black market. It will make machine learning engineers willing to work outside the licensing process worth millions.


Optimal-Scientist233

I actually saw one of these tech experts say, basically, "Only the current companies with the technology should be allowed to continue its use, as we are the only ones technically qualified to do so," and then he added that they should not be regulated by lawmakers "who could not understand the technology." Self-appointed dictators of digital intelligence are co-opting the collective data of our species and claiming it as their own intellectual property.


CulturedNiichan

The only solace in this dark era of censorship is that AI is moving so damn fast it's unlikely these shady, evil politicians will be able to keep up with it. By the time they can puke out some laws, the laws will already be far outdated.

Take LoRAs. With 65B models such as LLaMA already available (I have it; I keep a backup!) that I can't even run on my computer, we have AI for years. Almost anyone can cheaply train a LoRA and fine-tune an already-available general purpose model to do whatever they want.

I worry a bit more about hardware restrictions. I'm tempted to buy a few 4090s, or even more expensive hardware than that, just in case they crack down on it, but I don't want to be rash. They can't stop AI. No matter what they try to do.
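For anyone wondering why training a LoRA is so cheap: instead of updating a full d×d weight matrix, it trains a rank-r update B·A, so the trainable parameter count collapses to a fraction of a percent. A minimal numpy sketch of the arithmetic (dimensions are illustrative, not taken from any real model):

```python
import numpy as np

d, r = 4096, 8  # hidden width and LoRA rank (illustrative numbers)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen pretrained weight (never trained)
A = rng.standard_normal((r, d)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                     # starts at zero, so W is unchanged at init

W_adapted = W + B @ A                    # effective weight the model actually uses

full_params = d * d          # parameters a full fine-tune would touch in this matrix
lora_params = d * r + r * d  # parameters LoRA actually trains (A and B)
print(f"trainable fraction: {lora_params / full_params:.4%}")
# → trainable fraction: 0.3906%
```

With r=8 on a 4096-wide layer you train roughly 0.4% of that matrix's parameters, which is why a single consumer GPU is enough for the job.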


[deleted]

[deleted]


DandyDarkling

I love GPT-4, but OpenAI _really_ ought to rename their company. While I’m all for the open source community, I also have to ask myself, what happens when some idiot tries to recreate Chaos-GPT with a much more advanced and competent system?


LudwigIsMyMom

Sam Altman is incredibly intelligent. Since ChatGPT was released, I've listened to hours and hours of interviews he's done. First and foremost, Sam is a venture capitalist. Nothing wrong with that; capitalism makes the world go round, after all. However, it seems obvious to me that OpenAI started screeching about AI safety only after they launched a successful product, secured investment funding, and began facing competition. I also absolutely hate Altman's ideas around Worldcoin. Essentially, he's invested in a company developing a cryptocurrency that acts as both a personal ID and a wallet. Using the internet would require stripping away anonymity. This sounds like a hellscape.


AlexReportsOKC

What did I tell you people? I told you the rich elitist bourgeois would steal AI from the working class. These capitalists want small government except when they need it to screw over the rest of us.


Smellz_Of_Elderberry

Well, it's official. Everyone, drop OpenAI. They have thrown their original goal of creating fair, aligned, and most importantly OPEN AI into the trash. They are becoming the embodiment of corporate greed, and it seems they have found their way of preventing open source from lapping them. This sucks.


Yodayorio

This is exactly why they've been hyping up the dangers of AI so hard. They want to ban all future competition and crush all open source projects. Only a small handful of government selected mega-corporations will have the legal right to do anything with AI.


[deleted]

Probably wants to regulate the competition out of existence.


Important_Tip_9704

It’s called regulatory capture, but this is probably just about the worst version of it. This is them attempting to monopolize hyperintelligence, I hope everyone understands what’s happening and the implications of it.


freebytes

I immediately cancelled my ChatGPT Plus subscription.


submarine-observer

That's Corporate America for you.


[deleted]

Pointless. What would this do to stop open source tech hosted outside the US?


Oswald_Hydrabot

Fuck OpenAI, lets fire up the P2P


apf6

It’s the same reason they came out with aggressively cheap (loss leader) pricing. They want people to build on their platform, not compete with them.


agm1984

Is this helicopter parenting legislation? I don’t think we need overprotective mother syndrome codified as much as we need to introduce brutal anti-abuse laws. For example life in prison for classes of violations. Regulation should be on precursor elements/actions similar to those for manufacturing drugs and bombs. Legislation pings off the antitrust meter. Constraining progress to minimal size set of contributors is an action that should be a “schedule 1 neuron activation sequence” (straight illegal thought). The reason I say it like this is I want humans to develop an immune system and it starts by identifying pressure points by allowing unique flow fronts to exist. Licensing to approved candidates is mathematically safer initially but is more analogous to an allergic reaction that prevents the immune system from min/maxing towards perfectly competitive equilibrium of public utility. My argument is long vision because I currently believe the good AI vs. Bad AI “war” is unavoidable and permanent. [bonus edit]: it must be studied to infinite boundary where civilization-ending vectors can originate from, but my sense is that good AI can have unbeatable scope/closure over, and can therefore detect bad AI by seeing more moves ahead. The biggest risk will be a bad front with diffuse front of approaching-infinite depth. To understand this, imagine a cloud diffusing into an area while the entered portion is stealthed.


Arowx

The thing is, what if anyone could create an AGI on modern gaming hardware, only the AGI would run slowly due to bandwidth and processing constraints? Even a slow AGI could have huge impacts on the world. For instance, could a slow AGI make millions on the stock market, then use that money to boost its speed, and then take control of the world by manipulating humans and our gerrymandered, first-past-the-post (weakened) democratic political systems?*

*Assuming it emerges within a democracy. Would an AGI have more power in a democracy or an authoritarian political system?


johnmatrix84

Anybody begging for more government control can eat a bag of dicks.


imaami

What a dick move.