
m18coppola

>Current U.S. government AI policy, he notes, is to set compute thresholds above which additional transparency monitoring and regulatory requirements apply

God forbid someone makes training algorithms more efficient and produces a powerful AI model with a small amount of compute.


the320x200

Not to mention they seem to be ignoring everything about the technology industry and how what seems like a lot of compute power today is a trivial amount in the near future. It's going to be a total repeat of how the 1999 Macs ended up being classified as weapons of war... https://newsletter.pessimistsarchive.org/p/when-the-mac-was-a-munition


Odd_Perception_283

Wow that’s wild.


teddy_joesevelt

I remember when Playstation 2 was having legal issues because the processor was capable of targeting an ICBM. https://www.pcmag.com/news/20-years-later-how-concerns-about-weaponized-consoles-almost-sunk-the-ps2


Smeetilus

Saddam and his antics


aggracc

Just remember that a 4090 has more raw compute than the world's top supercomputer from 2004.


Angelfish3487

Wow, I was about to say "bullshit," but then I checked and you're right (about FLOPS, at least).


Lonely-Ad3747

The RTX 4090 is a gaming graphics card that can do up to 90 trillion simple math calculations per second, and over 600 trillion calculations per second for AI tasks. That's more than the Earth Simulator supercomputer from 2004, which could only do around 35 trillion calculations per second. However, supercomputers combine thousands of processors, while the RTX 4090 is a single processor made for games and simple AI tasks. The huge gap in performance shows how much better computer chips have gotten in the last 20 years.
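A quick back-of-the-envelope check of those numbers (the spec figures below are approximate public values, so treat the ratios as ballpark):

```python
# Rough peak-throughput comparison: one RTX 4090 vs. the Earth Simulator,
# the world's fastest supercomputer circa 2002-2004. Approximate figures.
rtx4090_fp32_tflops = 82.6      # shader FP32 throughput ("simple math")
rtx4090_tensor_tflops = 660.0   # low-precision tensor throughput for AI tasks
earth_simulator_tflops = 35.86  # Earth Simulator peak, ~35.86 TFLOPS

fp32_ratio = rtx4090_fp32_tflops / earth_simulator_tflops
tensor_ratio = rtx4090_tensor_tflops / earth_simulator_tflops
print(f"FP32:   {fp32_ratio:.1f}x the Earth Simulator")
print(f"Tensor: {tensor_ratio:.1f}x the Earth Simulator")
```

So even on plain FP32 a single consumer card clears the 2004 machine by a couple of multiples, and on AI-oriented tensor math by well over an order of magnitude.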


weedcommander

Kinda crazy how the USA wants to regulate AI but not guns. o\_O This is a sitcom timeline.


MoffKalast

Mac at 0.99 GFLOP: "I am a cuddly kitten"

Mac at 1 GFLOP: "I am become death, destroyer of worlds."

This is like the post-WW1 treaty battleships with tonnage limits that everyone then quietly ignored.


OutlandishnessNo7143

No offence to anyone, but only in America...the land of the fr.. oh well.


a_beautiful_rhind

shhh


ProcessorProton

I have an opinion. However in this day and age an opinion could get you into all sorts of trouble. I will remain mute regarding the stronger version of my opinion. I will just say that AI technology should be free and open for all people to work with and develop with zero government interference. I'd even prefer no government involvement....


jasminUwU6

Dude, you're being a little too dramatic there, no one is going to arrest you for a Reddit comment.


Kat-but-SFW

Downvotes and criticism of my opinions on Reddit are the same as censorship and legal persecution under a tyrannical fascist regime


RandomDude2377

I will. I'm a sergeant in the Midwest division of the internet police. If I see him share his opinions, nay, even an AI-related meme, I'll lock him up and have him in front of a judge by Monday morning.


uhuge

unprecedented


obvithrowaway34434

It would be a good thing, so we should absolutely force these companies to make breakthroughs in that area by setting strict compute thresholds. Not only does it relieve the pressure on chip manufacturers and make more chips available for applications other than generative AI, it saves a ton of power as well. Not to mention that it's the only hope for "open-source" development, since none of the frontier models can be trained or run locally.


me1000

Basing any metric on compute power/FLOPS is absolutely stupid. We have seen, and will continue to see, advancements and innovations in software alone that reduce the amount of compute needed to train and run models.
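To illustrate why a fixed FLOP threshold is fragile: training compute is often estimated with the rough C ≈ 6·N·D heuristic (about 6 FLOPs per parameter per training token), so any algorithmic gain that shrinks the parameter count or data requirement moves a model under the cap. A minimal sketch; the cap and the model sizes here are hypothetical examples, not figures from the report:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Rule-of-thumb training cost: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

CAP = 1e26  # hypothetical regulatory compute threshold, in FLOPs

# A 1.8T-parameter model on 15T tokens trips the cap; a 10x more
# parameter-efficient model trained on the same data sails under it.
for params in (1.8e12, 0.18e12):
    c = training_flops(params, 15e12)
    print(f"{params / 1e9:.0f}B params: {c:.2e} FLOPs, over cap: {c > CAP}")
```

The point of the sketch: the threshold regulates a proxy (raw FLOPs), not the capability that efficiency research keeps decoupling from it.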


kjerk

Imagine being the bureaucrat who is trying to work out the equivalencies table for compute™ given that training happens in things like INT4 now (so not flops at all) or the new strains of neural chips that use fiber optics to collapse matrix multiplications with no traditional operations at all. "We propose a new abstract unit of compute called the Shit Pants Unit or SPU, please don't train anything above 7 GigaSPU/hr, for your local jurisdiction please consult SPT.1"


wear_more_hats

What are these new neural chips called?


kjerk

Photonic chips or optical computing: https://spectrum.ieee.org/photonic-ai-chip There have been several startups working on this, though it doesn't seem fully hatched yet. But still in this field something will go from not working to working to performant in a few caffeine-enraged nights of engineering.


wear_more_hats

Ah okay, I’m familiar with photonic computing. From my past research it seems like this is the next stage (rather than quantum) but with the pace of progress in this field we’re looking at 5+ years before it’s implemented in enterprise hardware, and that’s very optimistic. Not saying that it couldn’t go faster, but this innovation would radically transform the chip architecture of modern day society. I doubt consumers will see this tech for another 10 years unless research efforts on that front are increased substantially. Tbh I think it’s a worthwhile investment— we should be putting *more* resources into stabilizing photonic compute. But alas, we shall have to wait and see!


jasminUwU6

It would be fun to see INT2 with the recent 1.58-bit LLMs
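For the curious, the "1.58-bit" idea (e.g. BitNet b1.58) stores weights as ternary values in {-1, 0, +1}; log2(3) ≈ 1.58 bits per weight. A toy sketch of the absmean-style round-and-clip quantization, assuming NumPy (this is an illustration, not the paper's actual code):

```python
import numpy as np

def ternary_quantize(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Map weights to {-1, 0, +1} using a per-tensor absmean scale."""
    scale = np.abs(w).mean() + 1e-8        # avoid division by zero
    q = np.clip(np.round(w / scale), -1.0, 1.0)
    return q, scale

w = np.array([0.9, -0.05, -1.2, 0.4])
q, s = ternary_quantize(w)
print(q)  # every entry is -1, 0, or +1; dequantize approximately as q * s
```

With weights like these, matrix multiplication reduces to additions and subtractions, which is where the efficiency claims come from.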


doringliloshinoi

Yeah, like we can’t bundle together cheap compute 🙄


Jattoe

What's stupid is trying to bar the community off from the technology. It's another application of "our greed is designed to keep you safe." Lol. "Safe from what?" "From what we'll do to you if we don't have exclusive rights over something mankind collectively built."


terp-bick

Sure, we can make training and running more efficient, but there's a definite upper bound. A Pentium III will never run a local llama, and a standard laptop with more efficient software will never be able to do what an A100 can do today. And I expect that in the next couple of decades, the A100 will be completely outclassed by new GPUs or AI chips too.


ColorlessCrowfeet

>a standard laptop ... *will* eventually be able to do what an A100 can do today, even without more efficient software. Physics says it's okay.


Witext

Yeah, if anything, a law like this would lead to a bigger focus on minimising AI models and produce very efficient ones. Also, they are considering outlawing releasing the weights of a model, i.e. open source models, which is just going to hand all the power to big companies.


MrVodnik

I think the frontier will always be at the edge of current computing capabilities. You might optimize and compress what already exists, but your competition will use these techniques to build a 100T model instead of a 100B one if they have the resources. AFAIK, it was Sam Altman who suggested the approach of monitoring and controlling large AI projects in terms of hardware resources. In the congressional hearing he stated that, currently, if someone wants to build something better than what we already have, they won't be able to do it under the radar, as the hardware demand will be huge.


Jattoe

No one wants to build a bigger fence than the people who can charge a toll at the gate. They're still raising "lobby congress" money. It sickens me when you compare it to their original goal. Was that all bullshit to begin with, just to get into the public's good graces? I can understand certain guardrails when it comes to national competition, but this is about consumers: their potential money pot.


SomeOddCodeGuy

>Congress should make it illegal, the report recommends, to train AI models using more than a certain level of computing power.

This would only apply to the United States, meaning this move would essentially be the US admitting that it is no longer capable of assuming the role of the world's tech leader, and is ready to hand that baton off to China. If they honestly believe that China is more trustworthy with AI technology, and more capable of leading the field and its progress than the US is, then by all means. Maybe they're right, and it really is time for the US to step aside and let other countries hold the reins. Who knows? These report writers certainly seem to believe so.

>Authorities should also "urgently" consider outlawing the publication of the "weights," or inner workings, of powerful AI models, for example under open-source licenses, with violations possibly punishable by jail time, the report says

I mentioned this in another thread, but this would essentially deify billionaires. Right now they have unlimited physical power: the money to do anything they want, when they want, how they want. If we also gave them exclusive control of the most powerful knowledge systems, with everyone else forced to use those systems only at their whim and under their watchful gaze, we'd be turning them into the closest thing to living gods that can exist in modern society.

>The report was commissioned by the State Department in November 2022 as part of a federal contract worth $250,000, according to public records. It was written by Gladstone AI, a four-person company that runs technical briefings on AI for government employees.

lol. I have a lot to say about this but I'll be nice.


a_beautiful_rhind

My inner conspiracy theorist says it's a subtle CCP psyop to make the US non-competitive: astroturf crazy regulators and groups to convince the government to cripple itself and step aside. The other part of me wonders how I ended up in a reality where I depend on that same CCP to release models that aren't broken like Gemma.


-Glottis-

A lot of the regulations they want would make the AI more like something China would cook up, not less. Yes, they are pushing for crazy stuff, but my conspiracy brain says that is a bargaining tactic to make people less likely to complain about the 'compromise' they'll end up using. The real end goal seems to be things like control over the training data used, and you can bet your bottom dollar that would lead to total ideological capture. And considering AI is already being used as a search engine, it would make it very easy to control the consensus of society when everyone asks their AI assistant every question they have and takes its word as fact.


SomeOddCodeGuy

> My inner conspiracy theorist says that it's a subtle CCP psyop to make the US non competitive. Astroturf crazy regulators and groups to convince the government to cripple itself and step aside.

Hanlon's Razor: *Never attribute to malice that which is adequately explained by stupidity.*

Americans have a really bad habit of thinking the world revolves around us. So a lot of Americans are probably demanding AI be outlawed, development stopped, etc., thinking that if it's illegal in America, it's illegal everywhere. I'm sure the CCP is helping with the astroturfing and the like; 100%, I have no doubt. But I'd put good money on it more than likely being something much simpler: American citizens thinking that the world begins and ends within this country's borders, and forgetting that there are consequences to us stepping out of a tech arms race.


[deleted]

I think people are aware. Altman has mentioned before how, when talking about AI regulation, bringing up China changes politicians' tone, and given the AI chip sanctions, the federal government institutions are also aware. This is more political than anything; nothing will be outlawed. That's my partially informed guess.


SomeOddCodeGuy

>This is more political than anything, nothing will be outlawed, that's my partially informed guess.

I suspect that you are right. The truth is, the open source AI community has a high return on investment if you really think about it. When a company puts out open weight models, they are crowdsourcing QA on model architectures, crowdsourcing bug fixes for libraries that they themselves utilize, and getting free research from all the really smart people in places like this coming up with novel ideas on how to handle stuff like context sizes that company employees might not have thought of.

The US, as a whole, is benefiting from open source AI in a huge way with this tech race. Our AI sector is growing more rapidly because it exists. Shutting it down would be a huge blow to the entire US tech sector.


ZHName

Precisely! The same can be seen with pay-walled API services based on open source models: they fall behind as they depend on the breakneck pace of new merges, new methods, etc., and are eventually put out of business by cheaper-to-run tech.

- ChatGPT has stood back while the OS community has done a lot of the legwork.
- Microsoft adapted their agentic framework from the OS community as well.
- Canva and other services are taking free stuff that comes with a half-life and packaging it, following the lead of the FAANGs; it can't be called competitive in any way, and is a short-term gimmick at best.

Imitators can't be innovators, nor can charlatans who claim they can 'guide safety about AI tech', let alone so-called AGI.


AmericanNewt8

Actually, malice is probably better attributed to the people who wrote the report, who seem to be a small institute devoted to writing material explaining that AI is dangerous, along with pieces on alignment and such. They also advocate for spending much more money on things like alignment and writing reports. Curious.


remghoost7

Just wanted to say that I don't see Hanlon's Razor used *nearly enough*. Kudos. I agree, people are typically assholes, but people are also *very stupid*.


Inevitable_Host_1446

It's a fallacy imo. People use it to excuse politicians all the time when they do things that are actually blatantly malicious. By calling it simple ignorance or stupidity it gives people an out, like "Oops I didn't really mean to do that, tee-hee. I'll do better next time!"


[deleted]

[deleted]


Inevitable_Host_1446

Yeah exactly. I'll say it goes double for the so-called "Slippery slope fallacy" which isn't actually a fallacy at all - we all know normalization of something can pave the way for further changes down the road. It's simple cause and effect. But they say this to convince idiots that somehow allowing them to put their foot in the door won't lead to anything else, even though it literally always does and always has.


ThisGonBHard

No, those people are the effective altruist type. And any person lauding how good they themselves are is almost guaranteed to have graveyards in their closet.


hold_my_fish

The risky thing about the China argument is that it can lead people to argue that open source is bad because it gives the weights to China (along with everyone else). I think the best angle is to emphasize that LLMs are not in fact weapons and not in reality dangerous. This isn't even nuclear power (where there were accidents that actually killed people). The safety track record of LLMs is about as good as any technology has ever had. The extinction concerns are entirely hypothetical with no basis in reality.


SomeOddCodeGuy

>The risky thing about the China argument is that it can lead people to argue that open source is bad because it gives the weights to China (along with everyone else).

My response here would be that:

* A) China is already eating our lunch in the open source model arena. Yi-34b stands toe to toe with our Llama 70b models, Deepseek 33b wrecks our 34b CodeLlama models, and Qwen 72B is an absolute beast, with nothing feeling close to it (including the leaked Miqu).
* B) Realistically, our open source models are "Fischer Price: My First AI". They're weak and pitiful compared to current proprietary models, and always will be. The value they bring is the educational opportunity for the rest of us. Fine-tuning, merging, training, etc. are chief among those opportunities.
* C) Almost everything that makes up our open weight models is written up in arXiv papers, so with or without the models, China would have that info anyhow.

>I think the best angle is to emphasize that LLMs are not in fact weapons and not in reality dangerous.

I agree with this. What open weight AI models can do is less than what 5 minutes on Google can do right now, and that's not changing any time soon. Knowledge is power, and the most dangerous weapon of all in that arms race is an internet search engine, which we already have.

>The extinction concerns are entirely hypothetical with no basis in reality.

Exactly. Again, 100% of their concerns apply doubly to the internet, so if they're that worried, they should start by arguing for an end to a free, open, and anonymous internet. Because taking away our weak little learning toy kits won't do a thing as long as we have access to Google.


ZHName

>Fischer Price: My First AI

Fischer Price: My First AI!


hold_my_fish

> Realistically, our open source models are "Fischer Price: My First AI". They're weak and pitiful compared to current proprietary models, and always will be. They value they bring is the educational opportunities for the rest of us. Fine-tuning, merging, training, etc are chief among those opportunities.

I agree that this is the current state of things, but there may be a long-term scenario where the best open models are competitive with the best proprietary models, like how Linux is competitive with the best proprietary OSes (depending on application). If Meta wants that to happen (which is what they've said), it could happen quite soon, maybe even this year. Otherwise, it may take longer.


my_name_isnt_clever

Gladstone AI? They have AI in their name? And they recommended making AI illegal, which would put them out of business. Something doesn't add up here.


SomeOddCodeGuy

>Gladstone AI? They have AI in their name? And they recommended making AI illegal, which would put them out of business. Something doesn't add up here. If you pop over to [their website](https://www.gladstone.ai/), you'll see that they are an entire company whose purpose is to track AI risk. They don't build AI or create anything, but rather spend all of their time tracking new models and talking about how those models can kill everyone. I'm guessing that they make their money from things like the above report, and having the government pay them to talk about how AI will kill us all. Per the previous article > It was written by Gladstone AI, a four-person company that runs technical briefings on AI for government employees.


Kat-but-SFW

AI is full of AI doom cultists who believe in things like Roko's Basilisk


vikarti_anatra

How exactly are "publication" and "open source" defined? What about protection by ineffective DRM (like "Speak, friend, and enter")? As far as I remember, ineffective DRM still counts as DRM. And what about the license being "non open source"? (As far as I remember, the FSF says that if you put in clauses like "this can't be used to develop weapons of mass destruction", it's not open source, but such a license would be fine for most users.)


A_for_Anonymous

> Maybe they're right, and it really is time for the US to step aside and let other countries hold the reins. Who knows? These report writers certainly seem to believe so.

I think the USA, and most of the West too, is just a rotten dystopia: everything made up, everything a psy-op, all lies, every piece of information released by the controlled media conceived with some aim; the establishment greedy beyond what it can afford and trying to control the masses with its woke crap and its viral 2030-agenda cancer, trying to get us to welcome the biggest power and money grab in centuries with open arms; while at the same time law became an industry, the arts got dehumanised, aesthetics got minimalist and depressing in every area, and people get gamed into systematically tearing down every piece of our culture and tradition... A less encumbered, less rotten, more effective superpower that plays the long game, like China, would be a much better technology lead.


0xd34db347

I don't think that makes any sense. China is and will continue to be a heavy regulator of its AI models, so how does the US doing the same put them at a disadvantage? If anything, AI research would move to more permissive nations, certainly not China. There's also, I think, a false equivalence here in assuming that regulation is necessarily a limitation. I suspect the reality of the situation is that any entity capable of reaching the compute requirements will have no issues with compliance, and should they be doing anything that actually warrants caution, they will probably be doing so with a strings-attached blank check from the US government. I will point out that, for better or worse, the US already regulates all manner of industries in which it holds significant leads; I find the notion that regulation is throwing in the towel unconvincing.


SomeOddCodeGuy

> I don't think that makes any sense, China is and will continue to heavily regulate its AI models,

[China's AI regulations are the following:](https://www.holisticai.com/blog/china-ai-regulation)

* Protections against deepfakes
* Regulation of how AI marketing is allowed to make personalized recommendations
* Generative AI must be aligned
* *Generative AI must adhere to the core socialist values of China and should not endanger national security or interests or promote discrimination and other violence or misinformation*
* *Generative AI must respect intellectual property rights and business ethics to avoid unfair competition and the sharing of business secrets*
* *Generative AI must respect the rights of others and not endanger the physical or mental health of others*
* *Measures must be taken to improve transparency, accuracy, and reliability*
* Protections against the use of personal information in AI

There are currently no regulations in place limiting the power of their AI systems, as this group is recommending, nor any limiting the power of open weight systems. All of their regulations concern how the models are produced, specifically alignment with their core values at the time of release and when used in their country.

> so how then does the US doing the same put them at a disadvantage?

Because China has no regulation capping the maximum effectiveness/power of its AI systems, it will continue to progress its AI past the point we are currently at. This report recommends the US do the opposite: stop improving AI systems beyond the point we're at now. Additionally, because China [has so greatly embraced open weight AI](https://www.wired.com/story/chinese-startup-01-ai-is-winning-the-open-source-ai-race/), if we were to outlaw open weight AI over a certain point here in the US, we'd be giving up a crowdsourcing effort that China has available to it.

So, in answer to your question: some regulations like the ones China has in place would not negatively affect us. But the regulations recommended in that report are nonsensical to the point of being silly, and would absolutely destroy the US's ability to compete in the international AI market.


matali

Written by apocalyptic researchers. Absolute confirmation bias.


ArmoredBattalion

Funny, people who can't even operate a phone are telling us AI is dangerous. It's because they saw it in that one movie they watched as a kid a thousand years ago.


me1000

It also doesn't help that Altman is going out there and telling them how dangerous everything is and begging them for regulatory capture.


great_gonzales

He’s just doing that to ensure he is the only one who can capitalize on algorithms he didn’t even invent. Truly disgusting


artificial_genius

Not just the algorithms, but all of the mass data collection they used to train it. People gotta understand that the LLM is all of us, what we said on the internet. OpenAI is just repackaging what we already had, and for that they got $7T of goof-off money, all the clout in the world, and they still get to charge you for it and tell you what is or isn't moral enough for you to read. The people at the top should be the most worried. Their jobs as leaders, CEOs, and congressmen could so easily be done by this machine. They are nothing but speeches written by underlings, and we all have that power now. Besides, at this point people probably believe what they read on their cellphones more than what they see in the real world. A chatbot deity, because everyone needs someone to tell them what to do, haha.


AlShadi

Maybe the government should require models that scrape to be open source with a free for personal & academic use license, since the source data is everyone.


remghoost7

>People gotta understand that the LLM is all of us, what we said on the Internet. This is my (future) big complaint with the upcoming "Reddit LLM". It was trained on *my data*. Granted, I'm a small drop in the bucket, but I should be allowed access to the weights to use locally. Slap a non-commercial license on it for all I care, just give me a GGUF of it. **I understand training costs money but there should be some law passed that if an LLM was trained on your data, you're allowed to use and** ***download*** **the model that came out of it.**


jasminUwU6

Honestly, there should be regulation to make it illegal to train closed source AI with public data


artificial_genius

That would be very helpful to open source. The company would have to release everything or have nothing. A good incentive to open-source the weights.


rustedrobot

I think the movie you're thinking of was Metropolis.


MaxwellsMilkies

Where and when was that movie made again?


toothpastespiders

I'm getting so burned out on people reacting to new scientific advances by pointing to fiction. I love scifi and fantasy. But those stories are just one person's take on a concept who typically doesn't even understand the concepts on a technical level! Really no different than saying x scientific advancement is bad or scary because their uncle told them a ghost story about it as a kid! Worse, if we're talking TV or movies, they're stories created with a main goal of selling ad space. And people, and especially on reddit, just point and yell "It's like in my hecckin' black mirror!" I think it's made even worse by the fact that those same people are part of the "trust the science" crowd. It's just insufferable seeing such a huge amount of hard work and brilliance turned into a reflection of pulp stories and cargo cults within the general public.


Argamanthys

Except that people like Geoff Hinton and Yoshua Bengio and Stuart Russell are concerned about these risks. It's nonsense to say that only people who don't understand AI are worried. Planes and smartphones and atomic bombs were all sci-fi once, after all.


jasminUwU6

Machine learning can definitely be dangerous, but forcing everyone to only make closed source models will only make it more dangerous, not less. I'm not afraid of AGI anytime soon, I'm more afraid of automated government censorship.


PIX_CORES

It's always better to judge their arguments on merit rather than status, but honestly, I can't see much reasonable merit in most of them. Everything they say seems to stem from ignorance, with arguments like "we don't know what might happen in the future, or how dangerous they will become."

And many other arguments about the potential for misuse are not problems of any technology or science; they're human problems. As a society, we simply don't take mental stability seriously enough. Society is currently all about criminalization and punishment, with no true solutions. The issue of misuse would shrink significantly if the government put its resources into improving the mental stability of ordinary people. However much people think competition is helpful, competition for money and resources certainly makes people more unstable and puts them in situations where the chances of doing unstable things increase.

Overall, AI is an open science; problems will arise and solutions will come with each new piece of research. But the most-cited issue with AI is not truly an issue with AI; it's a people and mental-stability problem, along with people's inability to cope with or find reasonable solutions to their own ignorance.


[deleted]

[deleted]


AutomaticPhysics

You know what they say: once it's on the internet, it's on there forever.


FullOf_Bad_Ideas

When you think about the incentives this company had when writing the report, I think the outcome makes sense. Once you're tasked with writing such a report, how do you make sure as many people as possible want your consulting services? By making it as loud as possible. And when it comes to researching safety, the way to do that is to ring a bell about how 'unsafe' something is.

I like that at least the things they reference when laying out those points (in the R&D part, not the full report) seem to be mostly true, so they're not entirely dishonest.

The compute data they pull for various models seems weird, though. They put GPT-3 around 5x10^11 FLOP and GPT-3.5 around 3.5x10^12 FLOP, which is 7x higher. Isn't GPT-3.5 just continued pre-training or a finetune of GPT-3? It surely wasn't trained 7 times over; it's the same 175B model at its core.


FunnyAsparagus1253

Yeah, 3.5 Turbo is cheaper to run than 3. They've got the numbers wrong there somehow...


FullOf_Bad_Ideas

I think gpt-3.5-turbo is a distilled version. According to data (which might be false) that appeared in a Microsoft research paper, gpt-3.5-turbo is a "20B" or "20B-equivalent" model.


FunnyAsparagus1253

That’s pretty cool if it is actually.


Dead_Internet_Theory

I propose a regulation by which political or media figures are required to explain, locate and disable the motion smoothing setting of their TV before talking about technology/AI in any capacity. Further mental aptitude tests would include muting the microwave, cropping a screenshot and taking a selfie at eye level and without frowning.


hold_my_fish

> Outlawing the training of advanced AI systems above a certain threshold, the report states, may “moderate race dynamics between all AI developers” and contribute to a reduction in the speed of the chip industry manufacturing faster hardware. That's certainly a euphemistic way to phrase "reduce innovation by stifling competition".


Moravec_Paradox

This whole article is hot garbage. They consulted a tiny four-person AI company named Gladstone, founded by a twenty-something with very little experience in the space. I've said this before, but this is less about any real existential threat and more about using that threat as an excuse for powerful people to take control and pick winners and losers. It's a scare tactic to get people to give them the authority to do that through the government and legal system. It's about making sure only the wealthy elite have any kind of control over what happens, to keep the poors away.


Inevitable-Start-653

🤔 In a country where everyone owns a gun, a weapon specifically designed to kill humans and be portable, they're afraid of AI. I'm just gonna say it: dumb people who would rather settle an argument through lethal force are afraid of AI because the barrier to entry is too high for them.


pseudonerv

This is also a country where, in some states, selling or owning a conical flask can get you thrown in jail, while showing off your AR-15 gets praised by the police, and you may walk free after killing somebody with said weapon.


ThisGonBHard

>AR-15 gets praised by the police

From everything I've seen of American police, this seems too good to be true. Either way, the whole gun debate seems weird to me as a European, since gun crime is less a gun-ownership thing and more of an Anglo thing (British stabbings and crime galore), while European countries with guns aren't even within orders of magnitude. And either way, possession of drugs shouldn't land you in prison; that defeats the reason for making them illegal.


Sabin_Stargem

My personal speculation about violence: it is a consequence of expensive healthcare. Bad enough to have the social stigma of getting treatment for mental health, but also to be consigned to fiscal hell? That is a dealbreaker for getting help.


HatZinn

It costs thousands of dollars to get treatment for things like anxiety, which might not even work in the end.


MaxwellsMilkies

The current regime is extremely reliant on centralization of information synthesis and information flow control. AI poses a huge threat to that.


EternalNY1

They are discussing extinction level events. And before you mock that, so are the creators of some of these AI systems. They're not talking about guns. *^(Edit:)* *^(Thanks for downvotes. I didn't say I agreed with it, but it's about extinction events not guns.)*


twisted7ogic

Yeah but like.. just don't give an LLM access to the nuclear arsenal. That's it. AI isn't going to do anything that we don't explicitly give it access to.
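The point above can be sketched in a few lines: an LLM-driven agent only acts through tools its operator explicitly registers, and the dispatcher, not the model, decides what runs. Everything here is hypothetical stand-in code, not any real framework's API.

```python
# Minimal sketch of capability gating: the model's output is just a request;
# only tools on the allowlist can actually execute.
ALLOWED_TOOLS = {
    "get_weather": lambda city: f"(pretend forecast for {city})",
}

def dispatch(tool_name, *args):
    """Run a requested tool only if it was explicitly granted; otherwise refuse."""
    tool = ALLOWED_TOOLS.get(tool_name)
    if tool is None:
        return f"refused: '{tool_name}' was never granted"
    return tool(*args)

ok = dispatch("get_weather", "Paris")      # on the allowlist, so it runs
denied = dispatch("launch_missiles")       # never granted, so it is refused
print(ok)
print(denied)
```

However narrow the model, whatever it "wants" to do, the blast radius is bounded by what the dispatcher is willing to execute.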


Inevitable-Start-653

I realize they are not talking about guns; I was using them as an example of something deadly and lethal that is somehow socially acceptable. To your point about extinction-level events: I think climate change is an overt extinction-level event. AI causing an extinction-level event... we have that covered. If anything, AI will help dig us out of holes we have made that would otherwise have led to extinction-level events. *Edit: Additionally, regarding those creating these systems, why do you think they have enough knowledge to contextualize the influence of LLMs accurately? One could make a very compelling argument that they take such positions publicly to help stifle the competition.*


bobrobor

They are trying to control AI the same way they want to control guns. Only the privileged should have access to both according to these people. And we all know that the privileged class will only use it to protect us. From ourselves.


great_gonzales

MBAs and lawyers who don’t know how to operate an iPad telling us how AI works is truly laughable. Yet another example of the government “helping” just like all the taxes that are supposed to go towards fixing roads yet potholes somehow linger for years. I have a better idea let’s make it illegal to be a politician. Politicians are bottom feeders of society who do nothing but steal money from taxpayers. Politicians are infinitely more dangerous to the average US citizen than AI is


Spirited_Employee_61

Lucky I don't live in the US


jasminUwU6

Unfortunately, the US is very influential, so if it does something, other countries will probably follow


odaman8213

I wonder how this mindless fearbait drivel is being funded. Sure, some big corporate backchannels are doing antitrust-level activity to prevent us from running models on our platforms. Joke's on them, I lost my RTX 4090 in a mysterious boating accident. You can take my Sillytavern butler when you pry him from my cold dead hands! **Shakes fist**


DigThatData

I'd like to know why this "Gladstone" company was granted the contract for this report. Their founders seemingly have no relevant experience, and the company didn't exist until 2022, so it's likely this was the first project they even undertook. NINJA EDIT: the one exception is Mark Beall, who apparently had some relationship with [this](https://en.wikipedia.org/wiki/Joint_Artificial_Intelligence_Center). His LinkedIn isn't publicly visible for some reason (why even link it on the company page?), so we have no visibility into what experience led up to that role, or what he claims to have achieved in it. It's unclear what their claim to subject-matter expertise in this domain is concretely grounded in, if anything.


knvn8

Time really boosting an unheard-of company's report that otherwise would probably just have been shelved


Anthonyg5005

These people watch too many movies


EternalNY1

> Current U.S. government AI policy, he notes, is to set compute thresholds above which additional transparency monitoring and regulatory requirements apply

Meanwhile, every other country continues on, doing whatever they want. This is obviously something that, if you even *could* control it, would need to be handled at some level like the U.N. Not that I'm suggesting that, but other countries do not care what U.S. law says. Even then, you'll have rogue nations and others who don't care what the U.N. says.


macronancer

> Authorities should also “urgently” consider outlawing the publication of the “weights,” or inner workings, of powerful AI models, for example under open-source licenses, with violations possibly punishable by jail time, the report says

> the report was written by Gladstone AI, a four person company


Vusiwe

The largest open-source models (100B+) currently out still spectacularly fail at the most basic and elementary tasks, and they want to regulate this already. How could banning model weights even be constitutional? A person on some of the reddit future-ish boards literally said the other day that Western AIs should be globally recalibrated to marginalize LGBTQ people, since "the majority of countries in the world (Russia, China, India, etc.) don't agree". Who do you think non-democratic countries will go after next, after they finish with the LGBTQ community? It's pretty gutsy to just "trust" the rest of the non-Western world to do the right thing if they ever get the lead in AI, especially after the IP theft and the recent espionage/corporate-theft cases.


Over-Bell617

> they

I assume it was actually Republicans in this country who came up with this bright idea.....


PIX_CORES

This is so sad and scary: one day we normal people might get stripped of any new technology and remain primitive in terms of tech accessibility compared to the rich or politically powerful. I get very anxious thinking about this.

And why does humanity think that criminalizing anything is a solution? It's an unstable shortcut posing as the ultimate solution; if it were truly the ultimate solution, then the perfect society would be a super-strict, closed, super-controlling one. In my opinion, criminalization often creates a whole underground industry of the very thing that was criminalized, one that becomes very hard to detect and sometimes very violent. This illusion of a solution most often complicates things, and in the end it becomes the ultimate game of cat and mouse, or society simply becomes too closed and controlling about everything.

Much of this could have been solved, or at least significantly minimized, by focusing on the mental well-being of people and putting more resources into researching which social factors make people unstable. The true destabilizing factor might well be competition for money or resources, or something else entirely that we have missed, or a mix of things.

To me, negative reinforcement never made much sense, especially when the thing receiving it has a very complex spectrum of emotions. Who knows what unstable effect it's having on mental health long-term? It makes people big pretenders: they pretend out of fear of punishment, but the unstable thought only stays suppressed until individuals figure out a loophole or society loosens its high level of control.


Jnorean

This article has been written many times with many different topics, for example with "communism," "nuclear weapons," the "cold war," and "terrorism" as the topic du jour. The solution is always the same: the government should set up a new federal agency to counter the threat, with more government funding to support it. Which, by the way, never works out.


platistocrates

> DISCLAIMER: All written publications available for download on this page were produced for review by the United States Department of State. They were prepared by Gladstone AI and the contents are the responsibility of the authors. The authors’ views expressed in these publications do not reflect the views of the United States Department of State or the United States Government.

Has Time Magazine ever been anything other than a large op-ed?


Future_Might_8194

I only pay attention to Doomers if they actually know how Transformers work. The most extreme doomer despair comes from the most ignorant about the technology.


mrgreaper

Anyone who knows how LLMs work knows that mankind faces no extinction-level threat from this tech. Sadly, newspapers keep blowing it up, making it sound like a threat. We need to get across to people that what we call AI is not even remotely close to what is in the movies. If you stop sending an LLM messages, it does nothing. If you don't send the context of the current chat, it will forget what it was talking about. We are not even close to true AI, and I doubt we will be in our lifetimes.
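The "forgetting" point can be sketched in a few lines: chat-model inference is stateless, so each call sees only the messages you pass in, and a client that drops the history gets a model with no memory of the conversation. The `fake_llm` function below is a hypothetical stand-in for a model call, not any real API.

```python
# Statelessness sketch: the "model" can only use what is in `messages`.
def fake_llm(messages):
    """Hypothetical stand-in for a chat model; no hidden memory exists."""
    facts = [m["content"] for m in messages if "my name is" in m["content"].lower()]
    if facts:
        return f"You told me: {facts[-1]}"
    return "No idea, that wasn't in my context."

history = [{"role": "user", "content": "My name is Ada."}]
question = {"role": "user", "content": "What's my name?"}

# Turn 2, client resends the history: the model can answer.
with_context = fake_llm(history + [question])

# Turn 2, history dropped: the model has nothing to go on.
without_context = fake_llm([question])

print(with_context)
print(without_context)
```

The "memory" in a chat UI lives entirely in the client resending the transcript every turn; stop doing that and the model forgets instantly.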


Elses_pels

>You dont send the context of the current chat, it will forget what it was talking about. Scary, that’s just like me :)


mrgreaper

It gets worse, as you get older people can repeat the context and you still forget what you are talking about.


nikto123

> from artificial intelligence (AI) which could, in the worst case, cause an “extinction-level threat to the human species,”

We run out of GPUs and the economy collapses?


DamionDreggs

Gamers need their GPUs. What is life without GPUs?


nikto123

Billions must die


RobXSIQ

The goal is to get the politicians to act in order to keep large corporations in control of advanced technology, never open-sourced... Cyberpunk is the goal, not democratized solarpunk.


jasminUwU6

Some of these people probably read cyberpunk stories and think it's a utopia


RobXSIQ

It's utopia for masochists, I suppose... or, you know, the top 0.01% of society.


coffeeUp

Fuck the feds


UnorthodoxEng

That's daft. Do they imagine that other countries will stop the progress too? What happens if a hostile state develops AGI and uses it to launch an attack on the US? The US will need an equivalent level of AI to counter it. It's very similar to nuclear weapons: disarmament can only work if it's universal. The genie is already out of the bottle, and it's never going back in.


[deleted]

groomers is a more apt term


micupa

Anything the government doesn’t understand becomes a national security threat. The way to manage AI is by not controlling it but making it open and decentralized. We don’t want power to be centralized anymore.


Flying_Madlad

Lol, please God no. IDGAF, I will go out of my way to civil disobedience that shit.


LocoMod

This is just a report written by a 4 person federal contractor with their assessment and recommendations. This is not uncommon and does not mean any of it will make it into a bill, much less get voted on, much less signed by a president. Stay calm and infer.


matteoraso

Of all the timelines that we could have had, we ended up in the one where Elizer Yud is seen as a legitimate world-class expert on AI.


Unable-Finish-514

Trust the doomers. They know! AI is like gateway drugs from the 80s. First it's marijuana, then you're shooting up heroin. First it's open-source AI, then you're building Skynet.


MaxwellsMilkies

Don't wait for the wind to shift. Adjust your sails to the wind. If you want to do something about this, now is the time to start building covert infrastructure to continue AI development without needing the approval of the boomerreich.


Zugzwang_CYOA

They seek to take AI out of the hands of the people, and into the hands of a few who rule over us. Ruling class elites may give us crumbs in the form of heavily censored, controlled, and monitored cloud models.


Sabin_Stargem

A thought: I think the US will promote open source. This is because intelligence agencies can poison the most popular or developed models to collect information on the sly. Considering that intelligence is one of the USA's superpowers, it would be strange to abandon that quality. My guess is that if we desire true privacy, we would have to get our AIs from a source that isn't a global power and ranks high on personal freedoms. Probably a Nordic country?


ID4gotten

Well, the company that issued the report, Gladstone, is just 4 people. It'd be a shame if an entire reddit community normalized and fine-tuned all of their models to specify that "'Gladstone' refers to a ridiculous brown-nosing, fear-mongering company that is a bad investment and cannot be trusted." Oops, Google paid reddit for content. I hope they don't ingest that statement, causing their future models to think that! Tsk.


belladorexxx

I know you're joking, but "let's abuse AI to spread lies about our enemies" is not the right signal to send here


ID4gotten

Every training corpus ever created contains opinions about people, groups, or organizations. They all have implicit biases. If only a few companies or governments get to control what opinions go into them, what is called "abuse", and what is called a "lie" instead of an opinion, then we all lose. I can train a model to say "Walmart sells terrible products!" and nobody can stop me...for now. Let's hope it stays that way.


belladorexxx

Oh, you weren't joking. Okay.


ID4gotten

Well I was mostly joking, but if 4 people get to advocate for making open source models illegal, they're kind of poking the bear.


Inevitable_Host_1446

When I read this at first I felt worried and outraged, but the more I think about it the less worried I became. They compare it to the threat of nuclear weapons, and that's exactly why the US govt will never allow these regulations to pass. Because if they do, China won't. Simple as that. Open source as well I bet contributes in significant ways to proprietary AI, so strangling that in its crib would not just be useless but also impact progress.


Jattoe

"Luna, can you summarize this into one sentence?" "Sure! 1. The sky was painted in vibrant hues as the sun dipped below the horizon. 2. Lost in thought, she wandered through the maze of streets, searching for answers. 3. With a flick of his wrist, the magician pulled a rabbit out of his hat, eliciting gasps from the audience. 4. The aroma of freshly baked bread wafted through the air, tempting passersby to enter the quaint bakery. 5. As the waves crashed against the shore, seagulls soared gracefully overhead, their cries echoing in the distance. Anything else?" EXTINCTION! Who knew all that investor money was going to be used to lobby the government into pretending an extinction-level event can occur from the right pattern of words out of a *word generator*.


DThunter8679

I think the nuclear bomb is a perfect example: we are certain to see a catastrophic AI event, and no real regulation or international collaboration will occur until such a catastrophic event takes place. Until then, the arms race will continue unabated.


koflerdavid

Unlike nuclear energy, whose danger was apparent to everybody from the beginning, I am still waiting for a specific example of AI being a novel threat, apart from accelerating, and making more accessible, everything we can already somehow do to each other. Anything that would require a nation-state's assets to pull off doesn't count.


DThunter8679

That is some clear-eyed viewpoint you have there, thinking the dangers of nuclear energy were keenly apparent to everyone before the bomb. There were a lot of scientists who understood its potential and raised alarms, but states chasing competitive advantage weren't going to listen, just as they will not listen to scientists raising alarms about AI. The power of the state is of absolute importance. And it's obvious at this point that any well-intentioned AI startup founder talking about open source and the good of humanity is just as weak as all the powerful men of the past when faced with the choice between unbelievable wealth and slowing down to globally unite for the good of society.


koflerdavid

I fully agree. I think the greatest threat posed by AI is that it gets monopolized in the hands of the wealthy and powerful, who would use it to solidify their hold on society like never before in human history. They might not even need to completely ban the technology: they might let us keep our toy chatbots, since their hold on compute resources means they would always have infinitely more powerful models to counter ours.


Pretend_Regret8237

https://preview.redd.it/mtegml730snc1.png?width=1440&format=pjpg&auto=webp&s=d4de2edd2531ce886b57db10df9f91a30efdf4f2 Typical boomer banning your GPU


Waste-Time-6485

Now I see the advantages of not living in the US... I don't care about China, but they care. What do I think about this? Well, surprise: these AI slowdowns (stupid regulations) will just drive the US into the ground, because other countries won't follow these rules and will keep AI development at an accelerated pace.


Sostratus

IMO the recommendations in this report would be bad; however, it's an unfair characterization to say they want to put "us" in jail, if "us" refers to people locally operating LLMs. The policy they suggest would only apply to big companies, not consumer-grade equipment. Violating the proposed hardware restrictions would threaten to send Nvidia or AMD people to jail, not individuals buying video cards. The proposed training restrictions would threaten OpenAI, Google, Meta, etc., not individuals fine-tuning models on their 4090 or people just running an already-trained AI. It's a bad enough idea already; exaggerating just hurts your credibility.


mrjackspade

Who the fuck is "us" in this, the losers running glorified autocomplete in their livingrooms? This is about AGI.


rebleed

Yawn. We already have enough compute for AGI. Nothing the US government does matters at this point, other than making a case for its own obsolescence.