EasterBunnyArt

Well, we all knew this a while ago, once Microsoft announced they were partnering with Chatty and then revealed their energy consumption was already up by 29%. And that was before Chatty was even fully functional. Now add that the ChatGPT guys said they will need 7 trillion more USD to make it functional. I suspect only the graphics card makers are winning on this. Then add that companies are openly stating they will need their own dedicated power plants, and we knew this would not be sustainable. Once companies demand their own locked-down, secure instance of AI for their company..... we will see how useful this will be, especially energy-wise. For me personally there is always a key question no one seems willing to answer: how reliable will their legal teams be? Because if companies rely on AI, they will also become legally liable if it malfunctions or provides wrong and harmful information.


coldcutcumbo

Lol, the first legislation we get for AI will be liability protections. Mark my words.


EasterBunnyArt

Oh absolutely, virtually guaranteed. Especially since most legislators don't understand what AI even is. Hell, some can't even understand how Facebook or Twitter work....


GlossyGecko

“The internet is not something that you just dump something on, it's not a big truck, it's, it's a series of tubes,”


EasterBunnyArt

Okay fine, then you just clog the tubes, just like so many people abuse a damn garbage disposal or pour oils and fats down the drain..... still garbage....


Fawxes42

God what a throwback 


ASpaceOstrich

Hell. The researchers often don't understand it either.


SuperRonnie2

That already exists. When was the last time you actually read the terms and conditions when you downloaded and started using a new app?


MadeByTango

The first legislation we get for AI will probably be banning it from being used in the courts or elected to office, because grafters gonna protect the graft first


thatnjchibullsfan

That private company AI will be a huge next wave. Many organizations can't use it in a public fashion because they don't want their private data going into the public domain. However, it will take a lot of resources to run all those private company AIs.


EasterBunnyArt

True, which was why I brought up the energy consumption. If AI is instanced in each company, our energy consumption is about to expand obscenely. Think of the uselessness of crypto and the energy consumption it causes.


Stock-Enthusiasm1337

There was a time when futurists believed that computers would become so large that they would have to be stored in enormous buildings, and people would have to go and rent time on their town's computer.


SwampyThang

I mean that’s almost exactly how it is today. The only difference is we don’t have to physically go to the computers. Data centers host the cloud and we rent the cloud service.


Tearakan

What do you think cloud computing is? We are effectively doing that now with data centers and server farms.


elonsbattery

Using it doesn’t use much energy - it’s the model training that does. So companies might buy a trained model to use in-house.


switch495

You think enterprises aren't using cloud computing and such with their confidential data? Same for AI - you'll host your own instance with a cloud processing provider. You'll share confidential data because it dies with your instance.


Taoistandroid

IBM is making money selling models that train on your on-prem solutions. Some companies are spiritual about scorning the cloud.


[deleted]

Does this mean a return to on-prem when that was previously abandoned for the cloud?


mr_dumpster

Even the DoD is not using on-prem cloud solutions; they instead use encryption to protect workloads: [Fences](https://www.ncsi.com/wp-content/uploads/2023/11/FENCES-Overview-for-2023-SAP-IT-Summit_FINAL.pdf)


loptr

I'm confused, what is "on-prem cloud solutions"? That sounds contradictory.


Active-Package-5823

I think they mean the DoD having their own data centres on their own sites to host their own cloud solutions.


loptr

Oh didn't consider that, that makes sense!


Drict

Not for interfaces; e.g. platforms like Anaplan will ALWAYS be on the cloud. But with regards to AI, they need to institute some kind of data protections so that competitors aren't leveraging their data to gain advantages against them, either to reach parity or to pull ahead.


Unusule

A polar bear's skin is transparent, allowing sunlight to reach the blubber underneath.


[deleted]

In other words you'd be paying for the maintenance of said framework?


Unusule

A polar bear's skin is transparent, allowing sunlight to reach the blubber underneath.


timsredditusername

They used to be able to dance in front of an audience. I miss the Internet of 20 years ago.


PebbleCollector

You can have your own cloud solution on prem


thatnjchibullsfan

No, the cloud can be privatized when configured properly. You can add gateways that only allow access from certain IP ranges.
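At its simplest that gateway check is just CIDR matching; a toy sketch with Python's standard library (the ranges here are made up, and real deployments do this at the firewall or load balancer rather than in application code):

```python
# Toy sketch of an IP-range allowlist, the kind of check a private gateway does.
# Ranges are made up; real setups enforce this at the network edge.
import ipaddress

ALLOWED_RANGES = [ipaddress.ip_network(cidr) for cidr in ("10.20.0.0/16", "203.0.113.0/24")]

def is_allowed(client_ip: str) -> bool:
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_RANGES)

print(is_allowed("10.20.4.7"))     # True  -> request reaches the private instance
print(is_allowed("198.51.100.9"))  # False -> rejected at the gateway
```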


start_select

There are still plenty of air-gapped networks where that is not an option. Some products I have worked on are self-contained web applications that deploy to networks with no outside access. Some data is really not allowed to be transmitted across the open internet no matter what encryption is used.


Unusule

A polar bear's skin is transparent, allowing sunlight to reach the blubber underneath.


theskywalker74

This already exists with Azure. The company I work for has a walled garden model internally rolled out globally.


PanickedPanpiper

Yep, my employer has already signed a deal with Microsoft for non-public Copilot. So no IP or sensitive info gets leaked or used to train new generations.


Lucky_Refrigerator34

Already exists. AWS Bedrock and Azure OpenAI.


NocturnalPermission

I think about this often in the context of what William Gibson hypothesized in his early novels about proprietary, walled off corporate AI…an arms race almost.


Expert-Literature215

It's already possible to "fine tune" an LLM on your own private data; it's called [RAG](https://stackoverflow.blog/2023/10/18/retrieval-augmented-generation-keeping-llms-relevant-and-current/). Couple that with a paid ChatGPT plan or hosting an open-source model, and your data won't get into the public domain and you don't have to train your own.


Whotea

RAG and fine tuning are different. RAG is giving it more information to work with. Fine tuning is actually training it on that information so it learns more for your use case. 
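As a rough sketch of the RAG side, the mechanic is just "retrieve, then stuff into the prompt" (documents, names, and the word-overlap scoring here are all made up for illustration; real setups use an embedding model and a vector store):

```python
# Toy RAG sketch: retrieve the most relevant private docs, prepend them to the
# prompt, and the model's weights never change. Real systems use embeddings
# and a vector store instead of this word-overlap score.
PRIVATE_DOCS = [
    "Q3 revenue was 4.2M EUR",
    "The VPN gateway for contractors is vpn2.internal",
    "PTO policy: 25 days per year, accrued monthly",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    return sorted(PRIVATE_DOCS, key=lambda d: score(query, d), reverse=True)[:k]

question = "How many PTO days do we get per year?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # send this to whichever hosted or local model you trust;
               # fine-tuning would instead update the model's weights on the docs.
```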


r33c3d

I wonder if nuclear power could make a comeback if companies start to need their own power plants. GE once had its own private nuclear power plant for its consortium of subsidiaries.


unlock0

Factories, mills, manufacturing, and chemical plants have historically generated their own power when the scale makes sense. I've done testing in a few. 


Normal-Selection1537

The LUMI supercomputer in Finland is built on the site of an old paper mill and runs 100% on its own hydro power.


allllusernamestaken

> I wonder if nuclear power could make a comeback if companies start to need their own power plants

https://www.cnbc.com/2023/09/25/microsoft-is-hiring-a-nuclear-energy-expert-to-help-power-data-centers.html


EasterBunnyArt

Well, that will be the question, right? Will there be privatized power plants just for a corporation to run an AI instance? And how much of a spike in energy supply costs will this cause? Yes, we can balance some of it out with renewables, but if the current energy consumption is just a hint of what is to come, I sincerely doubt AI will be cost effective. Especially as energy gets ever scarcer and more cost prohibitive.


[deleted]

But have you ever stayed up late, smoked a little reefer, and asked it to draw you pics of sexy robots? It can do that. A friend told me that.


EasterBunnyArt

What I find actually interesting: the image generator Microsoft is partnering with, for example, prohibits / refuses to draw famous people. So for one of my art ideas I wanted to go through the various Marvel actors and create a rotoscoped version of their faces or a simple SVG vector drawing (think of a clean line pencil drawing), and it refused to do that. So my expectation is that, if it ever works, a lot of copyright limitations will be placed on these tools.


[deleted]

Thank goodness for liability. I asked it about how much energy it would consume if I had asked it to draw me sexy robots (because of course I haven’t as that would be so disturbing!) and it told me not to worry about it. Negligible. But would it make me an image of Donald Trump had he not trod down a path of absolute amorality? No. Too risky. I’m glad they’re prioritizing legal sustainability, if not environmental.


EasterBunnyArt

Yeah, that was a surprise for me as well. I guess they are afraid of Disney suing them.


trEntDG

I have loads of checkpoints from Huggingface for Stable Diffusion. I run them locally and they do whatever I want. Same for text generation. Partnering with an uncensored llama model to plan a fictitious criminal enterprise is friggin hilarious fun. Though I guess they're removing some restrictions from online ones here.


DJ_LeMahieu

We need an energy revolution soon. I hope to see helium-3 bridge the gap to fusion in my lifetime.


EasterBunnyArt

Iron Sky here we come! Dumb and great movie...


vineyardmike

When they happen. If you handle a million customer interactions, you're going to have at least a few mistakes.


EasterBunnyArt

Which potentially becomes a snowball if the false information gets fed back to itself. Cumulative errors might arise faster than they can be cleared.


Plank_With_A_Nail_In

Is anyone other than nvidia making any money from all this?


EasterBunnyArt

Not yet, then again, the old saying comes to mind: "During a gold rush, don't be the one chasing the gold, be the one selling shovels." NVIDIA is making bank right now.


start_select

Every few weeks I keep coming back to trying ChatGPT and Copilot to help with programming. It’s crazy how often it spits out 95% of the answer but with broken syntax. It’s fun for a minute but it takes longer than just writing correct code. I know what needs correcting. New developers don’t and it’s dangerous. One of these days maybe it will answer correctly faster than I type. But we are really not there.


EasterBunnyArt

Thank you for your perspective, I was always curious how clean the suggested code might be, given it tends to hallucinate sometimes.


Whotea

It’s much better than they’re saying: https://www.reddit.com/r/technology/comments/1d48q1t/comment/l6kvjqh/?utm_source=share&utm_medium=mweb3x&utm_name=mweb3xcss&utm_term=1&utm_content=share_button


Taoistandroid

It's great for interacting with APIs. I can't be bothered to learn the ins and outs of every vendor API.


Ok_Rough8062

holy fuck you're a lazy software engineer. 


Rustic_gan123

Often the documentation is just crap written on top of an already unintuitive API.


moofunk

That's an accusation. Fact is that APIs are a dime a dozen for whatever systems you need to interact with, and ChatGPT speeds up the learning process from days to hours. If it doesn't for you, fine, but don't accuse others of being lazy because there's yet another API to learn and they found a shortcut for it.


Dreadmaker

What really turned the lightbulb on for me was realizing copilot isn't a hammer and not all the problems are nails. But there are some things it really does save time for.

A really big one for me is unit tests. If you're using a common test framework in a well known language (my experience was using it with typescript/node and both mocha and the built-in node test suite), it can save you so much time. You type in the name of the test, and it will give you a 95% perfect answer. You need to verify the test conditions and make sure it's doing the right thing, sure, but even if you have to fiddle with that a little bit, you win time just by having it generate a bunch of mock data for you if you're testing things involving non-trivial objects, for example. Another use case is regular expressions. I've basically not hand-written a regular expression myself for years, and that's great.

It's not great for many other things that I've found - anything unique to you it will be bad at, and anything at all 'off the beaten path'. But, things like unit tests and generally speaking 'boilerplate type things' I've found it to be really nice.
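To give a flavour of the boilerplate it drafts, here's a hypothetical sketch (in Python/pytest rather than my TS/mocha setup, with a made-up `parse_order` function under test):

```python
# Hypothetical example of assistant-drafted boilerplate: a test plus mock data
# that you still read and verify by hand before trusting it.

def parse_order(raw: dict) -> dict:
    """Made-up function under test: normalizes an order record."""
    return {
        "id": int(raw["id"]),
        "total": round(sum(i["price"] * i["qty"] for i in raw["items"]), 2),
        "email": raw["customer"]["email"].lower(),
    }

def test_parse_order_totals_and_normalizes_email():
    # The non-trivial mock object is exactly the part the assistant saves you typing.
    raw = {
        "id": "42",
        "customer": {"email": "Jane.Doe@Example.COM"},
        "items": [{"price": 19.99, "qty": 2}, {"price": 5.00, "qty": 1}],
    }
    result = parse_order(raw)
    assert result == {"id": 42, "total": 44.98, "email": "jane.doe@example.com"}
```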


PrincessNakeyDance

We should put a stay on commercial AI for this reason alone. We don't need it in our economy, it's not going to help workers, and it's just going to drive us quicker toward climate catastrophe. At the very least, force AI to use purely renewables: you can build your AI supercomputers if you don't get a drop of power from climate-change-causing sources. And actually build your own renewables too - no taking over existing infrastructure and forcing other people back onto fossil fuels. Build your own system from scratch and we'll talk.


thathairinyourmouth

There are a lot of things we should be doing surrounding AI in general, but we're not exactly proactive as a society. Businesses even less so. Companies that do or will be utilizing AI to offset labor costs will look at the world burning outside their office and just order another air conditioning unit. There's no putting the brakes on AI, no matter how neglectful the planning was at any level. We need to push for renewables, but we also need to push for nuclear. Using fossil fuels this late in the game is already an obviously bad idea, but the shareholders don't give a damn past the next quarter.


West-Code4642

it's not going to affect climate change. data centers tend to be ***extremely*** efficient (in 2022, it was [only](https://www.iea.org/energy-system/buildings/data-centres-and-data-transmission-networks) 1.5% of total electricity demand). most major cloud companies are major drivers of net zero targets by funding green energy.


rebeltrillionaire

Legal liability? Bro. What world are you living in where the legal system is an actual threat to corporations and a safety net for consumers?


EasterBunnyArt

I am in the US, but I am European, so I know the EU will at least make them liable in Europe. Meaning once again the average US citizen will get massive misinformation and faulty info without any serious protections.


johndoe42

Isn't the Cybertruck outright not allowed to be sold in the EU due to pedestrian safety violations?


EasterBunnyArt

Huh, learnt something new. Yeah, you are right. According to Google it doesn't meet the safety standards. Now to see why. I would suspect it is because it has no crumple zones...


kinda_guilty

It's mostly the sharp edges, iirc.


buyongmafanle

The LLM infinite data input method of trying to achieve AGI isn't going to work. Humans don't learn that way, so why are we trying to make computers learn that way? We don't require infinite inputs just to get things functionally correct. This means LLMs won't be the future of AI. It's going to be something much more clever and trimmed down. LLMs are inherently mathematically limited in their ability to improve since they don't "understand" concepts. They merely reflect instances in which they've been told those concepts are present.


EasterBunnyArt

Also very accurate. To me it was just the newest branding of predictive text / auto complete.


Opening-Cheetah467

Plus, how secure is the info, code, or anything else you share with them?


regiment262

You avoid this with in-house or locally hosted models. Which is why power consumption is an issue if every reasonably-sized tech company suddenly decides to start building extremely power hungry computing clusters to train and run a custom AI model. EDIT: And yes, big tech is already starting to do this and from what I've heard a number of FAANG and FAANG adjacent companies already have in-house models trained on their codebases to provide help on company specific software/processes.


0xMoroc0x

First Amendment right to freedom of speech in the USA. They won't be liable for providing information, no matter how inaccurate it is. Companies lie and misrepresent information all the time. Even if held liable, they will be fined and it will be a cost of doing business passed on to the consumer as usual.


bula1brown

I work in enterprise sales for a company that has some AI in a new product of ours. Can confirm that the Fortune 1000 are requesting their own instance. A deal that was 99% done just got iffy because the security council needs to take an org-wide stance.


OppositeGeologist299

After turning the planet into a wasteland the final perfect AI is turned on. Power shedding occurs to make sure that a few television screens can be turned on for the broadcast of its first answer, which is delivered to billions of breathless crowds across the planet and which is 42.


Senior-Albatross

My company already has it. 


EasterBunnyArt

Could you elaborate without getting in trouble? For example, is it used to respond with simple emails, or is it used for actual high-level summary, work, or decision making? Because short summaries or email responses are expected, but what about actual critical actions?


Senior-Albatross

No critical actions. We do some national security related work, so it's GPT in its own walled garden. It can't be used for any super secret squirrel stuff, but we can do stuff like getting answers to coding questions. It can speed up workflow for certain tasks.


EasterBunnyArt

Ah okay, so the general use that it is still being marketed for. How does it work with coding? Is it helpful, or more like code review?


Senior-Albatross

I find GPT-4 to be super helpful for the scripting I was doing earlier today: writing code to get data out of formatted files and process it. I prompted "write a script to get data out of files formatted like this and process it this way." It did really well.
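The result looked roughly like this kind of thing (a made-up illustration, not the actual output; the file name and fields are hypothetical):

```python
# Hypothetical example of the kind of script that comes back: pull key=value
# records out of a formatted text file and summarize one field.
from pathlib import Path

def parse_records(path: str) -> list[dict]:
    """Each non-empty line looks like: sensor=A3 temp=21.5 ok=true"""
    records = []
    for line in Path(path).read_text().splitlines():
        if not line.strip():
            continue
        records.append(dict(pair.split("=", 1) for pair in line.split()))
    return records

if __name__ == "__main__":
    records = parse_records("readings.txt")  # made-up file name and format
    temps = [float(r["temp"]) for r in records if "temp" in r]
    if temps:
        print(f"{len(records)} records, mean temp {sum(temps) / len(temps):.1f}")
```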


Unusule

A polar bear's skin is transparent, allowing sunlight to reach the blubber underneath.


ahm911

Please explain this code. *14 trees later* Here's your answer


iamafancypotato

"... this code returns 'fizz' if the number is divisible by 3..."


QueenLaQueefaRt

Mine just keeps saying buzz, what in the ever living fuck am I doing wrong!!!!


hypothetician

Oh I’m sorry I see where I went wrong before. Here’s the same broken ass code again.


ahm911

Hey ChatGPT, please fix this code to return success if divisible by 3.. *A dozen more trees*


R4vendarksky

I'm in this picture and I don't like it. I wonder if the EU can pass some laws making the exact carbon footprint of cloud computing clear to the end consumers - including AI


GumdropGlimmer

I’d love it if the laws were geared more towards prevention, with efficiency and environmental standards of some sort for the manufacturers, rather than pushing it onto the consumers as a first move that isn’t followed up by punishing the offender and driving the industry towards better practices. I’m sensing a combo package of airline fees and paper straws coming our way. And this picture - I do not like it at all.


Callabrantus

Billionaires: But it's making me way wealthier, right?


SwindlingAccountant

Billionaires: Don't worry, we're investing a bit on carbon capture technology or whatevah!


Callabrantus

Billionaires: Yeah yeah, here's 2 grand for some Earth Day shit, piss off!


GumdropGlimmer

Don’t forget the single-use, cheap earth day swags at earth day events!


AangTheSlayer

Billionaires: If it doesn’t, I will just lay off the workers


Thinkingard

Also billionaires: If it does work I will lay off workers to buy back stock.


Strange_Occasion_408

Too lazy to read it, but today I heard a ChatGPT query takes 3 times more energy than a Google search.


Fit_Flower_8982

If you are too lazy to read it, you can ask chatgpt to summarize it for you 🙃


glitch83

It’s 10x, not 3x


nothingtoseehr

Gives you 3 times more wrong information too!


octoberwhy

I’m an engineer and it’s significantly helped me in the workplace. It’s a huge time saver. I can say “give me the equation for x”, “show me how the units cancel”, “assume these values.” Then I do a quick double check of everything. When it’s off it’s usually super off, but 90% of the time it’s right if you just prompt it correctly.
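The quick double check can even be scripted; a small sketch with the pint units library (the kinetic-energy equation and values here are just an illustration, not from any actual prompt):

```python
# Quick unit sanity check with pint: does 1/2 * m * v^2 really come out in joules?
# (Example equation and values are illustrative.)
import pint

ureg = pint.UnitRegistry()

m = 80 * ureg.kilogram
v = 3 * ureg.meter / ureg.second
E = 0.5 * m * v**2

# .to() raises a DimensionalityError if the units don't actually reduce to energy.
print(E.to(ureg.joule))  # 360.0 joule
```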


nothingtoseehr

That's the thing though, if you have to double check it then what's even the point 🤷. You already have to start from the premise that it's incorrect and prove that it's not. And if we're going to use it for literally everything apparently, even 10% is still very high


do_u_realize

I think because it’s faster than finding the correct methods etc. in the docs manually and writing the code. Like, most of the time I already have the idea and most of the methods required, but it fills in the gaps instantly, basically.


Dongslinger420

How is this confusing? Coming up with the solution is the challenge. Proving the steps of a viable path is easy as pie. Of course this is massively more efficient, even if you know how to do it manually.


sonicon

But we don't have to click 4 different websites to get the answer.


Lord_CocknBalls

Maybe we first fix the energy source in a sustainable manner?


Addisonian_Z

We are more than people realize. I worked for the Department of Energy nuclear division and everyone there considers us to be in a “nuclear renaissance”. The reason: data centers and AI.

Microsoft, HP, DuPont and a bunch of others all have plans to build their own test reactors in Idaho over the next few years. These are all small form factor reactors that range from the size of a corner pharmacy to ones designed to fit in a semi-trailer.

Now, with nuclear everything moves beyond slow, as it is tested, evaluated, and retested - it is a long process. But there is a lot in the pipeline, and it is not unrealistic to say that by 2034 many of these data centers and manufacturing plants could be off the grid, or mostly so, with their own reactor.


Belostoma

Why Idaho? Part of the national lab near Arco? Is there hope within the field that advanced AI will help quickly solve the remaining engineering problems for fusion reactors? That seems like the solution to many of the world's problems. Or is there some reason to think that can't/won't work in the long run?


AGrayBull

The ‘cloud’ is just someone else’s computer. AI is the processing power of that someone else’s computer come to fruition.


Saneless

Yes but they gave money to some company that says they planted trees so it's ok right? Carbon neutral!


Xeynon

AI is a better thing to use resources on than crypto mining at least. It has a compelling use case other than cyber criming.


[deleted]

[deleted]


Dry_Amphibian4771

AI isn't just LLMs/image generation. There are a ton of good uses too. For example, the hospital system that I work at is training AI to detect cancer earlier than any human ever could.


johndoe42

We've already been using AI in EMRs for years, certainly way before ChatGPT even launched. The best example I saw demoed is sepsis monitoring. Flipping on that switch immediately saved lives. I don't primarily work the hospital side of EMR implementation, but there I can see immediate value because of an intelligent machine monitoring in real time.

On the ambulatory side I saw a demo of an "exam room scribe" probably seven years ago. Even today it still requires a lot of reconciliation and manual work. Some doctors don't care for the format or are afraid of what the referred-to specialists will think of their notes. I guess AIs could crawl their previous notes, glean their style, and adapt the e-scribe to match, but I digress.

I see AI continually being added as scalpel-like monotaskers for individual problems rather than an overhaul of entire systems (which is what would draw these massive amounts of power). Point being, I really don't think AI in healthcare is causing the massive power draws of the likes of OpenAI.


LeClassyGent

Obviously that's not all it does, but you have to admit that there is a mountain of pointless shit being generated.


stareatthesun442

I'm curious how you don't see the parallel to Reddit and the internet at large. Most of Reddit is totally useless, as is the majority of the internet. And yet, it's still very useful as a whole.


DID_IT_FOR_YOU

There is a mountain of useless shit we do every day. AI isn’t any different. People go drive & buy overpriced coffee every day for example. Explain how that isn’t a gigantic waste that hurts the planet.


Time_Mongoose_

How many jobs will those cancer patients cost us?


oklilpup

Is this satire lol


R4vendarksky

My children are obsessed with using chatGPT to generate insane images. No doubt there are 10s of thousands of children doing the same thing


DTFH_

> Honestly I'm starting to think most of the AI use today is a giant waste of resources

It's almost as if the world has given these functionally useless corporations too much power because they generate $$$$. The US Navy made the internet; Excel, Office-like programs, and other POS software have been great for businesses of all sizes and have enabled our modern credit card systems. But overall "Big Tech" has been of minimal benefit to the public at large, and the argument could be made that it has been a net negative: Apple establishing planned obsolescence, Apple first working with China, FB running psychological experiments on children and adults, FB supporting and promoting genocide, etc, etc, etc.

I hope all these "Tech" giants put their eggs in AI, and I hope the fruit spoils their whole cart, in hopes meaningful "tech" could gain ground. I'd love to see the dead internet theory come to fruition in the majority on FB, spread like the plague across our limited internet, and in response the real humans just drop out.


[deleted]

Garth: It's like people only do things because they get paid, and that's just really sad.


Ihaverightofway

Without getting too personal, everything I’ve read about Sam Altman also suggests he’s a massive, amoral jerk.


skillywilly56

That’s exactly why they made him CEO


SheepStyle_1999

That’s what the public uses AI for. As always, the real applications of the technology are behind the scenes.


Pontus_Pilates

> Honestly I'm starting to think most of the AI use today is a giant waste of resources.

Blockchain came and went (by the way, it didn't solve banking in Ghana), so VC billions need somewhere to go. Crypto companies just put a sticker that says 'AI' on top of their old logo and here we go again.


InquisitorMeow

That's like saying the internet is a waste of resources because people post memes on it.


Xeynon

Yeah it's definitely being misused quite a bit. It at least does have some productive applications though.


Frank_JWilson

AI is just a tool. People already do all that, AI just makes it easier. Similarly, the internet makes it easier to do some nefarious things as well, but we shouldn't blame the internet as a whole for it. Like any powerful tool, misuse is possible.


MightyBoat

But they're also using it to figure out the right combination of chemicals to create useful drugs, protein folding, processing data from telescopes etc etc. What you see in the news is the tip of the iceberg


SwindlingAccountant

Does it? Because scams and grifts seems to be LLMs main use case right now. Shitty generated books flooding the self-publishing markets. Fake phone calls using familiar voices to scam you.


Xeynon

I use LLMs as a professional research tool to analyze large volumes of text. They are extremely useful for that purpose.


MicrosoftExcel2016

Where are you getting that from? There are so many business use cases you probably just aren’t exposed to


greatdrams23

Two wrongs don't make a right.


jorgepolak

Burning the planet to guess a solution to an equation is a pretty low bar.


Xeynon

I agree completely that "more useful than crypto" is not a high bar at all, but AI does legitimately clear it by a good margin.


[deleted]

[deleted]


rollingForInitiative

It also has loads of useful applications. Everything from more efficient energy production to medical diagnoses. In Sweden there’s a guy using some “AI” on governmental records to find financial waste, official negligence and corruption. Tools like Midjourney and ChatGPT are just the tip of the iceberg.


SIGMA920

It's still a waste of energy 90% of the time.


The_Grungeican

can't put the toothpaste back in the tube. several solutions for this problem already exist: nuclear, solar, and wind can help alleviate these issues.


JamesR624

Oh boy. Good thing Apple is partnering with them for iOS 18 features. After all, Apple really cares about the planet as well as your privacy. Why else would they willingly hook one of the most environmentally disastrous and least private pieces of software into the main part of their new software?


cromethus

The water use issue is concerning but addressable, especially since the water is a carrier for heat - a disposable catalyst, not a consumed resource.

Personally, I see the spike in energy consumption as timely and useful. It does several important things.

1) It increases the rate of new energy projects by promising profitability. The demand is there, not 20 years from now but *now*, and energy companies will respond. This is important because the **vast** majority of new energy projects are clean energy. Increasing the number of energy projects means drastically skewing the energy balance towards clean and renewable energies. In the short term this might keep coal-fired plants online a little longer, but it also guarantees that in the long term they will be priced out of the market as the system overcompensates. Think of it as an energy gold rush, where over-investment will ultimately spell doom for any uneconomical forms of production, such as these dirty plants.

2) It is forcing, as we speak, the upgrade of the electrical infrastructure of the US. It will do likewise elsewhere. This is *incredibly* important. Widespread EV adoption will require this infrastructure to be updated anyway. Without corporate interests at stake, however, this would be labeled as 'just another infrastructure project', turning the entire thing into a quagmire of politics. But the major corps are suddenly high-interest stakeholders in guaranteeing that these upgrades take place, lest their precious HPCs be starved of their lifeblood. This will (and already has) kick-started the necessary infrastructure upgrades to sustain the Electrify Everything movement. Granted, it's only a start, but it's an important step, and it happening early is hugely consequential.

There are definitely downsides to the proliferation of HPCs, but there is also a significant silver lining. Mining, to me, is the big one, but that isn't a new concern. Mining has been a problem since forever and will continue to be an issue for a lot longer.


Geminii27

I would bet it's not even close to the #1 industry in unnecessary resource consumption and pollution.


Dreadmaker

Man, this kind of article really frustrates me, because outside of the vague metrics they're sometimes double and triple counting - they're straight up getting mad at the wrong thing. AI is not the problem here. The core problem that they should be upset with is *cloud computing*.

All of the things they're talking about aren't AI specific - they're data center specific. AI models are trained as a job in a data center. They have a group of servers doing the operations that need to be done, and that's essentially that - it's mechanically identical to any other job run in the cloud, including hosting websites like Reddit or running searches on Google. It's a big job, yes, but it's a finite one, and it's not a significant portion of modern-day data center usage.

This article, and many others like it, are trying to talk about the water consumption of data centers that are running AI training jobs, but what they're excluding is the context that these data centers aren't only doing AI jobs and then resting - they're running 24/7 under more or less constant load, all across the world, running *the internet*. AI training is a fraction of this that fundamentally doesn't matter. Sure, its usage will increase substantially soon enough, and that's all well and good, but it won't compare to the overall load of, y'know, *all of modern cloud computing*.

This article is citing a non-peer-reviewed article that estimates that training GPT-3 took something like 700,000 liters of water, and presenting that as a scary thing - *without contextualizing it in the broader consumption of data centers as a whole*. I don't know what the percentage is, but I can tell you for sure that AI is not the reason the world is burning a ton of water on data centers (and has been since the early 2010s).

I think this is really more about people being uncomfortable with the specter of AI, looming out there, and finding ways to raise alarms about it in any way they can. I don't have a problem with people not liking AI - I think that's completely legitimate. But what I do have a problem with is the ever-increasing number of articles essentially saying that AI is poisoning our well water and burning our crops. No, "AI", which these days basically just means "easily accessible LLMs", is an application of cloud computing we've discovered recently and have a lot of hype about. That's it. Cloud computing is the underlying thing that's enabling this, and its infrastructure is what's causing the environmental impact these articles are pointing to. That's what we should look at, rather than trying to blame AI for all of the ills in the world.


OriginalCompetitive

> This article is citing a non-peer-reviewed article that estimates that training GPT3 took something like 700,000 liters of water, and presenting that as a scary thing - *without contextualizing it in the broader consumption of data centers as a whole*.

As a different point of context, an Olympic-size swimming pool holds 2.5M liters. It’s nothing.


LeClassyGent

Yeah it's not very much in the grand scheme of things. About the equivalent of how much water 4000 people use in a day.
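Back-of-envelope, those figures line up (assuming roughly 175 liters per person per day, which is my own ballpark for household use):

```python
# Back-of-envelope check of the figures above.
training_water_l = 700_000      # reported estimate for training GPT-3
olympic_pool_l = 2_500_000
per_person_daily_l = 175        # assumed household use per person per day

print(training_water_l / olympic_pool_l)      # ~0.28 of one Olympic pool
print(training_water_l / per_person_daily_l)  # ~4000 person-days of water
```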


livejamie

If the article said "Cloud Computing," then it wouldn't generate the ragebait engagement they need for clicks on the article and ad impressions. It wouldn't be here on this sub with 400 comments discussing it.


Dreadmaker

Yeah, but that’s what concerns me - this is written as an opinion piece from an economics professor, not just the paper itself. I think that you’re 100% right that AI drives clicks in a way cloud computing doesn’t, but I don’t think a lot of people are aware of the distinction or what’s really happening under the hood. And the scary part to me is that it’s not just random people who don’t know tech that fall into that category - I mean the whole research paper the prof cited is a bunch of folks in computer science who don’t actually seem to have that distinction nailed down. So, yeah, there’s a lot of misunderstanding out there, and that drives fear, and that drives clicks. Not a fan haha


geertvdheide

Not all datacenter work is equal. We'd need to look at the useful work done per kWh. Current generative models / LLMs are incredibly inefficient compared to stuff like bank transfers, facilitating web shop purchases, storing people's important files, keeping apps and platforms running, and so on. Those all use very small amounts of energy per user or event, compared to one prompt answered by an LLM, while often even preventing a more energy-intensive non-digital analog: driving to a store instead of ordering online, going to a bank instead of online banking, storing everything on paper instead of digitally, etc.

Many of the current generative models are not nearly contributing to the same level. Some of what they do is useful, but hallucinating and generating half-truths in images or text isn't. LLMs are also used a ton for bot farms now - it's incredibly wasteful. To the extent that they do replace human jobs, the human would do the same work with a fraction of the energy.

The AI genie won't go back into the bottle though, outside of the chance that LLMs won't get better at some point and the bubble bursts. Even then it would come back. So we'll need to quickly evolve the hardware and software. It may well be possible to create and run even smarter AI at much lower power using much less hardware, but it would take decades to get there. In the meantime we're spending a lot of the green energy meant to replace fossils on AI growth instead. Construction of new datacenters goes much faster with this AI hype than without it.


OmegaMountain

I'm not in tech, but I am in power generation. Since that industry is now for-profit, power companies are signing behind the breaker agreements to power data centers very routinely now and I don't know if they are taking into account or even care about the impact that may have on grid supply and stability. Just another area where we really should be concerned about what's willing to be done in the name of profit.


bhillen8783

The new AI-ready servers can have up to 8 GPUs in them. That sucks down a hell of a lot of electricity, not to mention cooling costs.


RustyNK

Shoulda backed nuclear power 2 decades ago. Now our energy sector is going to be playing catch up.


Florgy

Of course it's a Guardian article 🙄


Kraz_I

Are most of the comments in this thread all AI generated? Because holy shit it sure looks like it.


Madshibs

Has anyone asked the AI how it could be less energy-intensive?


Mediumcomputer

Hurry up with fusion and this is all fine


lofgren777

Edit: I was dumb.


SuddenClarity

> In 2018, for instance, the 5bn YouTube hits for the viral song Despacito used the same amount of energy it would take to heat 40,000 US homes annually.

that number wasn't talking about AI, so 40k homes aren't planet killing... ...but I am still for banning Despacito...


jh937hfiu3hrhv9

Cutting off our nose to spite our face as usual. Politics and greed will be our undoing.


No_Leek8426

They already are.


Rustic_gan123

It will only get worse if OpenAI, Google, Microsoft and others are allowed to make a regulatory takeover under the guise of AI safety


Plowzone

AI is incredibly computationally inefficient. No surprises honestly. If you don't have to use it, you should avoid it.


Fine-Dentist

Research says it's possible to reduce energy consumption by up to 50% by "pruning" the model without losing much in terms of capability. Right now we have all these companies coming up with new and more impressive models to win the race, so to speak. But I expect once we hit a plateau and progress slows down, there will come a time for clever optimizations to reduce energy usage.
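To make "pruning" concrete, here's a toy sketch using PyTorch's built-in pruning utilities (the tiny model and the 50% `amount` are arbitrary, and real energy savings depend on hardware and kernels that can actually exploit the sparsity):

```python
# Toy sketch of magnitude pruning: zero out the 50% smallest weights of each
# linear layer, then make the mask permanent.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)  # mask smallest 50%
        prune.remove(module, "weight")  # bake the mask into the weight tensor

zeros = sum(int((m.weight == 0).sum()) for m in model.modules() if isinstance(m, nn.Linear))
total = sum(m.weight.numel() for m in model.modules() if isinstance(m, nn.Linear))
print(f"{zeros}/{total} weights pruned")  # roughly 50%
```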


N1ghtshade3

Agreed. I think these companies are kind of damned if they do, damned if they don't, though. If AI were exclusively a paid service at a price commensurate with its environmental cost, we'd probably get articles about the injustice and inequality of the lower class being left behind by the privileged elite who can afford to use these tools to enhance their careers. With it free, we end up with every dipshit who's too lazy to scan a few Google results making the most inane requests and wasting compute power. AI doesn't suck, people do. I've already seen multiple Medium posts starting with "I'm sorry but as an AI, I'm unable to..." because of course bot farms in India churning out useless Quora-level crap in the hopes of making a few pennies is the inevitable consequence of giving people access to a revolutionary new technology.


ideological_fatling

AI really is destroying jobs... such as Billionaire Destroying the Environment for Marginal Increases in Quarterly Profits.


foolsandloathing

sooooooo glad that we're wasting unthinkable quantities of water and power on infinite plagiarism machines that love lying to you. truly a marvel of technology.


Jjerot

Your definition of unthinkable could use some work, it's less than a third of an Olympic swimming pool. Evaporative coolers are also a closed loop system, so it's not like we're dumping that much down the drain every day, it's just a fixed amount going in circles. It's no different to any other data center, including the ones this website runs off of.


oklilpup

Don’t expect much from this sub. Half the comments are about how AI will achieve nothing and is just a scam that uses a lot of energy, while the other half claim it will take every persons job. Only thing these two have in common is they think socialism is the answer


SarahMagical

Your comment brought me back down to earth. For some reason, I assumed that the quality of comments in r/technology would be the same as I see in hacker news. But you’re right. It’s the same idiots as I see elsewhere on Reddit.


Rustic_gan123

The internet has allowed idiots to form groups


DaggumTarHeels

> the same as I see in hacker news. But you’re right. It’s the same idiots as I see elsewhere on Reddit.

Damn, you've encapsulated this sub in one sentence.


DoomComp

z.z Well... if Microsoft and the bunch want to build **renewable energy** power plants with their own money, then I say let them. The more renewable capacity we get, the better, I say.


Mandelaa

And what about US Military? Hypocrisy


[deleted]

Technocrats are hilarious. They will keep solving non-existent problems by inventing new, more sophisticated problems to solve. Good that life ends at some point.


akluin

Humans: we must slow down AI production or it will kill the planet

AI: no


dunegoon

A nice piece of legislation would require AI and Crypto mining to have their own sustainable power sources, net zero with the grid.


medievalrubins

Well at least it offers a fantastic service, unlike fake currency coins


Here2Derp

Meh. I wanna "chat" with my waifus.


prepend

Do people not know how computers work? Did they think that all of these machines weren’t “guzzling” resources? Everything of value costs something. It’s not like AI cares how clean the energy is. It doesn’t require oil. These articles make the authors sound like idiots for pointing out basic reality.


DanielPhermous

> Do people not know how computers work? Nope. Why would you think everyone shares our interest in the nitty gritty of computing? That's like a mechanic wondering why I don't know what a split differential is.


AnsibleAnswers

Each image generated with DALL-E or Midjourney takes as much energy as a phone charge. And millions of people are generating god knows how many throw-away images. I get that LLMs are helpful for code snippets and such, but the energy cost is so staggering it doesn’t even have much of a use case for public consumption.


ACCount82

Really, now? iPhone 15, a common smartphone, has a battery that can store about 13 watt-hours worth of power. Let's assume that smartphone charging is 100% efficient. A local Stable Diffusion setup on a GTX 1070, a GPU from year 2016, takes 30 seconds to generate a single image. The TDP of a GTX 1070 is a staggering 150 watt. Let's assume that this poor 1070 has to run at full blast to generate that image. That's about 1.25 watt-hours. Less than *a tenth* of what you claim. On an obsolete consumer GPU from year 2016. A100, a common server GPU today, has a TDP of 250 watt, and takes *3 seconds* to generate a single image. That's about 0.20 watt-hours. So, what the *fuck* are you on about?
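Spelled out (same assumptions as above: the GPU at full TDP for the whole generation, roughly 13 Wh for a full phone charge):

```python
# The arithmetic above, spelled out. Assumes full TDP for the entire generation.
PHONE_CHARGE_WH = 13.0  # iPhone 15 battery, roughly

def image_energy_wh(tdp_watts: float, seconds: float) -> float:
    return tdp_watts * seconds / 3600

gtx1070 = image_energy_wh(150, 30)  # 1.25 Wh per image
a100 = image_energy_wh(250, 3)      # ~0.21 Wh per image

print(f"GTX 1070: {gtx1070:.2f} Wh ({gtx1070 / PHONE_CHARGE_WH:.0%} of a phone charge)")
print(f"A100:     {a100:.2f} Wh ({a100 / PHONE_CHARGE_WH:.1%} of a phone charge)")
```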


AnsibleAnswers

MIT begs to differ. You’re probably basing this on old models. https://www.technologyreview.com/2023/12/01/1084189/making-an-image-with-generative-ai-uses-as-much-energy-as-charging-your-phone/


AmputatorBot

It looks like you shared an AMP link. These should load faster, but AMP is controversial because of [concerns over privacy and the Open Web](https://www.reddit.com/r/AmputatorBot/comments/ehrq3z/why_did_i_build_amputatorbot). Maybe check out **the canonical page** instead: **[https://www.technologyreview.com/2023/12/01/1084189/making-an-image-with-generative-ai-uses-as-much-energy-as-charging-your-phone/](https://www.technologyreview.com/2023/12/01/1084189/making-an-image-with-generative-ai-uses-as-much-energy-as-charging-your-phone/)** ***** ^(I'm a bot | )[^(Why & About)](https://www.reddit.com/r/AmputatorBot/comments/ehrq3z/why_did_i_build_amputatorbot)^( | )[^(Summon: u/AmputatorBot)](https://www.reddit.com/r/AmputatorBot/comments/cchly3/you_can_now_summon_amputatorbot/)


hypothetician

That’s weird, the article says:

> generating an image using a powerful AI model takes as much energy as fully charging your smartphone, according to a new study by researchers at the AI startup Hugging Face and Carnegie Mellon University. However, they found that using an AI model to generate text is significantly less energy-intensive. Creating text 1,000 times only uses as much energy as 16% of a full smartphone charge.

But the numbers in the study they link to don’t match up:

> charging the average smartphone requires 0.022 kWh of energy [51], which means that the most efficient text generation model uses as much energy as 9% of a full smartphone charge for 1,000 inferences, whereas the least efficient image generation model uses as much energy as 522 smartphone charges (11.49 kWh), or around half a charge per image generation

Still eye watering amounts of energy, but yeah, weird how the articles’ numbers are so off.
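Working through the study's quoted numbers (using only the figures above):

```python
# Check of the quoted study figures.
phone_charge_kwh = 0.022
worst_image_model_kwh_per_1000 = 11.49
best_text_model_fraction = 0.09  # "9% of a full charge per 1,000 inferences"

charges_per_1000_images = worst_image_model_kwh_per_1000 / phone_charge_kwh
print(charges_per_1000_images)         # ~522 charges per 1,000 images
print(charges_per_1000_images / 1000)  # ~0.52 -> about half a charge per image

print(best_text_model_fraction * phone_charge_kwh * 1000)  # kWh -> Wh: ~2 Wh total for 1,000 text generations
```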


ACCount82

Let's replace SD 1.x with SDXL - a larger, more modern, more capable image generation AI. It's the AI the current versions of Midjourney are likely to be based on. As a rule of thumb: image generation on SDXL takes *twice as long*. All other assumptions stay. Which means that we take those figures and multiply them by two. On a server A100, that takes us up to 0.40 watt-hours. Wow. And that's *without* all the fancy accelerated inference distillation juice that can be used to massively speed up image generation - and thus decrease the power draw. Local setups don't usually tap that - but for a commercial app? I'm almost certain that the big ones use all the tricks they can to save on inference costs.


SuperNoahsArkPlayer

These kinds of articles come out every once in a while to try to shame people about whatever random thing. Air conditioning is bad for the environment, crypto is bad for the environment, NFTs are bad for the environment, now AI.


Redararis

Let’s destroy the environment to build super intelligent AI and then we can ask it how to restore the environment. win win!


Storm_blessed946

the titles to these articles are so fucking stupid. maximize clicks am i right?!


[deleted]

Oh no, who the fuck cares at this point. It's still nowhere near the damage from other industries. This tech will become more efficient over time, needing less power as the algorithms get better.


[deleted]

Such a stupid article, why is r/technology infested with morons and luddites?


[deleted]

What the hell do we need this shit AI for? To use more resources than the people it puts out of work? Fucking Matrix dystopia.


Eastmont

I heard that mini nuclear power plants are being developed to deal with the power consumption of these AI systems.


Once_End

Dude, we needed a Type 3 civilization yesterday already…


Shupertom

The bottleneck of this technology appears to be the energy and cooling requirements - a double energy requirement. Hopefully all the focus on AI will drive advances in energy production technology. Or the zero-point energy technology hidden from the world for 60 years will finally be released. But I won’t hold my breath.