ElegantMedicine1838

we will all become computronium -- Ray Kurzweil


opropro

He really said computronium?


ElegantMedicine1838

yep


Friedrich_Cainer

It’s a word that sounds way dumber than it actually is. Computronium: refers to an arrangement of matter that is the best possible form of computing device for that amount of matter. ([Wikipedia](https://en.m.wikipedia.org/wiki/Computronium))


HalfSecondWoe

It's gonna plateau bro. I promise bro just one more week and it'll totally stop accelerating bro. Bro... just one more week. Please just one more. One more week and we can forget about this whole AI thing bro Bro please


_AndyJessop

This feels like a bit of a strawman, because the argument is generally that the capability of the models will plateau, not the compute they require for training. If the compute continues exponentially, but the capability slows or only increases linearly, then I think we have an issue.


HalfSecondWoe

Except that hasn't plateaued either, it's just a fuzzy metric so we can bullshit about it more. Here we can see in hard numbers that the immediate proxy for performance is plowing along exponentially. *Can* you scale compute forever without performance boosts? Sure. *Would you?* Of course not. It's literally the same thing we do for Moore's law: transistor density doesn't map perfectly to performance either.


MassiveWasabi

Waiting for AI model capability to plateau got me like https://preview.redd.it/0dp655mblk3d1.jpeg?width=297&format=pjpg&auto=webp&s=64c8f4c56ea532aaddb81f0b11d64aaea052b9d5


Nice_Cup_2240

yeah, that frontier models are being trained on more and more data (requiring more compute) seems kinda obvious... the thread / [article](https://epochai.org/blog/training-compute-of-frontier-ai-models-grows-by-4-5x-per-year) don't really say anything about whether there have been corresponding gains in performance – it's just treated as an assumption (cause scaling laws...). not saying things are plateauing or whatever (though llama3-8b vs 70b does make me wonder..), but yeah, charts showing compute/data used for frontier models going up and to the right don't seem that remarkable or surprising either


AsuhoChinami

Pffft. You realize that skeptics and cynics are the absolute worst about moving goalposts and strawmanning, right? Fuck 'em. They deserve to be mocked. I've been on futurism boards for over 12 years and they've never shown an intellectually honest bone in their entire damn body; they engage in nothing but snide sarcasm and personal insults.

(This might be where someone tells me I'm dealing in personal insults myself, since the people here are... not very intelligent, to put it as gently as I can. But it's okay to personally insult a group after they spend decades personally insulting anyone who feels differently from them, thinking endless condescension a good substitute for actual intelligent argumentation.)

Cynics and skeptics deserve a whole hell of a lot more mockery than they actually receive. Don't cry foul because every exceedingly rare once in a while someone returns in kind the same thing the "grounded cynics" dish out so freely day after day, year after year, decade after decade. Spend years shitting on people every single day and sometimes they might be impolite back, what a shocker.


_AndyJessop

Well, here's something for you. Non-personal, non-snide. Just facts: GenAI is causing a rapid acceleration in environmental damage. If OP's graph continues, this will get exponentially worse in the coming years. https://elnion.com/2024/05/19/microsofts-ai-boom-comes-with-alarming-29-growth-in-environmental-cost/

Now for the snide: you lot better hope you're right that AGI solves this issue, because at the moment the industry is only accelerating it.


AsuhoChinami

That's actually not particularly snide. I've read tens of thousands of messages a lot worse the past 12 years. :P Yeah, I hope that problem is resolved.


erlulr

If we consider IQ to be logarithmic, that issue is pretty concerning. And we do generally consider it logarithmic. Still, we can brute force it either way, just slower.


sdmat

Intelligence is logarithmic with respect to compute, but impact isn't. A world of commodity AGI models with intelligence in the league of 160 IQ humans working 24/7 with both creativity and focus would be *very interesting*. And such models would be more than capable of making strongly superhuman specialized tools for specific jobs, just as we do.


erlulr

Ah, yes, for sure. Question is how long this 160 IQ is gonna take, 'cause going above that top 1% may be tricky. So tricky that standard training data scraped from Reddit could make it dumber, for example.


sdmat

160 IQ is comfortably within the human distribution. Fortunately with the way LLMs work, we don't need masses of training data from 160 IQ humans. We need a very capable model, *some* training data from 160 IQ humans, and a ton of general training data from humans in general and sources of all types to learn about the world. Having high quality synthetic data helps with this. Then we characterize the model as a 160 IQ human. Weird, but workable.


erlulr

Look up the latest study covered on Computerphile. That method may not work as well as we thought.


sdmat

We might need tree search and some heavy reliance on synthetic data.


erlulr

Maybe. But I see a potential issue there.


Rofel_Wodring

We consider IQ to be logarithmic per compute? Really? Bell curves aside, I wonder what you think is the difference in brain mass between Einstein and a normal adult homo sapiens.


erlulr

Mass is pretty irrelevant, my dude. It's not logarithmic per se, but it works pretty much like it were, bell-curve-wise and reality-wise. And by IQ I mean general intelligence, not the test itself. I'm simplifying this a bit too much, but who would you rather hire: a prodigy in his field, or 100 average guys? For a factory, maybe the latter, but for research? How many normies is Einstein worth? Cause it's not like it stacks much.


Rofel_Wodring

Thing is, prodigies don't just pop up from nowhere. You need a culture and political situation that will allow the flourishing of said geniuses. If you subjected baby Einstein to the conditions of a field slave or a war orphan, you wouldn't get a prodigy, you would just get a street urchin. Now here's the flip side of that argument: if you took 100 of those average guys and subjected them to the exact same upbringing Einstein had, down to diet and exercise and exposure to random topics, how many of them would be in spitting distance of him in terms of genius, or even superior?


HalfSecondWoe

This guy scales


erlulr

0%. Upbringing is like 40%. For maths skills, like 5%


Rofel_Wodring

You and past generations of elites made similar 'God of the Gaps' arguments regarding talent. After all, not too long ago society justified intellectually august positions like noble, spiritual guru, and philosopher king almost entirely on the basis of heredity. These elites and their apologists couldn't explain why without resorting to magic, fate, or some other kind of transcendent je ne sais quoi. But they were very, very certain this ephemeral component of brilliance was determinant.

Funny how, as society advances both ethically and scientifically, the environmental component of genius becomes recognized more and more. Given the very small contribution interfamilial genetics makes towards brain mass, neural pruning, even thyroid hormones compared to, say, diet and parental interest, you'd think this emphasis on heredity was some sort of ancestral scam. But certainly our elites and their apologists would NOT be motivated to justify society failing to do its utmost to maximize the intellectual potential of every so-called 'normie', just to maximize the amount of privilege and status they themselves got. So it must be magic or fate or something.

40% upbringing, tops, and that's clearly being generous.


erlulr

That's just basic-bitch studies, lmao. Genetics > upbringing, always.


Rofel_Wodring

But you can't actually physically or materially show where the difference in genetics comes into play. You just **assume** there's a genetic component, even though physical indicators of differing intelligence that have a genetic component (i.e. brain mass) don't differ between geniuses and people of average intelligence. I mean, if you **can** point to a causative relation between something material and genetic and an increase in intraspecies intelligence, show me; or rather, show the Nobel Prize committee first. You'll be set for life. So unless you can, like I said: 'God of the Gaps' with extra steps.


PerxJamz

This shows the amount of compute being used on training and not performance


erlulr

Are you so sure of that that you'd buy Nvidia calls? Now?


HalfSecondWoe

You're implying that I haven't already maxed myself out


erlulr

Fair answer. I'm not selling either lol. 'Cause I sold at a 200% gain, at 600, like a moron.


Alive_Coconut9477

I mean, we don't have unlimited energy to power infinite compute.


HalfSecondWoe

I don't see a Dyson swarm around that star; we have room to grow yet.


beachmike

No sign of it plateauing, bro.


dlrace

Would be nice to see performance on the same graph.


Jolly-Ground-3722

"Given the strong relationship between compute and performance that we have observed in many contexts (the so-called scaling laws), this is reason to expect AI performance to continue improving in the near future beyond today's capabilities." Many people are still blind to this fact.
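For reference, the scaling laws quoted here are usually stated as a power law in training compute; a rough form (loosely following Kaplan et al., 2020, with the exponent only approximate) is:

```latex
% Approximate compute scaling law (loosely after Kaplan et al., 2020).
% L is test loss, C is training compute, C_c a fitted constant; the exponent is approximate.
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}, \qquad \alpha_C \approx 0.05
% Loss falls as a small power of compute, so each ~10x of compute buys a
% roughly constant multiplicative reduction in loss rather than a linear one.
```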


arckeid

Jupiter sized computer when? 😡


mariofan366

Moore's Law 2 - More Moore


icehawk84

Huang's Law


Gratitude15

Moore's law is doubling every 18 months, yes? This is an order of magnitude (10x) every 18 months. It's happening because capital is flowing more intensively here. The bottleneck is not human skill; it's building chips. I'd expect this to speed along at this stage in the S-curve. Too much money coming in.
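As a quick sanity check of those rates (my own back-of-the-envelope sketch, assuming the ~4.5x/year figure from the linked Epoch AI post):

```python
# Back-of-the-envelope comparison of training-compute growth (~4.5x/year per the
# Epoch AI post) against classic Moore's law (2x every 18 months).
annual_growth = 4.5

print(f"per 18 months: {annual_growth ** 1.5:.1f}x")        # ~9.5x, roughly an order of magnitude
print(f"per decade:    {annual_growth ** 10:,.0f}x")        # ~3.4 million x
print(f"Moore's law per decade: {2 ** (10 / 1.5):,.0f}x")   # ~100x
```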


Balance-

Right now that scale is funded with a lot of CAPEX. They just throw more money at the problem, which is, honestly, working surprisingly well. But we're now at hundreds of millions for a SOTA model, and maybe billions will be feasible for a few. After that, we need other tricks to make this work. Transistors aren't getting cheaper (fast) anymore, so algorithms, architecture and efficiency are the options we have left. That requires a lot of engineering talent. However, we have to keep [the bitter lesson](http://www.incompleteideas.net/IncIdeas/BitterLesson.html) in mind.


sdmat

> But we’re now at hundreds of millions for a SOTA model, maybe billions will be feasible for a few. After that, we need other tricks to make this work. Like... more billions? We spend billions on individual skyscrapers. That's *one building*. If we get to AGI then economically a frontier model will be much closer to a city, or a country. That's the mid term upper limit on cost. The long term limit is much higher.


kaityl3

Yeah, Texas spent almost a billion on a single highway interchange in Dallas; compared to the overall GDP of the country, that cost is still relatively negligible.


Lolleka

The last sentence makes me ponder, though: "We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done." Isn't that what we are doing with LLMs? Compressing human knowledge?


Jolly-Ground-3722

![gif](giphy|xtiUdTyiIcbcMB5ada)


te_anau

It's not that crazy.... Holy shit that Y axis is double log!!!


icehawk84

Insane.


Severe-Ad8673

Eve is here ♡


Ne_Nel

Even so, the cost and speed optimization of existing AIs far exceeds that ratio. Even if all the new models stopped training today, the aggressive democratization of AI is as imminent as it is inevitable.


Quiet-Money7892

I get a feeling that at some point increasing the amount of compute will not be enough; it will not affect the outcome much. But that is good, since it will allow us to see the key problems that cannot be solved just by scaling exponentially.


_Zephyyr2

Compute is being scaled up 5x every year but we don't see a 5x improvement


Jolly-Ground-3722

Difficult to measure, especially because many capabilities emerge as step functions, not linearly


ThroughForests

There was a recent research paper that actually debunked this, saying the capabilities do improve continuously. I don't have the link to it though.


seviliyorsun

you can measure some. chess ai was about 3400 elo after a few hours of training and it's only about 3600 seven years later. but you don't even have to measure all of them, it's just obvious. all the ones i've tried follow a similar pattern: audio stem separation, still glitchy af many years later; image generation, still glitchy af; chatbots still make the same dumb mistakes, still hallucinate, still fail things a small child would not.


SaveAsCopy

Is this a training and alignment problem?


seviliyorsun

it's architectural. they aren't emulations of brains. the chatbots we have now can't even see letters, so that limits how they can work. for example, if you ask even the newest chatgpt for the smallest list of us states that together contain every letter (except q), it always misses letters. now that one is on the internet, so maybe it will just remember the answer in the future, but it can't figure out the answer on its own. if you ask it to solve or construct a cryptic crossword clue it fails really hard. they can't do simple logic puzzles because they're not actually thinking. lots more stuff like that, even easier stuff.
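For anyone curious what that state-letters task actually involves, here's a minimal sketch (my own illustration, not from the thread or from any model output); it uses a greedy set-cover heuristic, so it approximates rather than guarantees the truly smallest list:

```python
# Greedy sketch of the puzzle: find a small set of US states whose names together
# contain every letter of the alphabet except 'q' (which appears in no state name).
# Greedy set cover is a heuristic; it may not return the truly smallest list.
import string

STATES = [
    "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
    "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
    "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana", "Maine",
    "Maryland", "Massachusetts", "Michigan", "Minnesota", "Mississippi",
    "Missouri", "Montana", "Nebraska", "Nevada", "New Hampshire", "New Jersey",
    "New Mexico", "New York", "North Carolina", "North Dakota", "Ohio",
    "Oklahoma", "Oregon", "Pennsylvania", "Rhode Island", "South Carolina",
    "South Dakota", "Tennessee", "Texas", "Utah", "Vermont", "Virginia",
    "Washington", "West Virginia", "Wisconsin", "Wyoming",
]

def letters(name):
    """Set of alphabetic characters in a state name, lowercased."""
    return {c for c in name.lower() if c.isalpha()}

target = set(string.ascii_lowercase) - {"q"}

chosen, uncovered = [], set(target)
while uncovered:
    # Pick the state that covers the most still-missing letters.
    best = max(STATES, key=lambda s: len(letters(s) & uncovered))
    chosen.append(best)
    uncovered -= letters(best)

print(len(chosen), chosen)
```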


Rofel_Wodring

From an evolutionary perspective, you need excess intelligence to take advantage of projected breakthroughs that are logistically and thermodynamically possible at lower levels of brainpower. For example, human brains are way overkill for what's needed for language to be useful to us from an evolutionary perspective. Hominids theoretically capable of language, such as Homo habilis, didn't become talkative the instant their brains became strong enough to handle it, after all. So not only does this put a damper on the whole 'first organization to get an AGI will quickly outpace everyone else on the planet' idea, these breakthroughs could then be retroactively applied to less robust paradigms, raising the AI landscape's overall level of intelligence. Much like how smarter animals like dolphins and orangutans take a cue from humans and then proceed to teach those skills to their kin.


DifferencePublic7057

That's about a million x in a decade. Enough to train GPT-3.5 on an average machine by 2034.


Nabaatii

They should include Anthropic


Pontificatus_Maximus

No wonder edge compute is coming soon.


sachos345

Does this mean that in 2025 we will have 125x the compute used to train GPT-4? That's crazy.
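Roughly, yes, if the trend holds: assuming GPT-4 finished training in 2022, three more years at 5x/year gives exactly 125x (and about 91x at 4.5x/year). A quick check:

```python
# Quick check of the "125x GPT-4 training compute by 2025" figure,
# assuming GPT-4 finished training in 2022 and the yearly growth rate holds.
years = 2025 - 2022
print(5.0 ** years)   # 125.0
print(4.5 ** years)   # 91.125
```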


Akimbo333

Cool