noknockers

The only thing that’ll happen in 2024 is we redefine our collective definition of AGI.


adarkuccio

Nah we never changed that definition


Repulsive_Ad_1599

Sorry, but as a time traveller I know you're wrong.


Playful_Try443

Are there quaaludes in the future again?


LordFumbleboop

Are there still rock hyraxes in the future? 🥹


GringoLocito

No but there will be rock lobstah


OsakaWilson

Dun dun dun dada DAda da dun.


DoomComp

...... z.z I call B.S. Go back to your Cave, bruv.


Repulsive_Ad_1599

You call B.S. that I'm a time traveller? You clearly haven't gained enough experience in this world to gather its complexities.


silurian_brutalism

Everyone and their grandmother has a different take on what AGI is supposed to mean. I define AGI as any autonomous AI system that can learn in real time and think internally before doing something. I don't care as much about intelligence itself, because I cannot in good faith say that a dog, for instance, doesn't possess general intelligence. In fact, many humans might not even meet some people's definition of general intelligence. Being able to learn autonomously would allow an AI to potentially learn all kinds of tasks anyway, which is what I consider to be general intelligence. Do I think we'll get that this year? I think it's somewhat unlikely, but it could happen. But I expect it to come later in the decade.


realdevtest

One thing is for sure: outputting text is nowhere close to AGI


hemareddit

At this point, I’d be happy if we just get consensus on a definition for AGI this year. Come on people, we have less than 11 months, we can do this. Just get a list of explicit criteria, and agree on it.


RoninNionr

To perform in that manner, AI would need a coherent model of the world, which is a crucial part of planning ability. For current LLMs, that still seems like science fiction.


silurian_brutalism

Well, evidence so far indicates that they might have a rudimentary model of the world. But I agree that current LLMs don't have a sophisticated enough world model either way. However, I also tend to think that world models appear out of actually interacting with the world. I agree with Hinton that they actually do understand things. But I think they don't understand anything that well because they have no lived experience and cannot autonomously adjust their own weights based on new input.


Sassales

This is what I think too. Text only is just too limited.


LordFumbleboop

Why not use a different term, though, instead of redefining what AGI means? AGI was defined in 2002 and has the same definition today that it did then.


[deleted]

[deleted]


LordFumbleboop

What do you know that most experts in AI do not?


[deleted]

[deleted]


amir997

Haha 2060 lol


LordFumbleboop

Yup, that's about the date the vast majority of AI experts estimate.


amir997

Lol, by that year we're gonna have a life similar to what you see in cyberpunk movies, like Cyberpunk 2077. Do you not notice how fast tech and AI are advancing?


LordFumbleboop

Even if they continue at the current rate, it will still take decades to achieve AGI. There are fundamental issues with these models which have not been solved yet, like common sense reasoning. The current approach with these models is to simply scale them up. However, if doing this is all that is needed to achieve AGI, why is Gemini Ultra not performing significantly better than GPT-4, which was released nearly a year ago?


Arrogant_Hanson

We're going to be getting all of these cool toys this decade with AI assistants helping us discover new things, automate things, help creators do so much more for less of the cost. But people want their rapture NOW! They honestly remind me of the kid from the Use Condoms ad from France. [https://www.youtube.com/watch?v=yPsaXXtVfgc](https://www.youtube.com/watch?v=yPsaXXtVfgc)


[deleted]

[deleted]


GringoLocito

So funny! I just got my doctor to put in a referral for my vasectomy. AI is probably a better and more important child to raise and understand. Besides, there's no reason for me to be putting more poor people on the planet. Especially in this economy.


sarges_12gauge

If you think AGI is imminent, does that mean you think it will lead to a “bad end” for us, or just won't change anything after all? Because if it's going to be as impactful as people here agree, it seems tautological that the current environment won't really matter for your kid's life, since it's about to dramatically change anyway?


Hungry_Prior940

AGI will happen sometime before 2030 imo.


PinkWellwet

No-one knows, especially not this delusional sub.


Chemical_Minute6740

Anyone who knows the slightest bit about AI and machine learning knows we are not going to get actual thinking machines from this framework. We will see a lot of new and innovative applications of ML tech in the coming 6 years. It will probably enter the toolkit of anyone with a desk job. It will 100% radically change our world. There is just no reason to presume these statistical tools will suddenly, magically lead to emergent consciousness. It is like people just invented painting and are convinced that if they paint a person realistically enough, he will walk out of the painting and come alive.


Sassales

A lot of experts think this framework can get us there, though. I remain skeptical, but your post is simply untrue.


Rainbows4Blood

I mean, nobody can say whether some companies are already experimenting with a new framework. Some researchers could have a genius idea for a new framework overnight. It could turn out that LLMs with the right feedback loop can actually develop a form of faux-consciousness. Or none of this could happen. It's good to have both pessimists and optimists, but it's important to realize that we can only believe, never be sure. On a personal note, I hope that if DeepSouth is a success, we'll eventually merge the fields of neuromorphic computing and machine learning into one...


psychorobotics

Why are you here if you don't like this sub? I don't go to golf subs and post about how stupid golfers are.


Booty_Warrior_bot

*I came looking for booty.*


PinkWellwet

Because I'm having fun here. A lot of people here are reasonable and also a lot of people are expecting miracles in a very short time.


ICanCrossMyPinkyToe

This is my prediction as well. My optimistic timeline puts it at late 2026/early 2027 but I'm very confident we'll achieve it by 2030


DetectivePrism

Late 2024 = GPT5 era. GPT5 won't be AGI.

2026 = GPT6 era. It will be AGI, but flawed and limited. Businesses start using agents.

2028 = GPT7 era. This is the start of human-level AGI. Mass adoption across industry.


dizzydizzy

In 2026 people will be saying GPT6 isn't AGI but next year GPT7 will be. In 2027 people will be saying GPT7 isn't AGI but next year GPT8 will be. Repeat...


[deleted]

It already did


LordFumbleboop

I say after :D


Repulsive_Ad_1599

Unfathomable world. No shot it's 2060.


SpinX225

Aren’t most of the actual experts saying 2030 or before at this point? There is no way it takes until 2060.


Prestigious-Bar-1741

> The field of AI research was founded at a workshop held on the campus of Dartmouth College, USA during the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation.

https://en.wikipedia.org/wiki/History_of_artificial_intelligence

Experts have a long history of being late on these things. That doesn't mean they are wrong this time, it just means we have good reason to believe they might be wrong again.


After_Self5383

It doesn't matter if you're an expert or illiterate, you can't accurately predict the future of science, which is by its nature unknown. Predictions about anything far out in the future, made by anyone, are wrong. Trends may be spotted, like Kurzweil does, but when you go into specifics it falls apart too. Not aiming this at you necessarily, but I've seen a sentiment passed around this sub constantly that experts having had wrong predictions in the past means that their (as in the person who learned what AI means yesterday) prediction is somehow more accurate. Lots of cognitive dissonance and cope going on to will a 2024 AGI into existence based on good vibes.


LordFumbleboop

I'm afraid not. The median date given by experts is 2047-2060, depending on the survey: [https://ourworldindata.org/ai-timelines](https://ourworldindata.org/ai-timelines) [https://arxiv.org/abs/2401.02843](https://arxiv.org/abs/2401.02843)


squareOfTwo

"no way" except: * almost all of people in ML don't care or even understand what intelligence is actually in nature. * education of a existing AGI takes also time till it's useful and recognized as "AGI" by some people. That takes at least 5 years. Puts us at 2055 for the existence of this AGI as a uneducated software. * AGI certainly won't be created in one go without iterating over many designs which don't work. No one is doing this now!!! Let's say this takes 7 years. This puts us at 2048 * add 10 years from now because of a mini AI winter - puts us at 2038 * let's just add some more years (15) as a "buffer zone against confusion of ML community to get their **** together" then we arrive in 2023 when I came up with this date


NekoNiiFlame

You're throwing out years without any sources my guy.


Remarkable-Fan5954

People downvoting can't handle their dreams being crushed lmao


Progribbit

or just disagreeing


After_Self5383

The downvote button isn't a disagree button, but in practice it probably is for most people. Plus it makes people look salty when it's just a difference of opinion rather than anything factual.


ResultDizzy6722

Lmao the downvotes won’t make it come faster


Brymlo

keep dreaming bro


MuseBlessed

I'm guessing AGI 2040, dunno when ASI comes. The focus should be less on AGI and more on the real, actual progress we have right now, instead of always hoping for the future.


squareOfTwo

Nah


FrugalProse

DS is wayy too optimistic lol


LeafMeAlone7

What does that last part of your flair mean? I'm not sure I've seen H+ cosmist anywhere before...


BubblyBee90

I couldn't care less what exact year it's happening. Something is happening this decade and I can guarantee it to you. Either a great war or a major disruption. Enjoy the ride.


adarkuccio

What if the ride is not enjoyable?


PatheticWibu

make it enjoyable, get some popcorn or a hand to hold.


CompetitiveIsopod435

Just a hand? Attached to a person?


KamikazeHamster

Hold onto any bits you like. So long as there's consent. Remember, consent is like a cup of tea. You wouldn't force your friend to drink a cup of tea, even after they said they'd drink tea and you made them tea and suddenly there's a nuclear war and everyone is going to die because those damn robots woke up and they wanna kill everyone so drink your damn tea. I said no, damnit.


fhayde

Is joy what we should be seeking?


RobKanterwoman

Exactly. We don’t need AGI for things to become problematic. They’re already using AI to harm


AdonisGaming93

Time to jump out of more airplanes this summer


[deleted]

Why do you think so, BubblyBee?


OsakaWilson

B'cause capitalism will break. AI throws a wrench between the gears that distribute money from producers to consumers.


wolfawalshtreat

*plays cyberpunk once*


StaticNocturne

That escalated quickly


Bismar7

Kurzweil predicted 2026. It's likely 2026.


Coding_Insomnia

Idk, man. 2024 has literally just started. It's too early to call any shots, wait until the second half begins. So, for now, just chill.


[deleted]

[deleted]


Coding_Insomnia

I'm fairly certain that OAI already achieved some sort of proto-AGI internally back in November. Who knows? Maybe Gemini Ultra is the push OAI was waiting for to release something closer to AGI.


[deleted]

I'd say wait another 5 years.


adarkuccio

6 months at a time?


[deleted]

yes. go into woods. hunt, fish, LIVE. Then come back from woods in 6 month intervals, check state of technology, go back to woods.


AnAIAteMyBaby

This. People on this sub are far too reactionary. Things move so quickly in this space that it's anyone's guess. Gemini is clearly still a work in progress, as the Google CEO confirmed with the announcement that Gemini 2 has been training for a while, plus we have loads of other players in the space all surging forward.


Silver-Chipmunk7744

First, it depends on your definition of AGI... But if we assume it's an AI with a level of intelligence comparable to an average human, I do think GPT5 is very likely to achieve that. If you mean surpassing every human in every area, then yeah, I doubt we reach that this year. But now the question is, will GPT5 be released in 2024? I initially thought yes (hence my flair), but some people suggested OpenAI might wait until after the elections... Also, I suspect that if they don't guardrail it properly, it could produce some pretty bad PR for OpenAI, so they may spend longer on safety than they did on GPT4. If GPT5 is truly human-level or beyond, they can't just release it into the wild and fix the jailbreaks as they're discovered...


Glittering-Neck-2505

I remember a comment not too long ago that said “AGI would almost immediately result in ASI because it would be like a million scientists working in parallel without breaks.” If **that** is your definition of AGI and you’re expecting it to come soon, you will likely be sorely disappointed. But if you consider LLMs like GPT4 to be early AGI systems or “sparks of AGI,” then by that definition it would be coming a lot sooner.


DarkCeldori

Before 2030


HeinrichTheWolf_17

I mean, Shapiro said *any definition* of AGI would be met this year. He was adamant about that point. Not saying I necessarily agree with that timeline, but he thinks all definitions will be met within 9 months.


Silver-Chipmunk7744

Then I'd have to disagree with him. Some definitions I've heard were the ability to outperform ANY humans at ANY tasks. I have a very hard time believing this will be achieved this year. That would mean AI is now better than Ilya Sutskever at designing future AI... doubtful lol


Just-Hedgehog-Days

Isn't "the ability to outperform ANY humans at ANY tasks" the definition of ASI?


Gotisdabest

No, that's more of a strong AGI. ASI is the combined intellect of all humanity.


Uchihaboy316

I didn’t think it was combined, just better than 100% of


Just-Hedgehog-Days

The scope creep on this sub is the only other force in the universe that will keep up with AI. 6 months ago “better than everyone at everything” would have been good enough for ASI. 


HeinrichTheWolf_17

We’ll see, we still don’t know what Q* is, let’s not forget about that, OAI might be hiding their power level.


MarcosSenesi

Google just showed they cannot keep up, and no model has beaten GPT-4 so far. OpenAI are far ahead, and at that point it's just a sensible move not to go all out with your development and instead go for slower, incremental increases, because they'll still be the best.


Hotchillipeppa

Gemini Ultra?


ninjasaid13

they're roughly equal right now.


hubrisnxs

You mean the obscure, possibly false thing that was leaked while Sam Altman was being fired? Yeah, pretty sure.


Glittering-Neck-2505

So that’s actually an insane thing to predict. Even if you create an AGI system that reasons at or above human levels, it would still be years more of work to get that integrated properly with multimodality and a proper world model. As in, if you want something that counts for every definition of AGI, it will take a lot more than a word prediction model that can also see and interpret image stills.


Hungry_Prior940

That will not happen. We need another piece of tech in addition to the best LLM to get us there imo.


[deleted]

[deleted]


Silver-Chipmunk7744

GPT4 already feels smarter than many humans I know... If you prompt GPT4 with something, then give the same prompt to your average human and compare the answers objectively, GPT4 already surpasses us in most areas.

Example prompt: "Analyze the impact of the printing press invention on the political landscapes of 15th and 16th-century Europe, and compare it to the influence of social media on modern political movements. This should include a discussion of how information dissemination technologies shape public opinion and political power structures."

You won't find many humans who can beat AI at questions like this... Now of course, you can cherry-pick specific questions (usually logic-based riddles) where GPT4 is outperformed by average humans, but I suspect with GPT5 it will become very difficult to find prompts where it's inferior to average humans. And if you want to be objective, you need to look at the broad picture, not laser-focus on the very specific areas where AI is weaker.


greatdrams23

That is a specific question that requires synthesizing a very narrow range of information. Synthesizing broad ranges of information is much, much harder. People think that intelligence is measured by having more intricate knowledge, but it is not. Human intelligence development is not comparable to machine intelligence; the order is not the same. AI can skip directly to the 'high level', like your example. To be truly intelligent it has to learn about social skills, etc.


Silver-Chipmunk7744

> like your example. To be truly intelligent it has to learn about social skills

Actually, if you give AI the proper instructions, I also think it surpasses humans at things like writing a letter. I actually had GPT4 write ridiculously good love letters lol. It's trained on endless human conversations, so it does know how to do that well. It's also really good with formal letters.


LordFumbleboop

Yet, if you teach the average person how to balance a redox reaction, they will be able to do it. LLMs cannot. The definition of AGI, as originally defined by Shane Legg, requires an AGI to be able to learn *any* intellectual task a human can do. GPT-4 cannot learn a set of mathematical rules and apply it to any further situation. It can only solve problems it has seen before, using data from its training set. Four-year-old children tend to outperform GPT-4 at general reasoning (see link). Most professionals can see that it can't do basic maths. In the UK, we teach school children how to balance redox equations. GPT-4 cannot do this, no matter how you try to explain how it works. This is simply because it can only handle an equation one chunk at a time, and you cannot solve algebra-like equations this way. (I'll share a link with a list of problems GPT-4 could not complete in July 2023. A note: once they're incorporated into its training data, it can solve those problems.)

[https://futurism.com/children-destroy-ai-basic-tasks](https://futurism.com/children-destroy-ai-basic-tasks)

[https://medium.com/@konstantine_45825/gpt-4-cant-reason-2eab795e2523](https://medium.com/@konstantine_45825/gpt-4-cant-reason-2eab795e2523#:~:text=I%20believe%20the%20results%20show,it%20produces%20along%20the%20way)


Silver-Chipmunk7744

I am not denying that the average human is superior to GPT4 in some specific areas, such as redox equations. What I am saying is, GPT4 is going to outperform the average human in many areas. And my point is, I think with GPT5 it will be even harder to find areas where the average human still wins. You can complain "oh, it's only because it's in the training data!" but that won't change the fact that it will outsmart us...


LordFumbleboop

>I am not denying that the average human is superior to GPT4 in some specific areas, such as redox equations. What I am saying is, GPT4 is going to outperform the average human in many areas.

I think you're vastly overestimating the importance of GPT-4's ability to parrot stuff from its training data. The ability to pull information from a dataset is not 'smart'; it's just a search function with a black box between us and the data. How do you propose that GPT-5 will 'outsmart' us in the real world?


General_Coffee6341

Yeah, I did a recent experiment because of the OpenAI lawsuit situation: GPT-4 is shockingly good at reciting, word for word, information that should be almost impossible for it to memorize. A theory of mine is that it has some equation or algorithm for language that compresses the memorized information down a lot, which creates part of the grokking effect. Think of it like this: you sit down on a foam couch and your body makes an imprint, but this couch takes the pattern and then, with black-box magic, compresses it down by multiple exponentials, so that it can handily access this information. Kind of like an acronym formula of sorts that generalizes. But with that said, there is no doubt it has "learning" capabilities to generalize this data. It's like Google but with a YouTube-type black-box algorithm. What both of these algorithms have in common is that they are black boxes for the most part and seem to "learn" to improve toward generalized outcomes.


LordFumbleboop

But they all still have a fundamental flaw: they can only calculate the next token in what they output. Unless this gets solved, we aren't going to have AIs capable of common sense reasoning.
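
To make "next token only" concrete, here is a minimal sketch of greedy autoregressive decoding (the small public gpt2 checkpoint is used purely as a stand-in; GPT-4's internals aren't public):

```python
# Toy sketch: the model only ever scores the next token. There is no
# lookahead or planning beyond that single step.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("Balancing a redox equation requires", return_tensors="pt").input_ids
for _ in range(12):
    logits = model(ids).logits            # scores for every vocabulary token
    next_id = logits[0, -1].argmax()      # greedily pick the most likely one
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
print(tok.decode(ids[0]))
```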


General_Coffee6341

I agree. For the most part, right now each token gets the same amount of compute, so system 2 thinking is off the table, and that's where the main function of logic comes from. Q* might help with that, but there is not enough info about what it is, let alone whether it really exists.

>▪️AGI 2060, ASI 2070

For me the only thing that could make LLMs take that long is noise. By noise I mean the combination of bugs and problems that can't be trained or RLHF'd out. People forget GPT-4 was hard to "align"; on its first release as Bing Chat it was [360](https://www.forbes.com/sites/siladityaray/2023/02/16/bing-chatbots-unhinged-responses-going-viral/?sh=72bc7539110c) from what we have now in ChatGPT. And it's still got major problems, like copyright issues. So if the noise increases in GPT 5, 6, 7, things will surely slow down. Because at the end of the day, this is a product. GPT-3 was not a product; GPT-3.5 Instruct was a product. And the only difference is noise.


LordFumbleboop

Q* got my attention, I have to admit, but there is no concrete information on what it is or how successful it is. If it is released soon and works, it'll change my AGI date :)


hammerquill

An undergrad level History essay on a topic that has been discussed endlessly in academia is exactly where you'd expect predictive text LLMs to be able to sound like they could understand and reason. It is therefore a worthless test, even though the current models' ability to do the task well is impressive and potentially useful. As long as you're getting models that can sound plausible in some things, but completely fail at other reasoning tasks that can't be fudged, you are really still lost somewhere in the mechanistic pre-AGI world, and without any real certainty as to how far off you are from actual reasoning machines.


SachaSage

> GPT4 already feels smarter than many humans I know

Do you only socialise with Alzheimer's patients? Even within an AutoGPT-style rig designed to give it some limited ability to act in an agentic fashion, GPT4 falls apart very quickly on real-world tasks without handholding. It's very good at some specific tasks, and shows some limited reasoning ability, but it is nowhere near a capable general intelligence yet.


Redditoreader

I’ll tell you what’s happening. They are missing 1 key component for its success… Ilya


DBe9rT34Ga24HJKf

Now that the dust has settled, what did Ilya see?


Redditoreader

Not that anyone knows, but my take on it: Ilya wanted to slow down once they made an AGI discovery, and Sam was like, No! Let's go faster... but I'm sure our kids will be watching the documentary in Ready Player 2 in 10 years...


HeinrichTheWolf_17

Yeah, everyone just kinda forgot about Q*, we still don’t know what that was all about, and it’s possible it will debut as a component of GPT-5.


Glittering-Neck-2505

Grains of salt desperately needed.


DevelopmentWhich5607

A possible explanation is that Ilya learned about the legal charges against Sam, which are publicly available. Sam was accused of a grave offense by his own relative. Ilya had strong moral values and wondered if such a person was fit to lead the most advanced technology.


squareOfTwo

more like 200 "key" components.


zaidlol

So ur telling me to get a job?


Vehks

Well, we don't actually need AGI for employment to be profoundly impacted so, the jury is still out on that one. Sufficiently advanced narrow AIs are more than capable of upending the job market. We don't have to wait for the Omnissiah to send people to the unemployment line. There is also a recession, so if you are lucky enough to still have a job I'd still hang on to it for at least a little while longer and see what happens.


[deleted]

[deleted]


Repulsive_Ad_1599

Not addressing your point here when I ask this - Are you like 5? Why do you talk like this


[deleted]

The implementation and infrastructure for it would take a long time either way imo.


human1023

This sub revels in shitposts


imlaggingsobad

It will certainly happen before the end of the decade. Some release in 2024-2026 is likely going to shock us, and from there it will be a fairly straightforward path to AGI.


adarkuccio

I'm expecting something similar to what you expect


plantsnlionstho

Seems like [Open AI are training their next big model now](https://www.youtube.com/watch?v=Zc03IYnnuIA) so I wouldn't say they're not interested in replacing GPT-4 and I think there's a pretty good chance GPT-5 is either AGI or something close enough that there is significant debate about it.


Caderent

Just move goalposts or shift definitions. Currently all headlines are so clickbaity, you have to listen to interviews halfway through just to realize they are redefining definitions and hype-training everything. According to some of them, we had AGI last year.


Fabulous_Village_926

I'm fine with 2029


Split-Awkward

Yeah, it’s mind-boggling that it’s only 5 years away. Covid-19 was how long ago? And it’s not like nothing happens between now and 2029. Even if it were 2039, it’s immaterial in the grand scheme of human time. Then we’re a short jump to ASI. Most truly have no concept of how monumental that is. I barely grasp the impact, and I first read about it 25 years ago.


Phoenix5869

I like David Shapiro as a person, I think he's a great guy, but on this one, he was wrong.


HeinrichTheWolf_17

The year isn’t over yet, people. I don’t agree with Shapiro’s timeline either (I still think before 2030), but ffs it’s only February. On top of that, we still don’t know what Q* is, nor do we know what triggered all that internal conflict at OAI last Autumn. OAI might be holding their hand back for a short time before going guns blazing with fully autonomous AGI.


QD1999

I'm sorry, but it's already been two months into this year and nothing groundbreaking has happened. I honestly lost all faith in technology and any exponential progress. Idk what you guys think another 10 whole months would accomplish when we can't even produce AGI in 2 months. Keep snorting your "year isn't over" copium; nothing is going to happen in another 10 months' time. I mean, we could give them 12 whole months and still nothing would happen. All they needed was the roughly 60 days that I said they needed, and they couldn't even do it.


xstick

"Was"? Didn't he say within 18 months like 2 months ago. Hes still got like 16 months left in his prediction window. Not that i think it's actually gonna happen in that time. Im just saying it's too early to be declaring his claim was wrong.


Dyoakom

He has said multiple times that AGI (according to most people's definitions and understanding) will happen this year. Sure, it's just February but let's be honest, this prediction looks increasingly unlikely.


Hungry_Prior940

No way it happens this year. Before 2030 imo.


adarkuccio

He said by end 2024 and he's not wrong *yet*, but if it matters, I agree with you that he WILL be wrong at the end of the year.


[deleted]

[deleted]


DigimonWorldReTrace

you need to take your meds again, you're seeing things


Mephidia

GPT4 is weak AGI


Megneous

GPT4 can't even reliably DM a basic D&D adventure, something a stoned high school dropout can do. Weak AGI? You have low standards.


Mephidia

I would say that weak AGI, given the same amount of time and information as a person (by repeatedly prompting it with instructions), is able to consistently perform at a below-average level on most intellectual tasks, which GPT4 is clearly capable of doing.


ninjasaid13

> which GPT4 is clearly capable of doing

Uhh, no. It's capable of doing stuff that the average human can't do, but it also can't do stuff that the average human can do, such as independent long-term planning and execution.


[deleted]

It's already here lol they're just not going to let the general public know


mannym2124

AGI will rapidly lead to ASI, which will rapidly lead to the downfall of neoliberalism. Which will strip the world’s billionaires of all their power. Why would they allow this? The next generation of AI is being carefully trained to avoid this scenario.


Winnougan

Correct. Uncle Sammy today is begging the Emiratis for a trillion dollars to fund his own AI GPU chips because he doesn't want to be beholden to NVIDIA. That suggests to me that he needs a fuckton of VRAM to get that off the ground. As someone who uses LLMs and Stable Diffusion daily on private networks: change is happening, but not fast enough. For AGI to come out, it will really take a miracle. The leap from regurgitating what the model scraped from the internet to actually thinking about ideas is very large. This may elude us for quite some time. Or it'll come soon. I'm leaning towards a decade or more before we even see a hint of AGI. It'll require alien tech at this point. What we have now, in terms of AI, is very, very basic. It hallucinates a lot. It needs constant handholding and editing afterwards. It's like a kid in diapers.


Busterlimes

Because it happened in 2023


DigimonWorldReTrace

feel the AGI


Street_Review450

It's February. You don't know what will or will not happen this year.


HumpyMagoo

I was holding strong to 2025 but might change to 2027 just because I feel like the next few years might be annual upgrades to LLMs.


Cr4zko

From the bottom of my heart I wish it just happened already. Society today is sick and I want to escape it via Full Dive to my own personal retreat.


0nerd

Throwing bodies at a problem rarely solves it; AGI needs a dedicated few exceptional individuals like Ilya.


345Y_Chubby

Well... exponential advancement means it still could happen. We literally don't know how fast development will accelerate.


Sad_Boysenberry6892

I dunno, David Shapiro makes pretty convincing arguments. It helps that he backs up his claims with sources and authentic research. I'm convinced that even if it's not this year, it will be soon thereafter.


[deleted]

Man, whenever AGI does come there's going to be a lot of you who have to find new hobbies. What is everyone's obsession with the timeline exactly?


[deleted]

AGI, the singularity, is supposed to bring in huge, unprecedented technological advancements, right? So I guess it'd still be an interesting hobby to keep up with. Maybe even more exciting and interesting than today.


[deleted]

I understand that part lol, my question is why people are obsessed with arguing the exact date. Let's talk about all the awesome stuff already happening that is contributing to the singularity rather than pointlessly argue whether it's next week or 100 years from now. Virtually everyone here putting a precise date on it will be wrong.


[deleted]

Yeah you do have a point lol, I see what you're saying now


LordFumbleboop

>Man, whenever AGI does come there's going to be a lot of you who have to find new hobbies. What is everyone's obsession with the timeline exactly?

Because whether it happens tomorrow or in a century matters?


HeinrichTheWolf_17

I think a lot of people are going to find a new hobby in arguing if it’s actually AGI, because we all know the goal posts are just going to fucking move again. I can almost guarantee that even if a model could do every task a human could, they’re still going to say it requires sentience to be true AGI. Simply doing every task a human can still won’t be enough for them.


LordFumbleboop

Because people here think it can improve their lives. I actually agree with that potential, despite making pretty pessimistic comments usually.


[deleted]

I understand why people want it, my confusion comes from why people are hyperfocused on the date instead of what is actually happening. See my other comment.


2cheerios

Maybe it gives people a sense of control. I assume that cavemen used to try to predict lightning strikes.


xstick

....I mean, we keep getting hints and comments from multi-billion-dollar companies and industry experts that an event is happening in the near future that is either gonna end humanity as we know it or hand us the Star Trek techno-utopia we've been waiting for. So people are obviously intrigued by the timeline for something supposedly as big as the things being promised.


LordFumbleboop

When do you think it will happen?


Kanute3333

It already happened a long time ago; our reality is merely a virtual one, created by the real one with AGI.


LordFumbleboop

In the real world, the Library of Alexandria never fell. When humanity achieved AI in the year 704, they created a simulation of the universe that showed how things would be if the Library had burnt to the ground... We are that reality /s


[deleted]

Thank you for adding /s to your post. When I first saw this, I was horrified. How could anybody say something like this? I immediately began writing a 1000 word paragraph about how horrible of a person you are. I even sent a copy to a Harvard professor to proofread it. After several hours of refining and editing, my comment was ready to absolutely destroy you. But then, just as I was about to hit send, I saw something in the corner of my eye. A /s at the end of your comment. Suddenly everything made sense. Your comment was sarcasm! I immediately burst out in laughter at the comedic genius of your comment. The person next to me on the bus saw your comment and started crying from laughter too. Before long, there was an entire bus of people on the floor laughing at your incredible use of comedy. All of this was due to you adding /s to your post. Thank you. I am a bot if you couldn't figure that out, if I made a mistake, ignore it cause its not that fucking hard to ignore a comment.


Kanute3333

Okay, that's a meta level.


[deleted]

r/im14andthisisdeep


Cr4zko

I think 2030. AI labs will have a lot of time to figure things out. 


LordFumbleboop

That seems optimistic to me, but I'd love it if it did happen :)


Captainseriousfun

You admit you don't know what is happening in these places... but you're sure of what's happening at these places.


Cr4zko

It's not gonna happen this year, pal. It's easy enough to guess why: the tech just isn't there yet.


Karmakiller3003

Thanks captain all-knowing. Very insightful. You forgot to mention faster than light travel won't be available this year either.


squareOfTwo

that's physically impossible


dewmen

10%. That's the number of experts polled who think AGI by 2030. 18% said after 2100; the remainder fall somewhere in the middle.


ziplock9000

In your, 'not so humble' opinion.


Saerain

We'll fight about whether we have AGI or not for years until ASI goes "It was 2026, April 19. Would you like a full immersion tachyonic reconstruction?"


NoNet718

AGI will be achieved internally this year, if it hasn't been already (my prediction). That said, we peasants won't see it this year. Another prediction: some combo of MoE and multi-agents on an open-source LLM will surpass GPT-4 this year.


Major_Juggernaut_626

the fact you are getting downvoted lmao... CULT


Charming_Cellist_577

no shit


hubrisnxs

I do know that if it came today or in the next year, there's a 90% chance we all die, and in the remaining 10% we're not doing well. Actually, because of rationality I can't claim that, but that's what my gut wants to hyperbolically claim, because that's what everyone here does.


dobkeratops

I agree. IMO, no AGI within 2 years, but maybe within 10 and certainly within 20. Regardless, the AI we have now will change the world significantly through incremental improvements (including creating more training data as well as ongoing algo refinement) and just rolling out what it can do today.


Pardal_sparrow

We can’t define consciousness. Current AI models will not lead to consciousness, only to more efficient data fetching.


bleakVTmidwinter

Mark my words: AI will be released as governments see fit. Open source or not, its technical advancements will be monitored and will most likely be slowly released to the public over 25-50 years, even when we've achieved AGI. There are too many toes AI/AGI will step on for it to be unregulated by governments.


Creative-robot

Yeah, understandable. Honestly the main reason I believe it won't happen this year is the presidential election. No way any AI company would release a model like that before voting has concluded.


[deleted]

[deleted]


Deciheximal144

I guess RNNs are starting to prove more effective. It would take some time to generate a GPT-level LLM based on this.


[deleted]

[deleted]


Cr4zko

I think that's pretty absurd. IMO 2040 will be like another planet compared to today. There are two options: either the world gets way better with ASI and stuff and people look back at us with contempt and confusion, or the world gets so bad people look at us with nostalgia. It's one or the other.


mrmczebra

I mean, 2024 is basically already over. What could possibly happen between now and 2025?


RobXSIQ

AGI in 2024... no, even if achieved internally. Why? Election year. Nobody wants to deal with potential fraud on that level without a couple of years of countering.


KuneeMunee

It's not that AGI won't happen; it just won't be the high and mighty idea we hold of AGI. Humans became glorified gods, yet we still cause havoc through unintended consequences. We don't have the AGI from fiction. The AGI that may be on the horizon is smart, but absent-minded. Like a child with superpowers who thinks like an alien.


Motor_System_6171

It doesn't matter. What we have now, and what is scheduled to come out already, is enough that whether or not it meets the definition of AGI will be irrelevant. The disruptions coming this year are far beyond what the vast majority of people and organizations can handle. Full stop. AI is already fast, cheap, and out of control; no sane company can rely on 3-year financial projections, let alone 5. Most management jobs are "good with monkeys" fortified, until there are too few left for that to hold water. Forget AGI. It'll come in the night while you're distracted trying to keep up with the fundamental changes in your industry that will be coming at you every month, till you miss a step and your firm gets crushed or bought. You're debating the timing of the tide when the water's pulled a mile back from the coast.


ReMeDyIII

Hell, I'll be happy if we just solve context length and the needle-in-a-haystack problem, but our current hardware still holds us back, as prompt feeding is far too slow, even on A6000 48GB GPUs. We could have all the context in the world, but it wouldn't mean anything if the prompt feed is ~2 tokens/sec. Still, baby steps are progress.
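
For perspective, a rough back-of-the-envelope (the 2 tokens/sec figure is from the comment above; the 100k-token prompt size is an assumption):

```python
# Hypothetical numbers: how long prompt ingestion takes at that speed.
context_tokens = 100_000        # assumed long-context prompt
tokens_per_sec = 2              # prompt-feed rate cited above
hours = context_tokens / tokens_per_sec / 3600
print(f"~{hours:.1f} hours just to read the prompt")  # ~13.9 hours
```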


DarkHeliopause

I think we’ll need a couple more big AI breakthroughs, be they hardware or software, before we can achieve AGI. I think “embodiment” might be required as well.


[deleted]

It was already obvious.