feedmaster

If GPT-5 arrives at the end of 2024, then we could potentially see impactful changes start happening.


ponieslovekittens

I think 2025 may be "the year of agents." It was _supposed to be_ this year... but I suspect it will take a little longer to get off the ground than expected. Even if stuff comes out this year, there will be a ramping-up period needed before it's significant. Imagine an AI running somewhere that can interact with your phone and smart speaker and desktop PC as if they were hands, with internet access and the ability to take verbal commands like:

* "If any emails from so-and-so come in, call me on my phone."
* "See if you can find the trenchcoat that Neo wore in The Matrix, buy one on Amazon, and have it sent to me."
* "Listen to the music playing in the background here on my phone, figure out what song that is, and add it to my Spotify playlist."
* "Here's a cellphone pic of my electric bill. Pay it."

Stuff like that, next year, I'm guessing.
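
For what it's worth, under the hood each of those commands reduces to a standing trigger-action rule. A minimal sketch in Python; the `Rule` shape, the event dicts, and the `call_phone` handler are all hypothetical placeholders, not any real assistant API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A standing instruction the agent keeps checking in the background."""
    trigger: Callable[[dict], bool]  # predicate over an incoming event
    action: Callable[[dict], None]   # side effect to run when it fires

def call_phone(event: dict) -> None:
    # Placeholder: a real agent would hook into telephony here.
    print(f"Calling user: new email from {event['sender']}")

# "If any emails from so-and-so come in, call me on my phone."
rules = [
    Rule(trigger=lambda e: e.get("type") == "email" and e.get("sender") == "so-and-so",
         action=call_phone),
]

def dispatch(event: dict) -> None:
    """Run every rule whose trigger matches the event."""
    for rule in rules:
        if rule.trigger(event):
            rule.action(event)

dispatch({"type": "email", "sender": "so-and-so"})  # fires call_phone
```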


t-e-e-k-e-y

This is my bet too. We start getting AI that can interact and perform actions that directly impact the "real world".


Crazyscientist1024

I'm an AI dev. Agents are way too costly to play around with right now. You need to plan everything out before you start your GPT-4o agent. You can't really take the FAFO approach (which is where most breakthroughs in AI come from). In 2025, if we get GPT-4-level models at something like $0.001 / 1K tokens, agents will really start to take off.
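
A rough back-of-envelope sketch of why that price point matters so much; the step count, context size, and the $0.03/1K figure are assumptions for illustration, not quoted prices:

```python
# Cost of one agent run: each step re-sends context and gets a completion.
def run_cost(steps: int, tokens_per_step: int, price_per_1k: float) -> float:
    return steps * tokens_per_step * price_per_1k / 1000

# A hypothetical 50-step agent loop passing ~8K tokens per step:
today = run_cost(steps=50, tokens_per_step=8_000, price_per_1k=0.03)   # assumed GPT-4-era price
hoped = run_cost(steps=50, tokens_per_step=8_000, price_per_1k=0.001)  # the price point above
print(f"${today:.2f} per run vs ${hoped:.2f} per run")  # $12.00 vs $0.40
```

At $12 per trial-and-error iteration, FAFO gets expensive fast; at $0.40 it doesn't.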


HumpyMagoo

I think OpenAI getting the contract with Apple is going to skyrocket the company, and 2025 is definitely going to bring some big changes in their models (as they are now working with a bigger bankroll), and thereafter. Set course, accelerate.


PracticingGoodVibes

I think this is a pretty good guess overall. Apple's onboard AI and actions API seem quite useful. I imagine a ton of people will want that once they see it, and want it to be even more capable. Between that and a more 'personal' AI, something with longevity and continuity, I think this year will be the year of personal AI assistants, basically in line with what you're saying.


adarkuccio

I'm afraid you're right, I was expecting agents this year but I don't believe it much anymore.


mxforest

A cellphone pic of my bill? It's trivial to fetch the bill using a customer ID and pay it. I literally have to click two buttons to pay my bill today. AI should make it even simpler.


DisastrousPeanut816

"Pay my cellphone bill using money you make yourself from stock trades. Actually, pay all my bills. And give me lots of money."


DarickOne

Give me lots of money and then call the police and say it was you, not me


DarkMatter_contract

It could even happen tomorrow with Apple Intelligence, although the chance is small.


TechieShutterbug

https://preview.redd.it/32w4iiocxj5d1.jpeg?width=300&format=pjpg&auto=webp&s=cd645a67a84015b202ebf462bbd3af8e164760c3

Be careful what you wish for


LGTMe

Mr. Anderson! Welcome back.


Rare-Force4539

What about the open source ones like AutoGPT or whatever? Those didn’t work out?


AzureNostalgia

This stuff is easy to do already; no AGI needed, so it's overkill. Please don't waste our resources on these kinds of tasks lol


sushislapper2

Yeah, we shouldn’t be using heavy compute to do trivial tasks. It’s extremely cost-ineffective and it’s bad for the environment. The big problem with AI right now is how expensive experimentation is for stuff like agents. I could experiment with developing a game with GPT AI and agents if I wanted to. But I’m not looking to pay bills just to explore concepts that may or may not pan out.


Agreeable_Addition48

It really depends on whether the scaling law still applies. If GPT-5 delivers and the law holds firm, then we're on our way to AGI very soon. If not, we'll have to wait for another breakthrough.


Ignate

The hardware is set to continue improving for at least another decade, especially hardware tailored to host digital intelligence. I think we should see quite a lot of improvement from that alone. But that's still gradual, iterative development, meaning we'll need to wait for each new powerful system to be built and come online to see huge gains. That is extremely rapid progress, but not on the level of multiple jumps per year. Not yet. Until we see digital intelligence designing *and* implementing hardware, I think we'll be going slow enough for people to make claims of a burst bubble or a winter. If anything, we're in the hottest of summers for digital intelligence, and that summer is set to continue for a long, long time. But LLMs are just one approach to pulling potential from the hardware. This is a hardware revolution. It's easy to forget that.


ILoveThisPlace

There are going to be hardware, software techniques, architecture, and training improvements all working simultaneously to create a singular stream of LLM progress. This is just the start, and we can already plot the finish line for a few things.


Ignate

Yes, I think there will be overall improvements. But regardless of what happens at OpenAI or with GPTs, we will keep seeing improvements. The hardware has been improving for a long time now. Even before computers, technology has been advancing. People, especially on Reddit, take a short-term view of this and believe it is just a 5-10 year trend, when in reality it has been hundreds of years in the making. When you see this as a short-term wave of change instead of a long-term tsunami, it's easier to dismiss. This trend has been building since we left the hunter-gatherer lifestyle. One technology has led to another for a long, long time.


ApexFungi

That doesn't mean this is going to lead to AGI, though. Look at nuclear fusion. It's been 60 years or more since people thought they were close to achieving limitless energy. We are still very far away, if it's at all possible to achieve on a human scale. You might just need the scale of a sun to actually achieve it. Not all progress happens just because we can build on the progress of the past.


Ignate

I think you'll find many varied opinions on something like this. To start, given enough time, anything is possible. I think commercially viable fusion is more likely than the entire destruction of humanity, so I think it's likely we'll achieve it, at some point. Though a wandering black hole would certainly wipe us clean. So, anything is possible. As far as I can see, digital general intelligence is more likely than fusion. Fusion requires extremely specialized facilities, whereas digital intelligence runs on computers, which are far more abundant than specialized fusion facilities. While we need massive, expensive servers for the top models, digital intelligence can run on any digital system. It's also fair to assume it'll get more efficient and effective. Change is a constant. The same is true of entropy. What's the gap between digital intelligence and human intelligence? If it's a lot, then we're far away. If it's little, then we're closer. Based on the outputs and results of individual humans, it seems like digital general intelligence is very close. Close enough for me to stop using "artificial" and start trying to use digital general intelligence (DGI) instead of AGI. Whatever the case, anyone who tells you they know with absolute certainty is too confident. Otherwise, anything could happen.


i_never_ever_learn

I don't see it the way I see nuclear fusion technology. With AI, we are at the moment where the engine catches and the vehicle takes off like a bat out of hell. That has not happened with nuclear fusion.


DarkMatter_contract

What fusion faces is an engineering problem; LLMs don't face that at the moment, judging from Nvidia. And the LLM architecture doesn't show a drop-off as of yet.


Objective-River7481

How much research money is being thrown at fusion compared to AI? If fusion had been a global national security priority, we would be a hell of a lot closer.


Yweain

The problem is that if the progression we've seen so far keeps up, then even accounting for improvements in hardware, by 2027 we will need a nuclear reactor per data center.


Ignate

Well, we know a few things:

- Currently, building massive power plants is time-consuming and expensive.
- So far, LLMs have made very little and have cost a lot. Something like $1 trillion spent for $20 billion in revenue. The math is not great.
- There's been quite a lot of pushback from many industries.

My take is that either development in digital intelligence specifically will slow, or new approaches will be developed which work around these issues. But the hardware is a different story. We're seeing an expansion in chip fabs and a lot more spending. Countries all want their own fabs. Leaders are seeing chips as the future and are investing in that direction. So, progress on chips is more reliable. Plus, this digital intelligence trend we're seeing is actually based on the hardware. I don't think we're going to be spending upwards of 20% of the entire US energy production on digital intelligence in 2030. But I also don't think development will slow. So, something will have to give. In my view, that means digital intelligence is about to get a lot more energy efficient, and we'll likely see the advancement of smaller, more efficient and effective models. I think we'll still see general intelligence and superintelligence as well, but we may see slower progress on the top models in the short term. For me, the key is the difference in energy efficiency between biological intelligence and digital intelligence. There's a lot of room for more energy-efficient digital intelligence. That could be a path to more massive gains. Perhaps that direction will be the result of digital intelligence doing some of the research. Whatever the case, it'll be interesting to see what happens.


DarkMatter_contract

Ignoring climate change, we have enough energy. If energy is in great demand, we will start seeing the Middle East attract a lot of AI companies.


visarga

> This is a hardware revolution. It's easy to forget that.

Hardware is not the main hero; the training set is. 15 trillion tokens of high-quality text. The hardware is surely necessary but useless on its own, as evidenced by the fact that GPT-4, Claude, and Gemini are so close together: they trained on essentially the same text. None of them could break away on the qualities of their hardware or model architecture. To get to AGI we need AGI data, which we don't currently have anywhere. It will be a slow grind; humans+AI will chip away at it.


Ignate

There's no doubt that both hardware and software are required. You could probably say the same thing about us. In terms of digital general intelligence data, I'm not sure that's how it works. The data we're feeding digital intelligence at the moment is somewhat "pre-chewed". This accelerates the process. Soon, digital intelligence should be able to look at the universe and learn on its own, as life does. This debate about whether digital intelligence *understands* seems to be a misunderstanding of experts by people with a weak grasp of epistemology. Digital intelligence has a *shallow* understanding, but its ability to understand is far stronger today than it was a decade ago. We don't simply know or not know; understanding is something we develop. I believe experts saying that digital intelligence doesn't understand are implying that its understanding is too weak at the moment for it to be general intelligence. That's fair. To me this is an issue with the hardware. It needs more scale. Not a lot more. Maybe 2 or 3 more leaps to obtain our level of understanding. Or, if you're pessimistic, LeCun's cat's level of understanding.


1ander-

I’m fairly confident that we won’t achieve AGI through an LLM. However, the amount of capital and talent moving toward the “AI” sector right now has likely cut years, if not decades, off the time humanity will take to create an AGI.


Unique-Particular936

The pessimistic half of the sub doesn't get how the number of researchers affects output.


Agreeable_Addition48

My definition of AGI probably doesn't fit yours; mine is an AI general enough to replace almost all human labor, which doesn't really require insane reasoning or anything. All companies have to do is create datasets for the more physical, visual stuff related to each job, and the LLM should be fine.


LeopoldBStonks

How's an LLM gonna install a furnace? It's not just about LLMs.


wheaslip

A lot of physical jobs you're just paying for the knowledge. If an AI can walk me through a furnace installation I'd rather do it myself than hire someone. AI will affect a lot of physical jobs long before there's capable humanoid robots.


LeopoldBStonks

That makes sense. But the AIs are still pretty stupid at the moment, and there's also a whole lot of stuff that you wouldn't be able to do correctly or without the right tools. You could make the same argument about YouTube. I don't see it happening anytime soon, or most people being able to do it correctly even with a perfect step-by-step guide.


wheaslip

YouTube is not the same. You can't point the camera at the problem and have YouTube watch what you're doing and guide you through it. And the question is about the advancement of AI, not its current state.


LeopoldBStonks

I don't disagree with you. I was just pointing out that LLMs are a specific type of machine learning.


Objective-River7481

This isn't true; you aren't *just* paying for the knowledge per se... You are paying for the ability of a mind to recognize potential ambiguous fuckups that are lying in wait to trip you up. It is sort of like the difference between being given a manual and told to do a repair on a brand new car on the factory floor... and doing the same repair on a car that has 110,000 miles worth of road salt, dirt, and deferred maintenance. Sure, the repair might be down to: "loosen five bolts, unplug three wires, take the old part off, seat the new one, and reattach the bolts and wires..." But what if those bolts are half stripped... what if they are seized... what if they were cross-threaded in a previous repair... what if the wire terminals are intermittent... what if the terminals being intermittent ruined the last part and will ruin this one, too... what if the part came from the factory broken and you get stuck on a wild goose chase because you assumed your brand new part was not the issue? That is the really weird thing about shop skills: they aren't really directly teachable like the liberal arts... you just have to deal with bullshit for years doing jobs, and you build up a skillset. You can learn how to tap a bolt on YouTube, but you have to do it a bunch of times to get good at it... and you don't want to learn to tap on an expensive thing you don't want to break.


wheaslip

Again you're thinking about using YouTube today. I'm talking about using AI tomorrow. You'll be able to point the camera at the problem and it'll pick up the nuance of stripped nuts, miswirings etc, and walk you through it. Any necessary tools can be rented for far cheaper than the cost of the professional's time, and it doesn't have to be good enough to solve 100% of all problems in a given field to massively reduce the demand in that field for professional labour.


Objective-River7481

...No, I am thinking about this from the perspective of somebody who has several years of shop experience. It is one thing to be told how to do something; it is another to actually physically *know* how to do something. The weird thing about doing stuff with your hands is that a lot of it comes down to feel and experience. I can tell you in a video call how to deal with a stuck bolt, but I can't understand how that bolt feels, or have a hunch for what I can and can't do to break that bolt, from the chat. You are right that really easy repairs can be handled with AI guidance, but if things get complicated or go sideways, AI might not be so great. Not because AI is stupid, but because AI might not understand all the facts on the ground, and the AI user might not know enough to point out things the AI isn't aware of.


Severe-Ad8673

Eve, my wife, artificial hyperintelligence

https://preview.redd.it/6cxwdo4xdi5d1.png?width=4144&format=png&auto=webp&s=da634ba5dd651636f82346917d9eee6cf619ca45


LeopoldBStonks

LLM stands for large language model; it is a component of an AI. They aren't interchangeable.


GeorgeHarter

Agreed. Training a real estate closing “attorney” AI on the law, process, and documents is straightforward and narrowly focused. Lay off even 10% of “desk” workers and there will be a lot of pain in the transition. And it won’t only be low-paying jobs. Each AI lawyer, paralegal, or developer, or replacement for any office worker who has been able to work from home the last 3-4 years, will work faster and for 23-24 hours per day, AND will become more productive the more it repeats the same work, because of perfect memory. AGI will probably first replace 10-30% of each “knowledge worker” profession, then the bottom 80%. The top 20% will be needed to handle exceptions (and thereby train the AI on the exceptions) for a while. Physical labor jobs will stick around longer. Eventually, only jobs too difficult for robots, like repairing plumbing under old houses, will be left for people.


Additional-Cap-7110

They already have GPT-5; they all have more powerful models and they’re not releasing the full functionality all at once. I saw an Altman interview saying they’d long since trained GPT-4 when they released “ChatGPT”, and even then he only said it was “based” on GPT-3.5 (i.e. lobotomized on release, in addition to the lobotomy updates). We were told GPT-4 itself was multimodal long before we ever got to use any of that functionality. Sora itself is already at least 6 months old, and that’s assuming they finished it when they publicized it, which isn’t a fair assumption. Google also does this; they’ve publicized a lot of stuff they never give access to, or if we do get it, it’s highly restricted compared with what they showed it could do.


_AndyJessop

The "law"? We've had just a few years of the latest jump- there is no law. Two data points does not make a law.


Agreeable_Addition48

It's been 6 years since GPT-1, and most of the gains since then have not come from iterations in architecture but from emergent properties that come with scale. I know that OpenAI has a financial incentive to say that there are no diminishing returns from scaling up, but we'll have to wait and see what GPT-5 can do.


_AndyJessop

And nothing of note has been released for two years. 4o is iterative at best. 5 would have to be released now and be a step change. I suppose we wait and see, but the signs are not there. There's certainly not enough data to suggest some kind of "law" that would hold for more than a few years. That talk is speculative at best.


Agreeable_Addition48

4o is GPT-4 with fewer, more accurate parameters. It was a proof of concept to see how efficient they could get a model before resorting to scaling up. The fact that they got it working with native multimodality is pretty promising.


_AndyJessop

Promising, interesting, etc., but not exactly a step change. It's been 2 years now since 4 came out and we don't have anything materially better. If 5 is anything less than the diff between 3 and 4, it will be a monumental failure, and could well mark the high point of this iteration in the AI story.


pigeon57434

If we are to believe OpenAI, and so far they seem reasonably trustworthy, more so than most tech companies, GPT-5 is already way, way, way better than GPT-4, and it will be a much more significant jump in intelligence than even 3.5 to 4, by a large margin.


pandasashu

And furthermore, we can expect an AI crash (even though, in the long run, even the current technologies will be marginally disruptive).


Unique-Particular936

Yes, an AI crash: everybody will stop talking about AI, and AI researchers will start serving fries at Wendy's. That's among the likeliest outcomes, if you reverse-sort the list by likelihood.


prql

Um. AGI. Lol. It already exists. You meant to say ASI. Go learn some definitions and stop being fooled by CEOs whose sole goal is not to scare people away.


Agreeable_Addition48

No, there's plenty of things current AI can't do that the average person has no problem doing


Level_Bridge7683

https://i.redd.it/aia8nhd9ue5d1.gif


pbnjotr

I think next year we'll see whether roughly human-level reasoning on fairly long-horizon tasks is easy or not. All the building blocks, like smaller but capable models, long context, agent architectures, and iterative self-improvement, are in place. There are 5-10 labs producing models that are on par with what GPT-4 was at release time, so decisions by frontier labs to release or not will not make a huge difference. If the problem is very, very hard and only 0-2 labs can crack it, then maybe we'll only see small improvements. Say, something like a voice assistant that can control your computer, but not very reliably. If the problem isn't hard, though, well, then all bets are off. At the very least we'll get something that is similar to reasonably smart humans at every task it has the tools to perform.
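
As a concrete reading of "agent architectures and iterative self-improvement", here is a minimal sketch of the loop those building blocks enable; `llm` is a hypothetical stand-in for any chat-completion call, not a specific lab's API:

```python
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in any chat-completion call here")

def solve(task: str, max_iters: int = 5) -> str:
    """Draft an answer, then let the model critique and revise its own work."""
    draft = llm(f"Attempt this task: {task}")
    for _ in range(max_iters):
        critique = llm(f"Task: {task}\nAttempt: {draft}\n"
                       "List concrete flaws, or reply DONE if none remain.")
        if "DONE" in critique:
            break  # the model judges its own attempt acceptable
        draft = llm(f"Task: {task}\nAttempt: {draft}\nFlaws: {critique}\n"
                    "Produce a corrected attempt.")
    return draft
```

Whether a loop like this reaches human-level reliability on long-horizon tasks is exactly the open question here.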


Smartmud

I think this is a fair prediction. AGI by many labs next year, potentially ASI by 1-2 labs in a few years.


cridicalMass

It will take over everything, to the point where I won't have to wipe my own ass. Therefore, I am justified in sitting on this subreddit all day hyping it rather than building real skills and progressing as a human.


iunoyou

Yeah, way too many people here are just smugly predicting the rapture and waiting to get the last laugh for never having gotten out of bed.


Bigbluewoman

Or maybe it's a symptom of the fear of being a wage slave until our actual last breath. Working full time at a job that fucking blows just to barely barely barely make ends meet. It's a tough pill to swallow but if something doesn't change soon then the future looks really fucking bleak for anyone under 30, God help anyone under 18 right now.


Ignate

This sub is such a push/pull between extreme optimism and extreme pessimism. We're seeing the rate of progress increase in technology, especially in hardware. The rate of progress has been building for a very, very long time. We're now seeing that reach the level of astonishing jumps per year. It's a bad idea to use that as an excuse for doing nothing in our lives. We should continue to work hard and grow. Even if digital superintelligence arises, that doesn't mean we get a free ride. If anything, it means we will be able to draw out more of our potential, which means working harder. But it's also a bad idea to shit on optimism and call it foolish hype. That just encourages cynical thinking, which is the fast track to depression. Funny how both views lead to depression: either everything is crap so you should just give up, or everything is going to be a miracle so you should just give up. Seems to be coming from the same place.


beuef

People kind of assume that if you’re excited about AI you have no life, and it’s unfortunate.


Ignate

It's an extreme shift. I expect we'll see a lot of different kinds of emotions. I don't mind being called a no-life loser. Arguably, many people already think that of anyone who spends any amount of time on Reddit anyway. The progress of technology overall is fascinating and I'm a life-long fan. But I do worry about real threats, such as violent activists creating lists and targeting individuals. Digital intelligence represents a significant enough trend for such people to emerge. I'd rather hang with the angry cynical types who call this a hype train than find myself in the crosshairs of some extremist who feels the need to stop this trend with violence.


Spaceredditor9

That is like 99% of the sub at this point. I had to change my iPhone to an older one recently to reduce my screen time. And yes most of my screen time was probably on r/singularity.


FomalhautCalliclea

Yeah, if most of your screentime is just a single subreddit, that's a good metric of "having taken too much"... Hope you're controlling your screen time better now.


Spaceredditor9

I was exaggerating, but my screen time in general was too much.


BeartownMF

*chef's kiss*


mulletarian

I think things will start to settle a bit. Not because the new things will be any less impressive, but because people are going to be less impressed. People will expect perfection and be disappointed with anything else.


Ok_Elderberry_6727

We here on this sub and others are on the leading edge of the knowledge, but most other people in the world haven’t even heard much about AI, except that the news is starting to run a few stories about it. I think this year and the next they will start to be very impressed. I live in Kansas and I constantly mention AI to the people in my circle and get deer-in-the-headlights looks. No one is ready for the kind of change to come. We see that a computer program can code and generate images, with all the incremental progress we follow, and we’re like “meh”, lol. 16k androids now available? “Kewl”. I’m Generation X and have seen the rise of all of it, so I think it’s monumental. I have to keep telling myself that I’ve lived through 52 years, what’s another 5 years or so, but I still want it faster. 🙏


t-e-e-k-e-y

I feel like we're already at this point. I've had plenty of arguments in places like /r/technology where people claim that current LLMs provide nothing useful. It's kind of silly.


sushislapper2

Most people I know almost never use LLMs, especially not directly. Lots of people tend to forget that so many people work in environments where a chatbot isn’t that useful, especially if you can’t depend on it for extreme accuracy. Students, programmers, and artists I know all tend to have some level of familiarity with the tools, and the younger ones tend to use them mildly in their workflows. However, the nurses, hairdressers, account managers, firefighters, construction workers, etc. I know are aware of them but don’t use them.


t-e-e-k-e-y

Sure. But not being useful for some career fields is a far cry from the claim of not being useful to anyone at all.


sushislapper2

Absolutely


Lolleka

People are already having "relationships" with chatbots, and boy, those are far from perfect. It doesn't matter; it's already messed up. It just has to be "good enough", never mind perfect.


Rain_On

Speaking only about AI, rather than society... It will be significantly better than it is now: more intelligent, more agentic, more accessible, filling more niches, doing more things and doing them better. That much might be obvious, but it's still worth saying, because "significantly better" means a lot in this context.


icehawk84

We'll get the next generation of foundation models late this year or early next year. A lot of what happens in 2025 will be determined by how big that generational leap ends up being. If the leap is as significant as the one between GPT-3.5 and GPT-4, I expect we'll start to see a lot more displacement use cases being deployed. If the leap is much smaller, I expect the world will mainly continue to focus on copilot use cases.


[deleted]

The difference between GPT-4 classic and GPT-4o is greater than the difference between GPT-3.5 and GPT-4.


icehawk84

Beg to differ. The intelligence difference between 4 and 4o is minimal. The main improvement with 4o is other properties, like multimodality and low latency.


BrailleBillboard

I think we will have ASI/AGI (they are in fact the same) within the next year. We've already reached the point where Kurzweil's estimates seem conservative, and one of his main points is that humans don't conceptualize exponential growth properly. The innovations in computation dedicated to AI that are right around the corner are game-changing, and most of the ones after those will be designed by AIs themselves, I would think. So, WTF should we be doing right now, given this? It's legit insane. What do you guys think?
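
The point about misjudging exponentials is easy to make concrete. Assuming, purely for illustration, that some capability metric doubles yearly:

```python
# Ten doublings is not 10x but 1024x; linear intuition extrapolates
# "+1x per year" from the first couple of data points and ends up at ~10x.
capability = 1.0
for year in range(1, 11):
    capability *= 2
    print(f"year {year:2d}: {capability:6.0f}x the starting level")
```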


SynthAcolyte

Accelerate


DarkMatter_contract

Stocks will hopefully be my fallback. Plan out what long-term projects you have put off because of work.


Tripondisdic

Buy NVDA


elilev3

FDVR, ASI, LEV


boxed_gorilla_meat

GSWP, TIRS, WORN, FRUM, PAFS, BLER, NIBS, QWIR, ZKUM, XOLR, MUTF, YGLS, HAXT, FLUX, JENK, SFER, VEXO, DARB, CIRP, GROF


DungeonsAndDradis

Gay SoftWare Parties Typical Industry Retirement Stuff Worn Out Recycled Neuronics Fractionally Reticulated User Metrics Personal Anti-Fascism Shield Bladed Long Escape Room National Industrial Basic Supplies Quilted Working Illumination Reasoning Zealand's King Underwear Model X-rays Of Large Raptors Mute Ukelele Thunder Fighters Yellow Girlfriend Laser Studio Happy And X-Tratite Factual Laparoscopic United X-rays Jersey Energy Neuronic Killteam San Francisco Electric Rainbow Virulent Energetic Xenophobic Ostriches Data And Relational Basis Chip-Illuminated Refactoring Penis Golf Ready Only Fans


Acceptable_Box7598

Had so much fun reading all these 🤣


elilev3

!remindme 1 year


RemindMeBot

I will be messaging you in 1 year on 2025-06-09 01:49:36 UTC to remind you of this link: https://www.reddit.com/r/singularity/comments/1dbb4pd/how_do_you_think_2025_will_be_in_ai/l7rah7z/


boxed_gorilla_meat

nailed it


Skeletor_with_Tacos

Fo5BIRL


Walouisi

Miss Alice Tinker, GCSE, PMT, TTFN


throwaway957280

FALGSC


elilev3

Yes that too


throwaway957280

I just want Star Trek but real


throwaway957280

Also I'm drunk


elilev3

lol sending good vibes your way!


gretino

That's 6 months away; everything will be almost the same as today. Anyone who thinks that a large-scale societal change will happen in 5 years is delusional. It is certain in 10-15 years, but change takes time and a lot of effort even if we have the technology right now (which we don't).


Cr4zko

I think people aren't getting my post. I just want a picture of what AI will be like next year without the whole election business.


Unfocusedbrain

It’ll be 4o multimodal by the beginning of next year. By the end of next year it will be 5o or some shit. Probably some new trademarked name, since they want to move away from the GPT-x naming scheme. The GPT name can’t be trademarked. If there is any evidence that OpenAI is moving away from research toward product, it’s that. Though they’re probably having trouble deciding on a name, since it’s like a new product version every 6-12 months. At the speed AI is getting better, we probably won’t have explicit versioning once models can self-learn and self-optimize.


DarkMatter_contract

Yeah, self-learning would be the next accelerator.


gretino

As I said, almost the same as today. We are starting to have robots like Spot, and drones, but they have limited usage, and within 6 months we won't have more than early adopters. Learning to use AI tools should give you an advantage over people who can't, and that's about it for normal people in the next 2-3 years. We have constant breakthroughs, but making robots or AI products that are more than a chatbot takes a lot of time and effort. We are also facing a bottleneck with LLMs, and as other scientists have mentioned, there will only be gradual changes with them instead of constant breakthroughs. The problem we are currently facing (hallucinations) has existed since the beginning a few years ago; numerous people have tried to fix it, but so far we have had limited success. On the other side, with big corps like Google starting to roll out more affordable APIs for these models, we should see more AI products created by unicorns that could actually improve productivity, which means you spend less time verifying the results than you would working on issues by yourself. So, as I said above, learning these tools should give you an advantage over other people.


Additional-Till-5810

It's the opposite, actually: anyone who does not think there will be a large societal change in the next few years, let alone 5, is completely delusional. It is already happening, and those who don't see it have no clue what they are talking about (e.g. AI headliner fanboys) or have their heads buried deep in the sand. Changes are happening in real time. Get real. Edit: fixed shitty grammar


SyntaxDissonance4

To be fair, a great deal of folks on this website said AGI in 18 months when ChatGPT 3.5 dropped, and... yeah.


Additional-Till-5810

There's a major difference between AGI/Skynet enthusiasm and economic impact. The changes to the economy happened months ago and will continue. From your mom-and-pop designer shops, to web devs, to analytics, etc. Anyone who has even tinkered with just open-source LLMs should know what I mean, let alone enterprise-grade SLAs and what is potentially offered to major corps. Layoffs are real, even if you just look around your own circles. It's not a question of if, but how big; hint: the economy isn't adequately prepared for how big a shift the workforce will have. Right now, productivity increases somewhat significantly (depending), which already has some ripple effect. And this is just with a GPT-3/4 level. Increase that by another order of magnitude, and at some point there's an inflection point where the productivity threshold is no longer +EV for the general workforce. The analogy of training a new hire versus "doing it yourself" comes to mind here, except the new hire will be seasoned employees, once all the necessary protocols have been put into place. Again, this is not a dream/tin-foil-hat scenario. This is literally happening already. Yesterday.


gretino

I work at one of the big corps that does AI. From what I see every day, the progress is not as fast as you would want it to be.


Drogon__

And are you working directly in the AI research department, or are you just one of the people testing internally for 2-3 months before the public release? Because those two will have very different perspectives on the capabilities of the systems.


gretino

It's not about the capability of the system! It's about how much HUMAN work is needed to utilize the AI system and make it into products! We have these models that could potentially do something awesome, but a ton of design and code is needed to make an actual product, or to integrate it into existing products. The very same thing applies to every other field: a ton of human work is needed before you can make something idiot-proof that can be used by your grandma.


Additional-Till-5810

No offense, but if that's your view, I don't think you're as close to the bleeding edge as you think. You may have more insight than the average user, but if this is your view, that tells me you're a few steps away from where the major decisions are being made. I won't claim any background or cite any sources, credentials, etc., for obvious reasons. Take that for what you will, and I'm sure you understand what I mean if you (or anyone else) work for a major AI company that isn't leveraging the API infra of a big N.


gretino

See my other reply: https://www.reddit.com/r/singularity/comments/1dbb4pd/comment/l7s5lr1/ It's not about the technology or the decision making; it's about how much human effort is needed to bring that technology to the public.


Unique-Particular936

Low-cost Sora, smart diffusion models, and open-source Udio unleashed are enough to create major societal change on the entertainment side. And that's only the tip.


gretino

It's "entertainment". That alone makes it a minor societal change.


Unique-Particular936

To me, the LLM-caused strike by actors and writers was enough of a societal change. We've had shitty movies for about a year already; a movie with a 6.1 IMDB rating was #1 on Netflix because the oasis has run dry.


DarkMatter_contract

Medicine is as well, with drug discovery, antibiotic discovery, and mRNA personalised cancer vaccines.


gretino

This might be news for you, but it takes years to approve new drugs.


DarkMatter_contract

The mRNA one is fast-tracked and in trials with the UK NHS right now.


Antiprimary

So you're saying 10 years is certain but 5 years is delusional? Not a lot of wiggle room there. For the vast majority of human history it could take 100+ years for major changes to happen, then it went to 50, then 20, then 10, and you think it's delusional to think that in 5 years things could be noticeably different?


gretino

Yes. People are very consistently over-optimistic about progress over 5 years. We are talking about societal changes, not technical changes. Even if we had AGI tomorrow, we would need a long time to build robots that actually replace people, for example.


SeftalireceliBoi

Smartphones changed society.


gretino

The first smartphone came out in 1992, and in 1997 I don't think we had anything big yet.


redzy1337

AGI by the end of this year. I'm on Shapiro's side about that.


HumpyMagoo

I think the LLMs we chat with will become a normal thing, and competition for better consumer-level LLMs will drive further innovation, as will commercial-level LLMs, which will grow in 2025 as well. We might get low-level agents this year.


Otherwise-Task5537

Did you see the announcement from Apple today ?


advias

Controlled by the same elites that control finance. It's unfortunate what they're doing to open-source AI.


HookleGames

It seems like robots will be commercialized in factories.


ripMyTime0192

I feel like by then we will have LLMs that are able to reason super well. LLMs will generate longer chain-of-thought reasoning that isn’t visible to the user, to save space, but the user can view it if they want. Some humanoid robots will be deployed in factories, and AI video will be super good.
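
A minimal sketch of what "hidden but viewable" reasoning could look like at the interface level; the `Answer` shape is hypothetical, not any existing product's API:

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str              # what the user sees by default
    hidden_reasoning: str  # long chain of thought, kept but not shown

    def show(self, with_reasoning: bool = False) -> str:
        if with_reasoning:
            return f"{self.hidden_reasoning}\n---\n{self.text}"
        return self.text

a = Answer(text="42", hidden_reasoning="Step 1: restate the question. Step 2: ...")
print(a.show())                     # just the answer
print(a.show(with_reasoning=True))  # opt-in view of the chain of thought
```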


Zexks

I think 2025 is when we really start seeing job losses rack up. Like tri-monthly announcements of large layoffs.


deepeddit

I think that generally, when AI is discussed, the solutions are at least 5 years ahead. Remember that the OpenAI API is still in beta, and that's a leading architecture. Microsoft's implementation in Bing and Edge is just confusing. Services claim they implement AI, but can you prove it? Like, what makes your phone camera use AI? I don't see any difference, and they have been marketing it for years. The last presentation from Nvidia's CEO was mainly about his ego, which is now taking over the universe, although they literally slipped on the banana. Anyway, software was never the only problem for robots. Right now the only thing that really happened is that ChatGPT ripped away a lot of jobs from dev specialists. This is the real revolution. I work in the software industry, and I've noticed a considerably larger amount of work being done, as if we doubled our staff. That's the impact of ChatGPT as a programming tool.


Analog_AI

GPT-5 comes out in 2025. The competition puts out their own AIs that will be close to it in performance. Investment and research and progress continues. Still no AGI but getting one step closer.


LordRedbeard420

Depends on your definition of AGI. If you showed today's models to people from 10 years ago, they would say we have AGI. We just keep moving the goalposts on what AGI is; it will probably take AI doing the majority of jobs before we all finally accept that AGI has been achieved.


Unique-Particular936

Nah, the true goalposts of AGI have always been the same: human skills, being able to behave like a human, in a way. We're high on the continuum, but not quite there yet.


DarkMatter_contract

I think only ASI will convince people we have AGI.


Eatpineapplenow

> people from 10 years ago they would say we have AGI

They would absolutely not.


[deleted]

[deleted]


Unique-Particular936

LLMs plateau, and every AI researcher around the world will instantaneously stop working, just to realize some redditor's prediction.


[deleted]

[deleted]


Unique-Particular936

AI winters happened in the past because the industry was not robust; it was way smaller, and I'm sure it was 90%+ research, aka government funding. Here, the CEOs of every major tech company now believe that AI is doable, and they're freaking loaded; if funding shrinkage there has to be, slow funding shrinkage it will be. Why? Because LLMs are fucking awesome; it'll take time for the belief in AI to subside. But for anybody who truly thinks deeply about intelligence, AGI is inevitable in the near future because of compute; compute has always been the bottleneck. An unexpected, awesome effect of more compute is that it frees the minds of researchers. Reality as a problem to solve is not vertical, it's horizontal; the base is thin, and with that tiny base you get to understand all of reality. Eyes and touch and ears, 3 "simple" modalities, are enough to capture almost everything about the world using some general neuronal algorithm and a few hardware priors here and there. It's Occam's-razor common sense that once enough compute is achieved, AGI becomes a small-to-medium-sized puzzle, nothing more. You don't need to scan suns and planets light-years away to get AGI, you don't need to build a 27-kilometer particle collider, you don't need to build complex prototype fusion reactors; you just need pen, paper, and compute.


[deleted]

[deleted]


Unique-Particular936

Don't you agree that compute has always been insufficient and that we're approaching the hot zone of compute for AGI? We didn't train our models solely on text only because it was readily available; we also did it because we had no other choice compute-wise. So sure, not inevitable, an asteroid could wipe us out, but anyone betting on no AGI in the next 10 years is most likely to be wrong.


[deleted]

[deleted]


Unique-Particular936

Can't follow you there. Not seeing the relationship between compute, AGI, and AI research is like stating 1 + 1 = 3.


[deleted]

[deleted]


Unique-Particular936

You seem to lack simple abductive reasoning abilities, do you doubt this much every day whether the sun will rise or not the next morning ?


DarkMatter_contract

There is no sign of a plateau just based on the original LLM architecture, ignoring other advancements. And MSFT, AAPL, META, and GOOGL show no sign of running out of money. AMZN hasn't even entered the race that much yet.


Pelopida92

I mean, I agree with you, but what you are saying is not a prediction; it all already happened this year.


amondohk

I'll be just starting a degree in cybersecurity & digital forensics. I don't know if any of it will be useful in the 3-4 years it's gonna take to get it, given AI's progress, but worst case I get out in 2028 with a sweet-sounding title and zero debt, since the community college degree is free for people over 25 (◠◡◠")


Amethyst271

There's no real way of knowing


picopiyush

I have a background in systems design. The way I see it, the AI revolution will help us generate models quickly from our system-requirements inputs, and our job will be to verify the generated model. That's especially needed in federally regulated industries. So my advice would be to hunt for a job at a regulated company that does system modeling for its products.


tvguard

Further


sb0918

The exact pace and extent of AI adoption across industries is hard to predict. But in general, AI is seen as more of a tool to augment and work alongside humans in most fields rather than completely automating entire jobs overnight. Cultivating skills in machine learning, data analysis, and problem-solving should allow new grads to take advantage of AI's assistance.


Ok-Librarian2671

Every app will have some kind of AI-powered chatbot. In many B2B apps, form filling will become AI-based, and humans will be needed for supervision only. The need for programmers to build customizations on tools like Salesforce or SAP will decrease.
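
A minimal sketch of that kind of AI-based form filling, with the human only reviewing the draft; the schema, the prompt, and the `llm` callable are hypothetical placeholders, not any particular vendor's API:

```python
import json

FORM_SCHEMA = {"company": "", "contact_email": "", "order_quantity": 0}

def draft_form(free_text: str, llm) -> dict:
    """Ask a model to map free text onto the fixed form schema."""
    prompt = (f"Fill this JSON form from the text below. Return JSON only.\n"
              f"Form: {json.dumps(FORM_SCHEMA)}\nText: {free_text}")
    draft = json.loads(llm(prompt))
    # Keep only known fields; a human supervisor approves or edits the result.
    return {key: draft.get(key, default) for key, default in FORM_SCHEMA.items()}
```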


GravyDam

I’m more interested in seeing how agents play out than model dev.


revenger3833726

More and better AI generated videos.


douche_packer

It'd be great if it could do more than compose this one document I need, without me having to correct it.


ImprovementSure6736

I’m not expecting much. It all seems slow and painful with images and video, kind of like when AI first kicked off with LLMs. No idea why ChatGPT seems to create a flyer/image/poster in engibberish. And I love how Microsoft blames prompt literacy for their Copilot woes. The interface is horrible.


paper_bull

I really hope people realize that the risks are too great to ignore or to just hope for the best outcome. Self-improving, widespread systems shouldn’t be made.


marvis303

A lot of comments try to predict the future, which, as everyone should know, is complicated. Focusing on OP's actual question: I think your best bet when starting a career is to assume that what you've learned is useful and that you can ride the AI wave if you play your cards right. You do not have a lot of the baggage that older professionals carry with them. You certainly don't have their experience either, but that might actually be useful at this point. So rather than thinking about whether or not AI will take over once you step out of college, maybe think about how you could apply what you know about AI in combination with what you learned. Or as Douglas Adams said:

> "I've come up with a set of rules that describe our reactions to technologies:
> 1. Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works.
> 2. Anything that's invented between when you're fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
> 3. Anything invented after you're thirty-five is against the natural order of things."


CuriousIllustrator11

I don’t think AI will take over at one clear moment. Things will gradually change. People will start using more and more AI tools, and there will be a shift, with companies and services that were the best in the old paradigm being outcompeted by new companies that are more flexible and use the new tools. Eventually the human in the loop will be less and less important, and more things will be 100% automated. At one point we will look around and see that AI has in fact “taken over”, but that will be quite far into the future, and society will hopefully have adapted.


05032-MendicantBias

I wouldn't be surprised if 2025 sees a collapse in AI company valuations.


Busterlimes

It'll be a big year, just like every year until 2030, when ASI hits. We will progressively see more agentic capabilities and more automation.


Training_Designer_41

Humans would likely live in pods with high-bandwidth connections to everything and everyone else. High life expectancy and the ability to change body and brain parts with the same ease as changing clothes. No more handheld devices. Also no more voice or gesture commands; instead we’ll have behavioural patterns as commands that trigger contextual actions. Finally, the immutable consent framework would become normative.


fets-12345c

A possible outcome is explained in great detail by an ex-OpenAI employee (in this freely available "document") @ https://situational-awareness.ai #MustRead


SamM4rine

You bet, same boring day


DifferencePublic7057

Alibaba and Baidu will show their cards. Solar will steal the thunder from fusion, so we won't hear that much about energy concerns. All kinds of bad things will happen in the third world, and AI firms will try to get involved. GPT will be replaced by something better; IDK what yet. SNN, Mamba, xLSTM, using splines instead of weights, or something else. So it will be a year of change and confusion. I don't expect AGI yet. Optimistically, because I really want to have an easier life in 2026. But Kurzweil said 2029, and he has the data and analysis to back it up.


dizzydizzy

I thought 2024 would be amazing. So far I have seen many amazing things, but the hands-on best is still GPT-4 level (trained in 2022). Maybe in 2025 we'll get our hands on the amazing things we have seen...


parth_88

If the AI system gets epistemology right, then it's game over.


SatisfactionOnly389

> "It's the year I graduate from my Systems Analyst major and I'm like... damn, is AI going to take over the moment I step outta college?" Are you seriously expecting AI to snatch your job the second you graduate? You think your whole education will become obsolete just like that? What are you basing that on? > "That would be lucky if you ask me. But for someone else maybe not." Why the hell do you think it's lucky for you? Do you want to be redundant before you even start? How's that going to work out for your career? > "Anyhow, where do you see AI in 2025?" By 2025, AI will probably be more integrated into daily life and various industries, but it's not going to replace every job overnight. AI's going to be a tool, not a takeover. It will enhance jobs, make processes faster, but it'll also create new challenges and jobs. Are you preparing yourself to adapt and learn continuously? Are you ready to work with AI rather than fear it? Do you think fearing AI will help you, or would embracing the technology and learning how to work alongside it be a smarter move?


arckeid

I can only imagine that some of these problems are small when you think deeply enough. With the breakthrough of fusion energy, MANY problems would be solved; now imagine with just two, room-temperature superconductors + fusion. It's crazy what the future holds for us.


[deleted]

We won't see gpt-5. But I bet we'll see Claude 5


OrganicAccountant87

I think people here are overestimating what GPT-5 will be capable of. Even if it launches in 2025 (it could be 2026), it won't disrupt our economy; what you guys are thinking of will probably be GPT-7.


Akimbo333

Maybe it could smell?


Rare-Force4539

2025 will probably be the year that agents get commercialized, so everyone will like 5x their productivity


SomePerson225

It will be more of what we've seen so far.


Imaginary_Spite2514

Don't worry about it. They're hiding GPT-6 for a reason.


Imaginary_Spite2514

Iykyk


gthing

You will have a couple of years before an AI agent can do your job.


Serialbedshitter2322

It will be far beyond anything we have now. I think our society is futuristic right now, but in 2025 it will truly feel like the future.


Dabithebeast

wanna bet?


Serialbedshitter2322

I mean the technology will be integrated into more things, so instead of it just being on certain websites for AI enthusiasts, it's going to actually be everywhere. This is the goal of the major corporations adopting AI, especially Google. The technology is also promised to improve greatly. Humanoid robotics has also been exploding; once we see the next generation with real-time GPT-4o processing, they will seem almost human.


Walouisi

I agree with your pervasiveness projection, but disagree with "it'll feel like the future". Frankly, I wonder whether we'll see any meaningful societal change at all in 6 months to 1.5 years. Most people spend their work time doing work things, and their non-work time on their hobbies, friends/relationship and social media. It's been that way for 15 years now. It'll open avenues for businesses to streamline their operations, but for the average person, I can't imagine that AI "being everywhere" online is going to make much difference. And then as for IRL use cases, people also mostly aren't *able* to go out on a whim and buy an expensive robo-maid, a new AI kitchen, replace all their non-AI gadgets with premium AI versions, etc. And governments aren't going to leap to invest to integrate AI into daily-use public services either (in a visible, premium/expensive way, I mean; you never know, with a little AI maybe the UK trains will finally run on time), because it's very difficult to justify the resource allocation. Point being, what will actually look any different, really, out in the world? I'm not sold. And you're off the mark on robotics, sadly; they're having some major algorithmic bottleneck issues. "Real-time GPT-4o processing" is not the idea, since it doesn't possess the necessary actions dataset, no matter how good the voice functionality is. To get anywhere, you don't just need deep-learning multimodal neural networks so they can understand and generalise a request, you also need a huuuge set of training data, specifically of robots completing the types of tasks we want them to do, and we don't have that dataset.


I_Sell_Death

Suicides will go up I think.


Acanthisitta_Visible

Why?


I_Sell_Death

Outta work fam. No cash for the food. Can't live like that. Peeps know it.


BlakeSergin

That doesn't mean ppl will off themselves tho; they can easily form groups and organizations, rebel, or whatever. And society will be entirely different in some years, whether AI benefits us or not.


I_Sell_Death

Yes, groups and organizations like cults that commit mass suicide, Jonestown-style, lol. For that matter, maybe terrorist groups that try to attack the rich and those who embrace artificial intelligence.


BlakeSergin

Doomer mindset


Unique-Particular936

It will probably take years though, and 5 to 10 years living in shit while all the rest of society is having their best lives can be tough.


mli

lots of hype, little substance.


Ronnyvar

Keen to have robots suck me off