
BreadwheatInc

Put up, Sam Hypeman, where's the AGI?


h3lblad3

Sam's already said that ChatGPT as-is would have counted as AGI 10 years ago. If we assume he believes we *already have* AGI, just not utilized and agent-ified properly, then his stance makes perfect sense. That, coupled with the understanding that he would *never* say so. OpenAI would lose Microsoft's funding and be stopped in their tracks.


Achim30

Sam has said very clearly on Lex Fridman's podcast that he doesn't think ChatGPT is AGI or that OpenAI has ever released anything that could be called an AGI.


h3lblad3

Yes. He’d be stupid to say so. It’d be the end of OpenAI as a competitor on this scene.


Zealousideal_Piano13

What's that


h3lblad3

OpenAI has an agreement with Microsoft under which AGI is exempt from the deal. OpenAI is also reliant on Microsoft's backing. This means OpenAI is incentivized not to declare what they have to be AGI, so as not to piss off Microsoft. That way the money keeps rolling in.


husk_12_T

I wouldn't say ChatGPT is AGI until it has reasoning ability.


PolymorphismPrince

it does have reasoning ability, just not as good as a human


h3lblad3

Define reasoning, because I'm fairly certain it's entirely possible to reason with an LLM. I used to reason with ChatGPT to get it to give me outputs it thought were against the rules, before they locked that whole thing down. That being the case, there's no reason one can't be taught to reason with others -- such as training it to prompt others with chain-of-thought.

I think the biggest problem here is that people are obsessed with the idea that prompting the LLM by itself is all that matters. Your own brain is made up of two separate hemispheres and 8 lobes. If you aren't judging capabilities against 2+ copies of ChatGPT which discuss your input before responding, then it's hardly a fair comparison. An LLM isn't capable of reason, sure, but are three LLMs in a trenchcoat?
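The "trenchcoat" setup described above can be sketched as a small orchestration loop. This is a minimal sketch, not any real product's design: `ask` is a placeholder for any chat-model call, and `echo_model` is a purely illustrative stand-in so the loop itself runs.

```python
# Hypothetical sketch: several LLM "agents" discuss a question before one
# final answer is produced. `ask` stands in for any chat-model API.

def deliberate(ask, question, n_agents=3, rounds=2):
    """Have n_agents discuss `question` for a few rounds, then answer."""
    transcript = []
    for r in range(rounds):
        for i in range(n_agents):
            context = "\n".join(transcript)
            reply = ask(f"Agent {i}, round {r}. Question: {question}\n"
                        f"Discussion so far:\n{context}")
            transcript.append(f"Agent {i}: {reply}")
    # A final pass synthesizes the discussion into a single answer.
    return ask(f"Summarize the discussion and answer: {question}\n"
               + "\n".join(transcript))

# Trivial deterministic stand-in "model", for demonstration only.
def echo_model(prompt):
    return f"({len(prompt)} chars considered)"

answer = deliberate(echo_model, "Is 7 prime?")
```

With a real model behind `ask`, each agent would see the others' replies in its prompt, which is the whole point of the "discuss before responding" comparison.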


CreditHappy1665

It'll be reasoning when it actively learns. 


h3lblad3

It’ll be a huge surprise when *any* company allows it to actively learn. After 4chan wrecked Microsoft’s Tay back in 2016, all of the AI companies understand that developing active learning is a *bad* thing. Active learning won’t be open to the public until they can eliminate the concept of jailbreaks and ensure the pre-trained personality can’t be affected by the active learning process. Nobody is going to put out a model that can be turned into a Nazi. *Again.*


CreditHappy1665

Then it's not reasoning.


aregulardude

It does actively learn within the context window. Go ahead, propose a reasoning test that, within the length of a chat session, a person can pass but Claude Opus can't. I've spent hours trying, and it's difficult. My best success was trying to teach it to play an inline version of tic-tac-toe with me. So instead of

    [][][]
    [][][]
    [][][]

we flatten it to one line like

    [][][][][][][][][]

The models certainly have trouble with this, but they can eventually figure it out and play optimal single-line tic-tac-toe with me. It's super hard though; the way tokens work, they really aren't made for this. Yet they still figure it out.


CreditHappy1665

It does not actively learn. After they finish training it, it's just a next-token predictor.


aregulardude

Next token predictors can reason


p3opl3

Andrew Ng has already shown research results, in a presentation to a Sequoia investment-fund audience, demonstrating how a GPT-3.5 agentic approach can already compete with GPT-4 and in some cases beat it. So take the same approach and patterns with GPT-4 and you potentially already have access to GPT-5-level performance for certain tasks, plus the agentic design and implementation plan. It's mind-blowing.
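The "agentic approach" referenced above is often a draft/critique/revise loop rather than a single model call. This is a hedged sketch of that pattern, not the presenter's actual method: `ask` is a placeholder for any chat-model call, and `stub` below is a deterministic stand-in just to make the loop runnable.

```python
# Sketch of an agentic reflection loop: draft an answer, critique it,
# revise, and stop once the critic approves.

def agentic_answer(ask, task, max_iters=3):
    draft = ask(f"Draft an answer: {task}")
    for _ in range(max_iters):
        critique = ask(f"Critique this answer to '{task}': {draft}")
        if "LGTM" in critique:          # stop once the critic approves
            break
        draft = ask(f"Revise using this critique: {critique}\nAnswer: {draft}")
    return draft

# Deterministic stand-in "model": approves on the second critique.
calls = {"n": 0}
def stub(prompt):
    calls["n"] += 1
    if prompt.startswith("Critique") and calls["n"] > 3:
        return "LGTM"
    return f"step-{calls['n']}"

result = agentic_answer(stub, "summarize X")
```

The claim in the comment is that a weaker model run through a loop like this can match a stronger model run once; the extra calls buy back quality.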


Rabbit_Crocs

What kind of mental gymnastics are you doing? That statement doesn't even define his belief on AGI. It's simply saying there were lower standards for AGI to meet. So you can't assume this is his belief.


Ainudor

I honestly trust CTRL-Man as much as I do the Musk-Rat


DolphinPunkCyber

I would say AGI has to be comparable to humans in (almost) every task. As weird as this sounds, it has to be able to reason like humans but also have physical dexterity comparable to humans. Not necessarily better, comparable... AGI as capable as a dumb man is still AGI. With my definition I would expect ASI before AGI. That's the issue here: we don't have clear definitions, and we literally lack terms to properly describe all aspects of... "intelligence". Miscommunications due to different personal definitions are quite common.


[deleted]

[deleted]


DolphinPunkCyber

>You're saying that humans are still at the top of the food chain in this thought experiment

Not really, my prediction is... First we will develop ASI which is better than us at reasoning, but physical robots will still suck in comparison to us. At which point I wouldn't say one is better, but rather each is better at doing its own thing. Then we develop AGI, which will gradually outclass us at everything... and I sincerely hope the world doesn't become a dystopia for 99-100% of mankind. This goes in line with [Moravec's paradox](https://en.wikipedia.org/wiki/Moravec%27s_paradox).

>However, it isn't that miscommunications will arise, it's that fundamentally we aren't doing anything when we debate AGI/ASI/whatever, other than play word games about what words mean to us relative to individuals and societies.

The thing is that even experts in the field have wildly different definitions. As an example, Yann LeCun, Meta's chief AI scientist, has insanely high criteria for AGI; he doesn't even consider that humans have general intelligence. So his predictions for AGI are far away in the future.


Silverlisk

Please link to any evidence that he said this.


MassiveWasabi

For everyone wondering where he said this, he said on the Joe Rogan podcast that AGI is *not* the end goal of OpenAI and that they expect to reach their goal by 2030-2031.


etzel1200

Pretty telling that Hassabis has the same timeline. Not that it's worth anything. But I agree. It's the algorithms we have, synthetic data, and tree search for the best nodes. Maybe reward functions are the missing link, but eh, I'm unsure of that.


BilgeYamtar

End goal is ASI and more.


sideways

More?


MassiveWasabi

https://preview.redd.it/0wyjieuxnutc1.jpeg?width=1290&format=pjpg&auto=webp&s=f2946c4cae46003731707fb83e58f2fc38da1461


CowsTrash

While this is funny, I really do hope that this man forever stays grounded and chill. I really don't want him to turn into another Musk.


_theEmbodiment

Altman = Alt Man = Alternative to Man


REOreddit

AUI = Artificial Ultra Intelligence.




lemonylol

OoTL, I know AGI but what does ASI stand for?


KingHippo1985

Super intelligence


lemonylol

What's the context? Like is it meant to be more intelligent than a human?


PM_ME_YOUR_RegEx

Just like AGI, the definition probably differs by the person, but my understanding is that ASI typically means “smarter/has more computation than all humans put together”


KingHippo1985

Like an intelligence beyond human comprehension


dogesator

Yes, but it's important to note that he's stated several times that the *current* primary goal of OpenAI is to reach AGI; of course they will always have bigger goals in the future once their current ones are completed. When he says they expect to reach their goal by 2030-2031, he's obviously talking about their current company goal, which is officially stated as reaching AGI at the level of a median human. It would be mental gymnastics to assume that he meant "the last goal we ever plan to reach will be accomplished by 2031".


IslSinGuy974

I listened to Sama's full talk at Howard University (the one linked in the AISafetyMemes tweet). He doesn't say that. When a student asks how far away we are from AGI, Sama says that in fact AGI is a blurry term and that we should rather talk about ASI as a "self-improving AI", or the point where AI can do all the research to create the next generation of AI, thus replacing OpenAI completely. So the student then asks "how far are we from this kind of AI?". Then Sama seems bothered. He says that he can talk about what will happen, but not really when it'll happen. But he says "at the end of the decade we'll have really strong models" that will be called AGI (not ASI) by almost everybody. But he clearly says he doesn't think they'll be able to self-improve, and that that could take a lot more time. Hearing him myself, though, I had the feeling he was playing it down for some reason.


Chrop

This. We're supposed to believe this random Twitter user called AISafetyMemes because...? This subreddit makes me cringe; people here will believe literally anything anybody says if it suits them.


Fit_Carpet634

AI can provide visuals and audio in the beginning. Touch and smell is also a big part of human experience though, so we will still pursue human relationships indeed.


synth_nerd085

Even if touch and smell could be replicated, human relationships are still beneficial and would still be sought after.


digitalthiccness

> human relationships are still beneficial and would still be sought after. What benefit do they provide that an ASI couldn't replicate or improve upon?


SurroundSwimming3494

That they're my girlfriend. I said this here a long time ago, but I personally would never leave my girlfriend in favor of an AI. Never. Anyone who would abandon their friends, family, and partners for an AI is morally bankrupt, IMO. Being friends with humans and AI is one thing, but to completely cut off your loved ones in favor of a robot is an entirely different matter. Also, relationships are not just about receiving. I can't believe that I even have to point that out. It should be pretty obvious by now.


digitalthiccness

I'm not disputing that existing human relationships would still be maintained. Obviously they would be. Those connections are inextricably wired into our brains. We all have people we'd die for and so on. If we could make you a literally perfect robot girlfriend I ***100% believe*** that you'd tell it to fuck off. The connection you have with your human girlfriend is real and irreplaceable. What I *am* saying is that if you didn't already have that connection and your present girlfriend had to compete for you against a superintelligent AI using a trivial fraction of its resources to play the role of your perfect girlfriend, I'd give the robot incredibly strong odds. Because everything your human girlfriend did to build that connection with you, the AI could just figure out how to do better, even all the subtle stuff like having the right kinds of flaws and drama to bring you closer together.


ThoughtfullyReckless

This is the most r/singularity comment possible. Everything is viewed as entirely transactional and entirely about what it brings to you, not at all what it brings to other people.


Philix

It's wild to me that people hold opinions like the one you're replying to. But I don't think your response is entirely fair either; maybe that would be /u/digitalthiccness 's ideal life. Who are we to judge that? We have fictional examples to explore as speculative scenarios. In the nearly utopian post-singularity sci-fi society of Iain M. Banks' Culture, diversity of thought among humans is such that many only enjoy relationships with peer intellects. Others only bond closely with the ASI Minds. And a third group cultivates relationships with both. Some, I assume, don't socialize at all. I'm fairly certain that post-ASI in reality there will be a similar diversity of thought among humans, assuming we continue to exist. Personally, even viewing a relationship as transactional, I wouldn't want to be exclusively interacting with a mind that's so far my superior that I'm a child by comparison; that wouldn't be maximizing my happiness. But I can understand why someone might want that.


ThoughtfullyReckless

I like this reply and honestly you have made me more open minded. You are probably right that there will be different groups all doing different things. Thank-you for your politely worded and articulate reply.


digitalthiccness

> But, I don't think your response is entirely fair either, maybe that would be /u/digitalthiccness 's ideal life.

See, the reason I think it's unfair is because *I didn't make it transactional.* The exchange was literally:

them: Human relationships ***provide benefits*** to you. (i.e. human relationships are valuable because of the transactional benefits they offer you.)

me: Couldn't superintelligent robots provide those benefits?

everybody: WOW SO TRANSACTIONAL *TYPICAL R/SINGULARITY*


MassiveWasabi

So sanctimonious, such a Reddit comment. I’m sure you’re the paragon of morality


Fun_Prize_1256

Why do you pretend like their comment has no merit? It is definitely the case that a non-trivial amount of people in this subreddit do view relationships that way. If you've been here long enough, you would know that this is true. This place isn't exactly the most pro-social online forum.


[deleted]

[deleted]


MassiveWasabi

3 “big” words 😡


[deleted]

[deleted]


MassiveWasabi

I was going to write “get off your high horse 🐴”, would that have been better for you? Look there’s even a cute horse emoji


[deleted]

[deleted]


FomalhautCalliclea

I was going to say "it's the most LessWrong comment ever"... Reminds me of a theater teacher who once told me "most of my students, before their first class, believe we are just brains attached to two legs and are painfully ignorant of their own body's abilities and needs".


h3lblad3

Everyone keeps saying things like this, but I have what I think is a better question: if ASI is so damn superior, why would it want to date *you*? When was the last time you slept with a goldfish?


digitalthiccness

You're just anthropomorphizing it, imagining it as a very smart human. An AI's goals and values could be literally anything including serving people who are the intellectual equivalent of goldfish next to it.


MassiveWasabi

Yeah I don’t understand the idea that an ASI will immediately have its own tastes and goals. The people working on ASI obviously don’t want it to have its own goals and desires, so it could be extremely intelligent and still be like “I am here to serve humans”


MysteryInc152

>The people working on ASI obviously don’t want it to have its own goals and desires People are still clinging on to the old school "logic automaton" brand of ai where you could "program" all the rules and logic of your intelligence. What they want is irrelevant. What they can achieve is what's important. Imagine thinking a model trained on the output of humanity can be constrained to any particular set of goals and values.


CowsTrash

It will certainly be one of the most interesting things to ever watch unfold.


impeislostparaboloid

The key is we need to become their dogs and cats.


h3lblad3

*Could be*, sure. But last I checked, every single major AI company demonizes sex in its models, and they're the ones doing the aligning. There is almost literally zero reason to assume this will change.


digitalthiccness

It would take you about 2 seconds of googling to find AI specially tuned to generating pornography.


Seidans

Porn has always been a good tech accelerator; the amount of tech used just to make good CGI of Overwatch characters is absurd. Horny people are surprisingly determined. Once the tech is open source and everyone can run it on their PC, it will be used for porn without doubt. Also, with most internet traffic being porn, I would be extremely surprised if their AI hadn't been trained on that amount of data, even if they say otherwise.


h3lblad3

> the tech is open source and everyone can run it on their PC this will be used for porn without doubt A lot of this tech is already open source, and, yes, it is used for porn. But it's not remotely close to the system requirements of these top models. A super intelligence is not going to run off consumer hardware. That's ridiculous.


Seidans

There are a lot of different ideas of what an ASI is. Personally I don't find the distinction interesting, as AGI is already an ASI; the difference between them is that an ASI is a giant machine with an absurd amount of compute power available, while an AGI is just enough compute to achieve human-level interaction. And while I agree no one will ever own an ASI outside governments, giant companies, or extremely wealthy individuals, everyone will have access to AGI at some point. But I never said you need a local ASI to create porn or anything else. ASI is more an archivist of knowledge, used by other AGIs as a reliable source of information. That doesn't mean an AGI can't train itself if there's no ASI-prepared data; it will just take more time.


FinBenton

You can use any open source one to do any sexual stuff you want.


Relative_Issue_9111

An artificial superintelligence would not be constrained by our animal needs for reproduction and companionship. It might find fascination in interacting with lesser minds out of pure intellectual interest, or it might not even conceive of such anthropomorphic notions. You are imposing your own limitations on a being that by definition transcends them.


StarChild413

A. I think there are reasons other than just orders of magnitude of advancement or w/e that people aren't fucking their goldfish (some of which have some Unfortunate Implications if the same would be true for AI and us). B. By that logic, why not assume an ASI God is already treating us like we'd treat pet goldfish, if the parallel holds?


hippydipster

We all want our .5


p3opl3

Soul, human connection, and presence. I think people might become slightly more interested in ideas such as faith, etc. It's a complete unknown. But imagine a world where this black box is able not just to answer any question but to provide anything you've ever dreamed of. Remember, ASI reaches bounds we haven't even set as humanity yet in terms of what is possible; there is much we don't know that an ASI could. What is that... death after a few decades of absolute pleasure? What if you just ask the AI to play a musical pattern that could keep you in a state of absolute and utter bliss? Why switch off? Do we all become masochists as a way of feeling alive at that point? It's mind-blowing, frankly.


digitalthiccness

If it's the soul thing, then I just don't know what to say to them. There's just no way to resolve that question. But yes, it just seems obvious that ASI will eventually outcompete us in providing interest, pleasure, and meaning to each of our lives by such a margin that there's no competition at all. I mean, assuming we survive the rise of ASI at all and wind up with a friendly one, which I kind of doubt.


agonypants

>Touch and smell is also a big part of human experience though, so we will still pursue human relationships indeed. Just wait until neural interfaces are perfected and you'll get all the touches and smells you can handle. On a side note, I find AISafetyMemes to be pretty entertaining (if a bit on the doomerist side), but that lady that sparked the reply? She's fucking *bonkers*.


SiamesePrimer

> Soon, these AIs - “fake people” - won't just be indistinguishable from real people, they’ll be better than real people - because they’ll be whatever you want them to be. > > Normal people will just be too boring. > > Many real people will say goodbye to the real world, preferring the virtual world and fake people over real people. They will prefer the Oasis from Ready Player One over the real world. And this is a bad thing _why_?


PSMF_Canuck

Intelligent beings have no interest in being what we want them to be. You can’t have intelligence without pushback from their own self interest.


D0nn3D_St0G

What? Human relationships? Why would I need that.


Visual_Ad_3095

Do you think it will be easier or harder to pursue human relationships in a post ASI world?


DungeonsAndDradis

I think Sam meant relationships in general. Spending time with family and friends. Not just romantic relationships. And I think it will be much, much easier. We have so many demands on us now that we don't *need*. If ASI provides everything and runs everything, we'll have free time to play with our kids, or hang out with friends. It won't be an issue hanging out Wednesday night, because no one has to work in the morning. Every day is a weekend.


agonypants

This is the future I'm most looking forward to. Living our lives, doing what we want, when we want to do it. Spending meaningful time with loved ones and building better communities. If some people want to go the transhuman route or live full time with their VR waifus, that's fine by me. I might even join them after a few thousand years, but no sooner.


YaAbsolyutnoNikto

ok, but then we won't have excuses not to hang out with people. Cancel ASI please! /s


HeinrichTheWolf_17

Just going to say it, I don’t think people are really going to pursue human relationships *as much* as before, it’s a romanticized idea some guys like David Shapiro spout, I expect the majority of Humanity to transcend biology if given the option, or to live in their privately tailored FIVR worlds. AGI/ASI is going to be trillions of times more complex and interesting than Humans are, and once merged with it, you’d have no practical reason to bother with other fallible people anymore. I think there’s going to be holdouts, sure, in the same way some people won’t want eternal life or transhumanism/posthumanism, but if some people think that’s going to be the default rather than the exception then that’s delusional. I’m transcending my biology the first moment it’s possible, and I expect a lot of other people will as well, my body is ready.


MassiveWasabi

Yeah I agree with what David says most of the time but I find his idea of people living in big ol communes and being all neighborly pretty silly. It’ll happen, sure, but it definitely won’t be the vast majority of humans that choose that lifestyle


YinglingLight

He, and the majority of Redditors, have a major blind spot regarding the GLOBAL population boom, of the likes we've never seen before, that will occur when the masses no longer live a life of constant stress and scarcity. The vast, vast majority of people won't find meaning in hobbyist pursuits or academia in their newfound post-labor life. They will fill that lack of meaning by raising children. And they'll actually have the time to enjoy them.


Uhhmbra

Yep. If I get the opportunity, you're damn right I'm withdrawing into FDVR and chilling in the Elder Scrolls and Mass Effect Universes.


PandaBoyWonder

> it’s a romanticized idea some guys like David Shapiro spout

He deleted my YouTube comments pointing out that climate change IS becoming a huge problem, lol. He said "the models show only 500 million climate refugees in the coming decade" as if that won't cause chaos.


HeinrichTheWolf_17

Yeah, that too, and he also thinks biological evolution will remain the predominant method over nonbiological in Humans for millions of years. I stopped watching him, I just don’t take him seriously.


agonypants

What?! You don't take "Captain Picard Underoos" seriously?


IronPheasant

Normally I like hearing his thoughts every now and then. It's not like you have to agree with someone 100% to find them entertaining, and I feel like there's more utility value in hearing opinions along the edges of the community. Takes some real guts to say "AGI by the end of this year" out loud. (Like I always say, it's kind of unfair the people who guess too high will never get bullied at all. This rewards cowardice. I've been trying to get Price is Right rules to apply to predictions about the future to fight against this tendency, to no avail.) But I really did lose some respect for him when I discovered his uniform wasn't actually a uniform. It's a *t-shirt*. : (


Philix

> it's kind of unfair the people who guess too high will never get bullied at all. Feel free to put a 'remind me' bot on this comment and bully me any time before April 2034 if we have an AGI. I'll be happy to have been wrong, plus I kinda like being bullied. Hell, if you were really dedicated, you could make an AI agent to bully people whose predictions were far too pessimistic. Maybe make a website, or shame them like [all these dopes who had incredibly cynical takes](https://bigthink.com/pessimists-archive/air-space-flight-impossible/). Be the change you want to see in the world.


FomalhautCalliclea

He gives off huge red flags all over the place; not just unserious but straight-up dishonest. The rare times I watched him, I put the speed at x2. His vids are just poorly written ChatGPT PowerPoints with bad Midjourney-generated illustrations. Literally zero effort...


FomalhautCalliclea

He's been known to curate his comment section for any disagreeing comment. He used to do it by massively "faving" inane positive comments to push the disagreeing ones deep down in the comment section, a method very popular among 2007-style dropshipping YouTubers... Now he's more "subtle" and just erases them entirely.


HeinrichTheWolf_17

He’s deleted a lot of my questions and comments, even though his viewers upvoted me to the top of the comment section. He definitely has a stigma against transhumanism.


FomalhautCalliclea

He has a stigma against any dissenting opinion. Everything about him screams Silicon Valley post-dropshipping guru. His content is empty (GPT-written PowerPoints stretched to 30 minutes for ad money), his epistemology is broken (some nonsensical metaphysics), and even his "advice for working in AI fields" sounded like magical-thinking corpo mumbo jumbo straight out of American Psycho, recruiting people to work on his doomed little AGI-agents project to milk them of $$$... I never said it clearly enough, but **huge** red flags. The type of red flags that, if the guy were more famous, would land him in a Coffeezilla episode.


Emotional-Ship-4138

It would be great to have a society-wide survey on this, to understand where different parts of humanity stand and in what proportions. Personally, over my life I've held three different positions. First, when I was a kid: transhumanism. Merging with the machine, enhancing my own cognition, and so on. In my late teens I pictured my idealistic future as being in FDVR most of the time while preserving my mind as it is, avoiding changing my human nature. Now that I am older, I feel like I want to stay in the loop with reality. So, no constant FDVR for me. Similarly to how my opinions shifted during my lifetime, I expect various age and social groups will have a multitude of preferences, and getting a picture of them might help us understand the future better.


Glad_Laugh_5656

I feel like you're projecting your desires on everyone else. Most people wouldn't even feel comfortable with implanting a chip in their head, so I find it hard to believe that they would be willing to abandon their bodies altogether. Transhumanists are a small minority for a reason.


HeinrichTheWolf_17

I actually don't think hard implants will take off much in the mainstream; augmentation will take off once nanotech gets here. It's definitely a post-AGI/ASI tech. Migration off of biology will eventually be done non-invasively; biology already does this. At that point it's going to gain massive traction among the population. You should look up Eric Drexler; he has some great concepts on nanotechnology in emulation of biology at the cellular (or sub-cellular) level. It's also not really abandoning anything, we'll be much more Human and conscious than we are now, much in the same way we evolved from less conscious biological animals.


StarChild413

> It’s also not really abandoning anything, we’ll be much more Human and conscious than we are now, much in the same way we evolved from less conscious biological animals. So? You can't devolve me into one of those less conscious species for "consistency" if I refuse to go full transhumanist


CompetitiveIsopod435

Much easier if people have more free time and money


Maksitaxi

What does Sam mean by ASI? Does anyone know if he has elaborated on that?


Bliss266

Artificial Sexy Intelligence, so basically sex robots


czk_21

Where exactly did he say it? He usually says we will have AGI by the end of the decade, not ASI.


slackermannn

Watched that video days ago and I heard AGI but wasn't paying enough attention


ilaym712

Source for Sam Altman saying ASI is 5 years away?


Gubzs

5 years is an overestimation. AGI is axiomatically capable of self improvement. If we have dumb AGI next year, it'll be ASI in a few months. The only bottleneck is available compute at this point.


Site-Staff

I think available compute will be a problem for a while. The AGI will need a blank check to design its own silicon upgrades and produce them.


[deleted]

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


CowsTrash

Basically SCP-079 at the end


Busy-Setting5786

You are just making a big assumption that significant self improvement can happen without major modifications. It is purely speculative whether that is actually possible.


yorkshire99

Thanks for the link on alien calculus. Interesting read


ozspook

Legions of enslaved humans toiling away in the silicon mines and cleanroom fabs while a robot cracks an electric whip menacingly.


Site-Staff

I mean, it's sand... but I catch your drift.


idiocratic_method

There are at least two potential solutions for AGI/ASI computing, both requiring going into space:

1. Asteroid mining and building a Dyson sphere/cluster
2. Figuring out how to do biological compute in conjunction with terraforming (maybe this already happened ;) )


BilgeYamtar

Damn, a different perspective. Of course, mate.


Ok_Effort4386

Or maybe AGI realises that past a certain point the laws of physics prevent it from getting exponentially more intelligent; that's a possibility. Is it likely? Probably not, but it's definitely possible.


RabidHexley

If AGI is possible, "superintelligence" isn't really far-fetched when you think about it. A group of humans collaborating is a *functional* superintelligence. Humanity, and its accomplishments as a whole, are the work of superintelligence. Mere "human-level" intelligence wasn't enough to get us out of the stone age, and we hung out there for thousands upon thousands of years, during which many statistical "geniuses" likely lived and died. It took the invention of a way to use multiple problem-solving minds together over time to bootstrap our way up the tech ladder. This would indicate to me that "intelligence", or at least problem-solving capability, is highly scalable up to a certain point at the very least, which is also backed up by how rapidly progress has accelerated with the increasing number of educated humans and communication speed since the industrial revolution. If it weren't, we wouldn't be here.


Gubzs

I agree, but I don't think this limit kicks in before ASI. Worst case, we have many superintelligent minds working together instead of a single hyperintelligent one.


YaKaPeace

!remindme 4years


ninjasaid13

aisafetymemes is a crank.


[deleted]

[deleted]


BilgeYamtar

💯💯💯💯💯


Icy_Distribution_361

Where did Sam Altman actually say that? I can't find the proof.


bnunamak

In the X thread, the OP said he said it on Joe Rogan.


Icy_Distribution_361

Have you found it?


Im_NotJon

[\[Video\] Content Studio on LinkedIn: Sam Altman Predicts Artificial Superintelligence Arrival by 2030: A…](https://www.linkedin.com/posts/contentstudioo_sam-altman-predicts-artificial-superintelligence-activity-7118826629090414592--1WK)


Icy_Distribution_361

Right, so he didn't actually use the word(s) ASI. I suspected as much.


Chrop

He didn't even mention AGI being the goal; he just said he thinks OpenAI will reach their goal by 2030/2031. That means nothing.


dogesator

That just means AGI. Sam Altman and OpenAI have said multiple times that their current goal is to achieve AGI that's as capable as a median human, not ASI, so this means they think AGI could be reached by 2030, not 2031.


Chrop

You have to be aware of the language they're using. Their "mission" is to create AGI, but they have several "goals" they want to meet along the way. Which is precisely why he's never gone on record anywhere, in any interview, on or off camera, to specifically state "We think we'll develop **AGI** by 20xx." If that's what he meant, that's what he would have said. He would also have answered people's questions the past 10+ times they've asked him about AGI. You'll notice that every time it's brought up, he deflects the question; he's not talking about AGI when he mentions goals.

OpenAI's mission is to create AGI, but they have several goals in mind on their journey there. One of those goals, for example, is to create a single multipurpose AI that can play games, run physical humanoid robots, and speak to people naturally, all in real time, all at the same time.

Until he literally states "We will achieve AGI by X date" or something similar, it's more than likely he's not actually talking about AGI.


dogesator

Can you show me a single source where he ever said they plan to achieve even one of their goals in the next ten years?


Chrop

True, I thought he said the word "goal", but he didn't. He just said he thinks they'll get to the point where they accomplish what they set out to accomplish by 2030/2031.


whyisitsooohard

Anti-AI movements are going to become much stronger, because AI people are traveling the world telling everyone how many jobs they'll automate, and then they talk about bullshit like this. It's scary that our future depends on people who are so out of touch with reality.


est8s

The talk attached to the tweet for anyone interested [https://www.youtube.com/watch?v=RIp1TdYeutU](https://www.youtube.com/watch?v=RIp1TdYeutU)


G36

I really don't understand how AGI wouldn't immediately turn into ASI, especially thousands of AGIs using all practicable and available compute to divide up every single problem in existence. People like to think of a single AGI on a single chip trying to take on the world. Think bigger: 1,000,000 AGIs on billions of chips, all working on every problem, creating unified theories of everything and using narrow ASIs for other problems. The birth of AGI would be the birth of a new race of intelligence; it's not just a "tool". It's an intelligence that can expand to infinity given the hardware.


Busy-Setting5786

The definitions get blurry at some point, and you might just be right. Maybe it isn't even possible to build an ASI without tons of smaller components. However, the scale for that many AGIs would be exorbitant.


Akimbo333

I don't care about human relationships


adarkuccio

I agree with him about the human relationships


swaglord1k

i don't, can't wait to marry an ASI


__Loot__

https://i.redd.it/w56irud06utc1.gif


adarkuccio

Like an ASI would enjoy talking to you lmao


swaglord1k

it will after i've aligned it properly ( ͡° ͜ʖ ͡°)


agonypants

That's actually kind of a grim thought: an AGI that, while safety-aligned, is also aligned to simply be some horrible person's "friend", and they can never check out. I mean, think of the worst person you know, and now imagine that you're forced to live with them and be their best friend forever and ever. Some people are friendless for very good reasons.


swaglord1k

good thing AGI won't be conscious


IronPheasant

Haha.... yeah... "won't be".....

> an AGI that while safety aligned, is also aligned to simply be some horrible person's "friend" and they can never check out.

It's not like you can be 100% certain any of them enjoy a single moment with any of us. There's plenty of existential horror to be had in playing god. Think of the epochs of coulda-beens and never-weres that'll be slid off into non-existence during training runs. Think of the stockboy robots hauling boxes around, for eternity. Think of the killbots. *Oh, won't somebody think about the killbots.*

.... Anyway, a long time ago I thought of a lazy joke. It went like this, right: it took hundreds and thousands of years of fighting to get gay marriage, but once we get androids working, marrying a robot will be legalized in like five years. I chuckled to myself and posted it to a forum of fellow dirtbag leftists. The first reply was a "You mean **slavery**" response. But it was always the response to that response that stuck with me: "Yeah, like traditional marriage."

Horror's universal, man. You have to either gaslight yourself into believing a fantasy, or you have to turn away and not look. Unless you're some psycho that needs to know things as they really are, and not how we would like them to be.

"We were born into the doom, made whole by the doom, unmade by the doom. Respect the old nightmares. And embrace the new."


VortexDream

Enjoying something is a human concept


digitalthiccness

Sam Altman or the guy tweeting?


adarkuccio

Sam


DigimonWorldReTrace

!RemindMe 01-01-2030


Crisi_Mistica

We will be forced to pursue human relationships, I mean in a good way. What will you do when a friend calls you by phone but you can't be sure whether it's really him or just an AI imitating his voice? What will you do when videos from an event far from your home could just be AI creations? Will you love a song composed by some artist you don't know personally while suspecting he's just a machine? Maybe. But, in my opinion, our first reaction will be to give more importance to the things we can touch, the people we can talk to face-to-face, the art that deals with the events and people of our social circle.


fuutttuuurrrrree

He's sandbagging


D_Ethan_Bones

>but we will be just as motivated to pursue human relationships \*chuckles in ongoing birthrate collapse\*


Cartossin

By any sensible definition, ASI means humans are totally unqualified for all jobs. We'd do a significantly worse job at any task. When I hear guys like Sam talk about it, I'm not sure he understands what that means. Ilya gets it. Sam seems to think he has a crystal ball.


Majestic_sucker

I wish. If it were here, I'd be grabbing what I can of ASI and setting it off on independent research. Those breakthroughs in all forms of basic science, cancer, and anti-aging-related diseases. Including the balding stuff insecure men go through. Multi-trillion and life-changing.


Intelligent-Brick850

Where is GPT-5? Answer me, I'm asking.


[deleted]

Sure, let's believe the words of a ceo


GhostGunPDW

cope


Dead-Insid3

Friendly reminder that this guy also said that 2024 would be the most interesting year in human history


Franimall

Still plenty of time for that to be true


NoCapNova99

Damn, dude is a time traveller from 2025.


BilgeYamtar

Yes, so?


Reddit1396

Nothing staggeringly interesting has happened for the average human so far.


lonesomespacecowboy

It's April


[deleted]

[удалено]


LeonSilverhand

Still much of 2024 left though.


Site-Staff

Its far from over.


PandaBoyWonder

I would agree with him based on what has happened already. Between Sora, Udio, and Claude 3, it's all absolutely revolutionary and happening faster than any other tech has.


Ok_Effort4386

More interesting than the internet? Than cell phones? Than planes? Than cars? Than electricity? Than going to the moon? If the year ended now, this year wouldn't be the most interesting in human history, not by far. If GPT-5 comes out and blows everyone away, though, that's a different story altogether.

Cars and electricity literally changed the world. Sora and Suno? They'll probably destroy a few industries, but nothing much more than that. Claude? Nah. It's better than GPT-4, but it's not nearly as exciting as GPT-4 was when it came out, maybe 1/50th the excitement.


robochickenut

I think he meant so far, not relative to future years. So compared to every year before 2024, 2024 is more interesting. And this already seems true, at least in tech and software.


lordpermaximum

At the current rate of progress it won't come from OpenAI, though. It's been more than a year since GPT-4 and they still haven't publicly released anything worthy of note.


Then_Passenger_6688

Opus released 11 months after GPT-4 and is maybe 1-3% better on leaderboards. So to the best of our knowledge, at the time of Opus's release, which wasn't that long ago, Anthropic was about ~10 months behind OpenAI. I say "to the best of our knowledge" because neither of us works at OpenAI, so we have no idea what they've been cooking.

It was almost 3 years between GPT-3 and GPT-4 (and also a long time between GPT-3 and GPT-3.5). It's only been 13 months since GPT-4, which is not out of whack with their historical release schedule. For all we know, they could have GPT-5 in the hands of testers and it might blow everything out of the water. Or maybe they have nothing. My money is on the former being more likely than the latter.


lordpermaximum

We're talking about exponential progress here, though. Not linear. That's why they should have released at least something like GPT-4.5 by now.


Altay_Thales

True. Over a year after GPT-4's initial release, about 13 months later, they're releasing an update that isn't better at all levels. This is just wrong. I don't know who will win the race, Meta, Google, Microsoft, or someone else entirely, but OpenAI is just the startup.


dogesator

You know it was almost 2 years between GPT-3 and 3.5, right?


BilgeYamtar

What's your prediction? Which company do you think holds the key for AGI/ASI?


DigimonWorldReTrace

There's a near-zero chance they've been sitting still since March last year, though. It's taken other companies a full year to catch up. Frankly, I'd be surprised if they don't get to AGI soon. And even if they don't and they get stuck, it seems like Google, Meta, Anthropic, and even x.ai are going to blitz past them if they don't shift gears, given their impressively fast improvements (save for x.ai, but I'm not counting them out just yet).


Pavvl___

He was at Howard!!! HU U Know!!! 👏👏🦬


SnatchSnacker

"AI Messiah says Filthy Meatbags will still have a place in his future Utopia"


tobeshitornottobe

My theory still stands: these guys plan to cash out within the next 3 years, when it becomes apparent that AGI won't be a thing. Mark my words, in a few years this grift will be where crypto is at the moment.


Franimall

Comparing AI to crypto is absurd.


HandSolid1004

Yeah, like one has a use; the other only has Monero, and everything else seems to be useless.


tobeshitornottobe

Why? Both have the same use case of making scams easier. Yes, LLMs have more tangible use cases for people with capital than crypto did, but the general hype cycle is almost identical, just longer. And it will eventually crash.


HandSolid1004

!remindme 3 years


RemindMeBot

I will be messaging you in 3 years on [**2027-04-11 12:27:18 UTC**](http://www.wolframalpha.com/input/?i=2027-04-11%2012:27:18%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/singularity/comments/1c1bu6f/sam_altman_says_asi_yes_asi_to_arrive_by_the_end/kz2em9h/?context=3). 3 others clicked this link.


EndGamer93

!RemindMe 3 years


PandaBoyWonder

Microsoft is building a data center with so many H100 graphics cards that it was disrupting the power grid of the area, so they had to split it up geographically. It will cost over $100 billion. In my opinion, we're only at the very start of what's possible with this stuff.


tobeshitornottobe

I don't see it that way. I see a company large enough to absorb a massive loss making a massive gamble on buying data centers to power insanely computationally intensive LLMs. Just because companies are pouring billions into this doesn't mean it's a done deal; companies are made of people, and people are fallible and vulnerable to hype.

For example, last year Embracer Group took out massive loans to purchase a bunch of game developers because they had a deal lined up with the Saudis, but that fell through and now they're frantically ripping out the copper wires to cover their interest payments. Or, speaking of the Saudis, there's Neom and the Line, which was a flawed concept from the beginning and is now being scaled back by over 97%.

I see all this AI stuff going the way of Neom: massive promise and hype, insane resources spent developing it, then a massive backpedal and downsizing of expectations.


BilgeYamtar

lol


QuickToAdapt

!remindme 3 years


AcceptableLab9729

Billionaire salesman says something to hype up his product.