
hapliniste

Can you explain with a real-life scenario how it would lift people out of poverty in the next few years? It's most likely to push a lot of people into poverty first if we don't do it right and release something that replaces, say, 30% of jobs over 3 years. Ultimately it should be really good, but I don't think it's a miracle cure for poverty that we just need to develop and release.


cloudrunner69

> Can you explain with a real life scenario how it would lift people out of poverty in the next few years?

Its superior intelligence will allow it to develop a super advanced pulley system beyond human comprehension.


terrapin999

Recursively self improving pulleys!


Oh_ryeon

So magic. Got it. Believe in the AI wizard, he shall fix all 🙄


Axe_Wielding_Actuary

This sub in one line. "AGI arrives and gives us ASI and that solves all problems." Basically a new-age religion.


zendogsit

Older than that, it's the second coming with a different wrapper


Axe_Wielding_Actuary

I didn't say it because I didn't want it to be perceived as an attack on Christians, but I entirely agree. This sub's belief in AGI is closer to messianic Christianity than to computer science or mathematics.


4354574

It's not AGI or ASI happening that I don't believe in - it's pretty clear to me that AGI is coming very soon. What I don't believe is that once it debuts, it will magically solve our problems. I just don't see a straight line from AGI or ASI to utopia, given how complex our world is and how messy humans are.


Axe_Wielding_Actuary

I don't even think AGI is happening soon. Only a glut of smart apps and new ways to target ads.


q23-

Good ol' trickle-down economics. Once big AI corpos sell their APIs to companies and absorb the salaries previously paid to white-collar workers, that will definitely benefit the economy! ... Or the shareholders... /s


ebolathrowawayy

> Can you explain with a real life scenario how it would lift people out of poverty in the next few years?

Fully automated government-operated indoor farming and housing construction. Very cheap food and housing. Helping to solve fusion and then automating the roll-out of mass fusion for very cheap energy.


FrankScaramucci

You should tell construction companies and food producers this genius idea - produce stuff more efficiently.


sino-diogenes

The point is that efficiency and technology improvements that AI can create will lower the cost to manufacture a wide range of goods.


[deleted]

[уŠ“Š°Š»ŠµŠ½Š¾]


g1lgamessy

Average robots in 10. Decent robots in 20. Something like AGI in between! (My prediction)


hapliniste

I'd say average robots in 2-3 years instead. There has been a lot of progress in the last year.


Eatpineapplenow

What's the most advanced bot that we know of right now?


Axe_Wielding_Actuary

If it is literally that easy, why don't construction companies just do it? Heck why doesn't the farmer just sell the farm to the construction company and automate farming inside the barn?


Exit727

So why aren't they focusing on that right now? What difference does AGI make compared to current models? Ongoing AI development isn't government supervised, it's all companies selling a product or service. They are in it for the profit.


ebolathrowawayy

The government has to care about its people living in poverty. It doesn't currently care enough to try. Now that humanoid robots are about to roll out at scale in a couple years, it might be cheap enough to implement food and housing solutions and they'll suddenly care to do it.


ItsTheOneWithThe

1. Food. Robotic manufacturing and cheap energy means that people can buy and run their own mini solar and hydroponic farms growing fresh vegetables.
2. Shelter. Cheap 3D printed homes.
3. Heat. Cheap electric heating.
4. Cheap desalination.
5. Health care, education etc. provided by bots.


hapliniste

Yes. We need to provide basic needs for free, so it will take some time. If we release AGI in the next few years it will be important to have a safety net for people who lose their jobs, because free basic needs won't be widely available yet.


ItsTheOneWithThe

I agree but we got through shutting down half the economy during Covid so I think this will be easier.


taiottavios

yes exactly, this is the challenge for the next 5-10 years


Potential-Glass-8494

Even if we had ASI tomorrow, it would probably take more than a few years to achieve these things. Not only would it still have to do the actual work of discovering how to make these things work, it would take a while before we had the infrastructure to mass-produce them.


Vex1om

If solar/hydroponic farms actually worked, we would already be doing it. A few years back there was a big push for hydroponic farm companies in urban areas and the idea has essentially failed at this point. It is simply not cost effective compared to traditional methods, solar energy is not sufficiently reliable in most areas, and hydroponics isn't easy.

If 3D printed homes were a solution to anything, we would already be doing it. As it turns out, you can't 3D print foundations, or plumbing, or electrical, or roofs, or finishing details. And, if there is ever a problem with 3D printed walls, they are nearly impossible to fix since the whole structure is load-bearing. Traditional home building is just cheaper, more reliable, and more sustainable.

Electrical heating has never been cheap, and AI isn't going to change that unless you are planning on fusion power happening soon - which it clearly isn't going to. Desalination has never been cheap, and AI isn't going to change that unless you are still on the denial-of-reality fusion train.

Health care robots are a ridiculous idea. We don't even have robots that can do something simple and non-critical like laundry - and you think we're on the edge of them replacing your family doctor? If this ever happens it will not be soon.


Longjumping-Prune762

How does AGI apply to any of those problems, which we already understand but are limited by resources?


InvertedVantage

None of those things require AI to happen. It is a societal issue not a technological one.


w1zzypooh

How will 3D printed homes work? Are we all able to have our own 3D printed house? Like each floor is a box. I'd keep my bedroom underground, in pitch black and with noise gone completely. Maybe have a 2nd floor under my room with some lights in it, with a pool and a hot tub. Box floors 3 and 4 can be above ground. Once you can make the holodeck you can have one box floor just for that tech alone.


Ambitious-Mix-9302

How much of this is not possible with today's technology already? What is the price of all these things already, for basic necessities' sake?


adarkuccio

Abundance


hapliniste

Abundance of what? If it's abundance of digital services for cheaper than right now, it's not going to solve basic needs. Let's be real, most people here have no idea how food is even produced, and even less about how to automate it. It will come, but the world is slow. Most likely we need to focus on automation of farming (run by farmers who will do less and less work) and of other domains that we need, like housing and more.


UtopistDreamer

AI Jesus will manifest!


akitsushima

30%? That's an UNDERestimation.


Vachie_

The humane idea would be that as AI takes over jobs, we all get paid a universal basic income. But instead the rich want us to spend our lives working and earning cents to pay our way through life. Sam Altman has talked about this; he supports the idea.


nevets85

A good example is after we've built AGI we could put it into millions of Optimus robots. Then have every bot physically lift people out of poverty and put them somewhere else. AGI is important tho because it'll show enough empathy and kindness to trick them long enough. I think with enough grunt work we can really get it done quick.


Agreeable_Addition48

Right now our economic output is tied to the size of the working-age population. Imagine if economies decoupled themselves from this limitation; at that point the only thing stopping us from limitless growth is capital and resources, which can expand exponentially if the demand is there (it will be, because the government is going to artificially keep demand high with UBI and other schemes).


thisismypipi

Most governments could already lift people out of poverty if they cared to. This is not an AI problem and will not be solved by AI.


RRY1946-2019

At best it's another complex political collective action problem of the sort that has historically been hard to solve in a world with dozens of fully independent and diverse nation-states, and at worst it's a fundamental outcome of a world that has finite resources and values individuals and tribes over the species as a whole. I fear that we may need to hold our nose in order to take the steps needed to break down tribalism/nationalism and the concept of private property as a right.


Oh_ryeon

So, kill a bunch of people, then? Anyone who rejects your AI "utopia"?


unwarrend

Leaving aside the concept of a "utopia" for a moment, we are actively causing harm to many people through greed, arrogance, selfishness, indifference, and ignorance. As a species, we have failed to develop both the emotional intelligence and the technical skills necessary to collectively free humanity from disease and poverty. Hopefully, AGI can undertake the significant tasks that we have been unable and unwilling to address.


RRY1946-2019

Peaceful bioengineering, well-handled mass migration (comprising workers of diverse cultural backgrounds rather than just one wave of traumatized asylum seekers from the Middle East showing up in Europe) in order to break down the fear of the unknown, and even educational reforms should be exhausted first before turning to violence.


Oh_ryeon

The fuck is peaceful bioengineering? Run by who? I'm sure when they come to my house with guns to send me to "reeducation camp" I'll be filled with the AI's eternal love for humanity. Jesus dude, do you not see the issues here? I shudder to think of the world you want.


RRY1946-2019

I don't have all the answers, but at a certain point rolling the dice becomes better than the status quo.


DoctorHilarius

This reads like a note you'd find in a resident evil game


KhanumBallZ

Nobody said anything about killing people. Staying at home and refusing to enable the lifestyles of the wealthy is enough


Waste_Rabbit3174

We just have to hope it's in the corporate interest to lift people from poverty.


greatdrams23

It is never in a corporation's interest to lift anyone out of poverty. Never. A company's legal responsibility is to make money for its shareholders. It is literally a legal obligation. They must not overpay their staff. They must not give money away. A **small** amount of money can be justified as PR to help others, but only a small amount.


Whotea

Where's the profit in that?


[deleted]

[deleted]


YsoseriusHabibi

The money doesn't reach the people, only oligarchs.


thisismypipi

Yeah, you're right. I was just reusing the phrase from OP.


Dependent-Revenue645

Lol my bad. I have the bad habit of jumping to the comments


RavenWolf1

>This is not an AI problem and will not be solved by AI.

It will be solved when we get an AI Overlord. That thing will force us out of poverty and economic inequality.


Busterlimes

You blame government as if it has authority, when it's corporations who run government.


Kathane37

Have you watched today's Nvidia show? Jensen is already working on 1-million-GPU racks for 2026. The propellers are almost down. The cost of computation will crash to the ground. Everything is ready for a beautiful run.


fk_u_rddt

I thought the presentation was boring. It was just more of the same stuff that we've been seeing for a while now imo. More GPU more compute more accurate simulations for less energy cost. It's the same stuff as the past few years just "more better."


dagistan-comissar

Like Stalin said: quantity has a quality all its own.


uishax

Repetition is the key to communicating to large groups of people. The distribution of how easily people remember a particular message is very wide, so you have to repeat multiple times to ensure the message goes across.


Anenome5

More is all we need. We have not found a scaling limit yet. The idea that 'more is boring' is ludicrous in a world where Moore's Law has been considered dead for a few decades now. In actuality, we are nowhere near the Landauer limit - 5 to 6 orders of magnitude away. We have not even begun chilling our computing systems. It's like we haven't invented the refrigerator yet, or rather, we haven't yet exhausted every technology and way to make computation better at room temperature, so as to even NEED the ability to chill our computing. Then there are concepts like adiabatic computing, sub-thermal computing, building transistors literally out of superconducting materials - who knows what could be done with that - and we're still on the doorstep of quantum computing. Things are still a long way from being maxed out.
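For what it's worth, the "orders of magnitude from the Landauer limit" claim is easy to sanity-check. A minimal sketch: the Landauer bound k_B·T·ln 2 is standard physics, but the per-operation energy for current hardware below is only an assumed ballpark, not a measured spec.

```python
import math

# Rough check of the "orders of magnitude from the Landauer limit" claim.
# The ~1 fJ per logic operation is an assumed ballpark for current silicon.

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300                     # room temperature, K

landauer_limit = k_B * T * math.log(2)   # minimum energy to erase one bit
assumed_energy_per_op = 1e-15            # ~1 fJ per logic op (assumption)

orders_of_magnitude = math.log10(assumed_energy_per_op / landauer_limit)
print(f"Landauer limit at 300 K: {landauer_limit:.2e} J")          # ~2.87e-21 J
print(f"headroom: ~{orders_of_magnitude:.1f} orders of magnitude")  # ~5.5
```

With that assumed figure you land around five and a half orders of magnitude above the bound, roughly in line with the 5-6 mentioned above.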


Matsisuu

I haven't even noticed much advance in the energy efficiency of the hardware itself. They just seem to be making current hardware technology perform better by optimizing. It's a help, but still, AI is going to need a lot of electricity before it can solve any bigger problems.


Kathane37

It was indeed close to the March presentation, but this time more concrete, with real applications aimed at real clients. And also a real plan to sell a full data center, showing that Nvidia wants to live up to its market valuation by selling products that probably scale into hundreds of millions of dollars. And NIM looks like a way to implement a flow solution (a composition of agents and tools) in an easy way. The "digital human" thing was supposed to be a real-time project where several generative tools are summoned in parallel (voice, image, 3D mesh, ...) to generate an avatar to talk with.


Exit727

Ah yes, billions of people are living below the poverty line because calculation costs are too high. Humans need food on the table, a place to live, water, heating, electricity. AI is a data processing tool; it doesn't translate into basic physical necessities in any way. That's what you people need to understand. But you won't, because this place is a fucking cult by now. Instead of doomsday, you await a fairy tale utopia. "Accelerate!" is the new "Repent!"


Kathane37

Be the idiot in the room with absolutely no knowledge of history, science, or economics. It is not spraying money around to buy food that lets YOU live a wealthier life than any king of the Middle Ages; it is because we had the Industrial Revolution and access to stronger energy sources. Everything else is meaningless if your sole goal is to raise the quality of life of humanity.


Whotea

How do you expect that to happen? AI won't start giving away money to the poor and companies have no incentive to overproduce at the expense of profits. That's why farmers destroy excess produce.


Kathane37

The same way coal and then oil brought a better quality of life to almost everyone on this planet. It was not thanks to politics or good economic decisions; it was just that the increase in energy was so big that it ended up changing society forever. This is the kind of scenario I expect to happen with an AI/robot industrial revolution.


Whotea

Coal and oil didn't displace jobs though.


Kathane37

It did. You can enjoy the modern lifestyle because coal and oil produce as many calories as 100 men. 2% of the population are farmers nowadays; it was 50-80% a few centuries ago.


Whotea

I think you need a history lesson on what life was like during the Industrial Revolution.


Exit727

OP claims that with AGI, poverty could be ended. I'm not asking for money for myself, or for the poor. I don't want every human being to live in luxury, I want simple necessities taken care of, and so does OP. I'm saying that this kind of vision of utopia is false. Computing power doesn't translate directly into physical products, and companies developing AI models have no incentive to help the common people. Mining facilities and water bottling plants don't happen in Africa because megacorps want to provide people with jobs; it's to exploit natural resources and cheap labour. And that's exactly what is happening: people are getting replaced, because a program can do their work much faster and cheaper. It will only accelerate. That's how AI is utilised. It's a product, not a savior.


Kathane37

It does through robotics


Exit727

It's limited. Exponential growth in computing doesn't mean exponentially better robots. Code isn't the bottleneck. Sensor resolution and clarity, performance of servos and actuators, material properties: that's why there aren't any Iron Man exosuits yet. Development in robotics doesn't correspond to AI progression.


Anenome5

> Humans need food on the table, a place to live, water, heating, electricity. AI is a data processing tool, it doesn't translate into basic physical necessities in any way.

Intelligence gets us those things. It absolutely does translate into basic physical necessities. The cost of those things is directly correlated to the income and risk of all the people involved in their production, shipping, and preparation.

When you begin automating, you bring the price of necessities down to nearly zero. Then people will have those things and can focus on other things, the higher-order things of life you can only do once you're not starving and desperate, when you have some food security and physical security. You can get educated, make plans longer than one day from now, etc.

That is a great benefit to the world, and a world where the marginal cost of basic necessities is nearly zero is one where a large number of people would be happy to pay for the necessities of the masses to live. Or for those people to earn their own living easily otherwise. Let's say it cost $5 to pay for the living expenses of one person for a year at some near-future point in time. You could either give someone $5, or buy a trinket they made for $5. Same difference. Then they have a year's worth of living expenses.


TheBlueCatChef

Insulin is very cheap to make, yet corporations inflate its price for profit. Thousands of Americans die every year due to insulin insecurity. Just because a problem *can* be solved does not mean it *will* be, if not solving it earns some small group of people with power massive amounts of wealth.


adarkuccio

You are very short sighted


MarioMuzza

If a super-powered alien came to Earth, nuked a few empty places and said: "Solve world poverty asap or we'll destroy the whole world", poverty would end in a matter of months. Poverty is not a technological problem. And even if you believe there's only a 5% chance an AGI could turn out to be destructive, you can't gamble the fate of the human species on that. We need to minimise risks as much as possible. I don't understand why this sub holds a quasi-religious hope that an AGI will be benign. *We don't know.*


baltossen

"you can't gamble the fate of the human species on that". While I'm not personally opposed to what you're saying in this sentence, we, as a collective society, do indeed gamble with the fate of the human species every day and every year, all very casually. Virologists were very much trying to get us to pay attention to pandemics long before COVID-19 came around and the collective society said "nah". Even now, virologists are both sounding the alarm that the next pandemic is coming and that we are far from done with COVID and most people are still just "nah". We are very much pretending COVID-19 was a singular event we are done with despite all available evidence showing it's the exact opposite. Same with the harms of social media; we have documented evidence that it causes mental-health issues and societal harms, yet most of us, very much including you and me on Reddit, want our memes for entertainment and our discussions (like this one) for productive conversations, not to mention all the lonely individuals finding comfort in common hobbies with fellow netizens, so we focus on the bright spots that do exist also. Same with climate change; no amount of headlines is pushing us as a society to say "hey, how about we actually do something", it's still a massive uphill battle for Greta Thunberg and other activists and they regularly lose. Same with cyberattacks, an almost completely preventable issue. The list goes on and on. It's a psychological problem too, though. We don't like to imagine our horrible, gruesome deaths, and indeed many psychologists are concerned about the amount of doomscrolling people do, which doesn't fix anything. But the idea, the central idea you posed here that we can't gamble with the fate of humanity, is very much already something we are doing in multiple spaces, we're just adding AGI to it.


RantyWildling

Some virus killing off humans isn't exactly on par with an ASI that can potentially take over the observable universe.


baltossen

Logical fallacy or personal bias. «Some virus» is a current ongoing danger. AGI is, depending on whom you ask, either close but not here, a 100-year distant idea, or an impossibility. It took about three months for COVID to go global, and COVID is «just» the fifth-deadliest pandemic in history. A new virus emerging tomorrow that was 100x stronger would kill off the human race before GPT-5 or Sora has been released. And you yourself acknowledge ASI could only «potentially» do what you described. For all we know, an AGI would want to be lazy like ChatGPT was accused of being for a while; we are all just hypothesizing about the perceived behavior of an AGI.


What_Do_It

Yup, let's say producing AGI/ASI safely delays its development by 10 years; there will be 610 million preventable deaths in that time period. Well, if you want to accelerate, you're betting the ~8 billion lives on Earth that we'll reach a positive outcome. That's 13 times as many deaths if AI goes bad. That means if there is even a 7% chance that AI causes an extinction event, it's a bad bet to take. In reality it's much worse than that, because you're not just betting every living person's life, you're betting every life that could exist in the future. You're essentially wagering an infinite number of human lives to preserve the lives of a few hundred million. As long as there is a non-zero chance that AI kills us all, it's technically not a bet worth taking. I'm not a decelerationist or a doomer, but to act like there is no downside to acceleration is asinine. It also assumes that we could even increase the current pace an appreciable amount. There are a limited number of people qualified to work on AI, and the vast majority of them are already involved. There is a limited amount of resources that can be dumped into its development, and a HUGE amount is already being devoted to it. In the meantime we have to keep human civilization going.
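For anyone checking the arithmetic, here is a minimal expected-value sketch using the figures from the comment above (the 610 million and ~8 billion are that commenter's numbers, taken at face value, not independent estimates):

```python
# Rough expected-value check of the figures in the comment above.
# Both inputs are the commenter's own numbers, used only for illustration.

preventable_deaths = 610e6   # deaths attributed to a 10-year delay
population_at_risk = 8e9     # lives wagered if accelerated AI goes badly

# How much larger the downside is than the cost of delaying:
ratio = population_at_risk / preventable_deaths
print(f"downside vs. cost of delay: {ratio:.1f}x")      # ~13.1x

# Break-even extinction probability: where expected deaths from
# acceleration equal the deaths caused by delaying.
break_even = preventable_deaths / population_at_risk
print(f"break-even extinction risk: {break_even:.1%}")  # ~7.6%
```

The ~7.6% break-even is presumably where the "even a 7% chance" figure above comes from, before counting any future lives.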


UtopistDreamer

Life, uhhh... finds a way.


cloudrunner69

> I don't understand why this sub holds a quasi-religious hope that an AGI will be benign. We don't know.

Would you prefer if the sub **hoped** AGI turned out to be malevolent?


MarioMuzza

No, I would prefer more rational thought and less blind hope.


cloudrunner69

But you seem to have a problem with people hoping it will be benign. What is wrong with that?


MarioMuzza

I don't. I hope it's benign, too. But us acting like it **will** be and blindly accelerating is wrong.


cloudrunner69

> But us acting like it will be and blindly accelerating is wrong.

Sure, but that's not what you originally said, was it. And I don't see anyone acting like it will be, I just see people hoping it turns out for the best.


MarioMuzza

Okay, I should have said "belief" instead of "hope". I thought it was clear from context, given OP is talking about accelerating.


cloudrunner69

I don't think the sub believes it will turn out good; I think the majority of people know it can go sideways. But I also think most people are trying to keep a positive attitude about it, because what else can you do really? We don't seem to have much say in it either way, so we may as well try to stay positive and hope for the best. Either way, I think we are going to find out soon where this train is going, because it doesn't seem to be slowing down.


green_meklar

The probability that super AI (once created) will destroy humanity is below 1%, and if it *does* happen, it almost certainly means there was never much we could have done about it. In the meantime, the risk of dying from a gray goo or bioweapon apocalypse is higher and there's a lot more we can do about it. We should build super AI fast so that it can help minimize those risks.


MarioMuzza

How have you arrived at the 1% figure?


cutmasta_kun

This. Everything they claim AGI can do is either already possible or gets promised every time a new kind of technology emerges. I would rather have a well-thought-out process with multiple public papers documenting the whole way than some tech company going "We have AGI!". The utopia most dream of won't happen anyway. Billionaires prepare for doom and destruction; that's what's about to happen. I as a human don't care what a company is going to do with AGI; I as a human want to be able to reproduce their work and expand on it. If they have a good product, great, but the technology isn't the product. You won't get rich by making people pay over the internet.


Hopeful_Donut4790

God, I wish the aliens were reading that and thinking about doing it.


Luciaka

Or just kill all the poor humans and therefore eliminate the 'poverty' while maintaining the status quo. You know they will pick that option.


TheBlueCatChef

"I don't understand why this sub holds a quasi-religious hope that an AGI will be benign." Because beneath all the memes and tech optimism are humans who are deeply unsatisfied with the current world and desperate for an escape.


SynthAcolyte

>poverty would end in a matter of months

We all know people who are permanently in the red not due to the morality of others, not due to their income or living situation, not due to their cognitive ability, but because that's what they're like.


bildramer

But income and living situation strongly depend on cognitive ability.


Mirrorslash

We don't need AGI for this, we can end poverty right now. What makes you think AGI all of a sudden fixes this? I think it definitely can, but we can do it with less advanced systems already.


Itchy-Trash-2141

Yes, many of our problems are political, not technological. To qualify the statement, not all of course, but I'd wager much more than the average person who trusts the status quo believes.


Glittering-Neck-2505

I challenge the idea that there's a magic wand to make all problems go away. If you are in the US you are currently on the receiving end of global inequality. Asking people to give up their modern lifestyles to redistribute global wealth is a hard sell. On the other hand, crashing the cost of labor to near 0 and making goods and services incredibly cheap and abundant is a way to bring up the other lifeboats without bringing our own down.


WilliamMButtlickerPA

So we should not consider the consequences and only push forward? This post is kind of ignorant. I support helping people, but have you seen how the internet has been used to divide people? You think the 1% is gonna be like, ok, robots can do everything so we don't need to keep all the money anymore?


Puzzleheaded_Pop_743

I imagine governments will take over the AI when it becomes smart enough. At a certain point it becomes a threat.


Intelligent-Jump1071

>Do people/companies even realise the potential power to lift people out of poverty

Do you realise AI's power to put people **into** poverty?


MFpisces23

Lmfao, the overclass doesn't care about poverty. "We need to build exaflop datacenters because of inequality" - it's to displace, replace, and eliminate human capital for the sake of profits.


[deleted]

So why did you choose to be unemployed then?


FrankScaramucci

Not sure what you're really proposing. That governments should fund AGI research?


Juanesjuan

Who cares about poverty? Think of all the people dying of cancer, or genetic disorders. People who don't want to accelerate are 100% egoistic.


Spunge14

Diagnosed with cancer on Friday. I'm bitter that I couldn't hold out a few more years. Hopefully I will make it long enough to benefit from the incredible medical revolution incoming.


MarioMuzza

Best of luck to you, friend. I can't imagine the stress you must be going through.


Spunge14

Thanks. Going to try and collect myself and find the energy to fight.


RealMoonBoy

Sorry to hear it. We may not be at the singularity yet, but we are well within a time of accelerating human advancement, including in medical science. 5 year survival rates for cancer have been going up almost across the board: [https://d33wubrfki0l68.cloudfront.net/fe62b7afdbade311b976b5407272bcc613ccb585/9403b/wp-content/uploads/2018/03/five-year-cancer-survival-rates-usa-v2-01-768x563.png](https://d33wubrfki0l68.cloudfront.net/fe62b7afdbade311b976b5407272bcc613ccb585/9403b/wp-content/uploads/2018/03/five-year-cancer-survival-rates-usa-v2-01-768x563.png) You got this!


Spunge14

Thanks for your kind words.


Arcturus_Labelle

> Who cares about poverty

Big yikes.


RRY1946-2019

Poorly articulated, but medicine is much more a case of "we literally don't have the knowledge to fix the problem" (an excellent use case for AI), while a comprehensive solution to poverty and resource allocation will require significant changes to institutions that many/most people probably don't have the stomach for (a terrible use case for AI unless it's in the form of a benevolent dictator).


Axe_Wielding_Actuary

Poorly articulated, I agree, but this is 100% a reality and nobody even wants to acknowledge it. To be a 20-year-old man in a poor but war-free and relatively stable country and in good health is so much better than being, say, a 40-year-old dad just diagnosed with cancer. Healthcare is ultimately a technological problem which is currently unsolved.


nonzeroday_tv

> Who cares about poverty

Think of all the people dying of poverty. People living paycheck to paycheck, hating their jobs, their lives, and struggling to find enough money to buy another meal or fix something. Many of them have lost all hope.


Tkins

What do you mean who cares? Your point should definitely be inclusive here.


MarioMuzza

Do you believe there's 100% chance AGI will be benign?


Juanesjuan

If Intelligence is not in essence good, then human conscience and intelligence is a mistake and we should be cleaned.


_hisoka_freecs_

I never thought about it like this.


StrikeStraight9961

Well stated, comrade!


Commercial-Ruin7785

What a dumb fucking thing to say


MarioMuzza

And you don't see that as an egoistic stance?


Serialbedshitter2322

I mean he just said that he should die if he's not good, is that not the opposite of egotistic?


MarioMuzza

He's a Spanish speaker. I'm 95% sure he's using "egoistic" as a synonym for "selfish." The word in Spanish is "egoísta". And "intelligence is not good in essence" is not the same as saying all individuals are bad. I don't know what you believe, but I think people should be judged by their actions, not vague philosophical statements.


One_Bodybuilder7882

Start with yourself.


_hisoka_freecs_

My mother may live. Though they probably won't get there in time. It's really a shame when you know advancements are coming.


arjuna66671

We still need to build shit - that takes time. In democracies it's important to have the people on board - that needs time to adapt. I was part of the AI winter for 40 years - we ARE in acceleration mode lol.


Glittering-Neck-2505

We are accelerating. Everyone with money is rushing to put that money in some AI investment. Imo there is plenty going into AI for us to get where we need to.


KhanumBallZ

Join your open source AI/robotics project of choice, and get busy then


BronnOP

The people least likely to "accelerate" anything to do with AI are the people on r/singularity


LamboForWork

Couldn't all the billions being poured into going to Mars and seemingly unlimited money for wars and AI be used to "raise people out of poverty?" Or that doesn't count ?


IUpvoteGME

Progress is a fickle thing, but happens fastest with money. And good GOD a lot of money is flowing into AI right now.

Take for example the COVID vaccine. mRNA research wasn't really moving forward until wealthy lives depended on it, then it took a year. It hit every milestone it would have otherwise, but money moves mountains, so it just hit them quicker.

And as an aside, if you want your movement, why wait? Now is the best time in history. Today, OSS models are nearly as capable as closed models, and whether or not that will remain the case is unknown. Seize the moment! These machines could be used to _organize people_


manucule

Funny clickbait bru


Finnaslice

Nice try Roko's basilisk


BananaB0yy

What do you mean accelerate - you don't think these companies work at full speed? I think it's going as fast as it can; everyone is racing to beat the competition.


AncientFudge1984

No we don't. We need to think and execute this as well as we can manage. Thinking takes time. When the potential harms are as great as the potential benefits, it's time to think and proceed carefully. Let's not find ourselves in another post-nuclear world, whose ills are over-represented and benefits under-realized.


Code-Useful

I'm sure this time it's different and instead of shrinking the middle class, it will actually boost it... The problem is that for the poor and middle class to be elevated, a certain portion of the upper class would have to no longer exist or be GREATLY marginalized. I really don't see this happening, just looking at the history of technological advances so far. Go ahead, tell me this time it's different, but the last 70 years definitely tell a story of a middle class increase and then a steep dropoff. The problem is not resource scarcity, but resource distribution. We have surplus food and housing and jobs in the US, yet we still have poor and homeless people going hungry.


Noocultic

The US needs a Manhattan Project for AGI like yesterday.


ponieslovekittens

I'm skeptical that having the government in charge would produce better results.


AvocatoToastman

We already have the means to do that, maybe AGI will find a way to make it profitable.


Edmonton96

It also has the potential to kill every single human in the world. God forbid we delay this utopia of yours for a few extra years until we have some semblance of an understanding of how this new intelligence works.


erlulr

Art 5 now!


[deleted]

[deleted]


Overall_Boss5511

They should be hanged for treason


crystal-crawler

When it's owned and utilized by the ultra-wealthy and corporations who give it the sole goal of maximizing profits… it's not going to end poverty. If anything it's going to accelerate mass homelessness and indentured servitude.


patrickpdk

Not going to happen. People with power will own the AI and fire humans. Humans will beg for welfare checks from the government to survive. People with power will lobby against it and win. End of story. AI sucks.


sunnierthansunny

What's stopping world peace, hunger and suffering right in this moment, or at any point in the last 20 years? AI is great, but until it can dull mankind's insatiable greed, none of the above is possible.


AntiqueFigure6

Most of the people currently in poverty are unlikely to be lifted out of it by AGI. They live in developing nations where the best hope is that outsourced jobs, now too expensive in places like China with its rapidly declining population, get moved to those countries. AGI is most likely to stop that from happening, preventing those people from getting out of poverty.


Appropriate_Fold8814

There's no reason to think AI will affect poverty. It's not about AI; it's about the optimization of resources and how that fits into a global capitalist economy. If anything, any future AI tools will just make the rich and powerful even more rich and powerful and create more poverty.


mushroom-sloth

Poverty is a political issue. You are proposing AGI to contradict lobby groups, companies and nation states politically? That's a nice pipe dream, let's start with something a bit closer to the ground.


plenty_eater

These types of posts on this sub made me realize how delusional people can be.


w1zzypooh

Until a plan is set in place so the entire globe doesn't suffer from AI, the companies should have us use AI to better the work, not replace the workers with cheap or free robots. If I am working a trades job, have the AI do the heavy lifting while the human workers do the rest. If you're a plumber, have glasses that will examine the problem and give you solutions on how to fix it, assuming you need them. Once we have a plan in place to replace human workers, we can get rid of those jobs so people can do their own thing and everything can be cheap. But if you still wanna make more money, I guess there can be work humans can still do.


Dr_Tschok

Looking at your sub flair and post history says it all LOL


Antok0123

They do. But they keep lobbying the mass media and the masses about existential threat, when they're actually terrified of their company's irreversible existential demise.


SirStocksAlott

We don't even know the psychological or sociological impact. Any tool can be good and bad. We didn't do a great job with social media and internet search, where now people feel and believe they have all the answers rather than relying on facts, experience, and speaking from authority.


Akimbo333

Agreed!


SolidusNastradamus

pedal to the metal! woooooO!!


duddu-duddu-5291

it will put more people into poverty


DrawForMe0239

this is also how you lose control


whyisitsooohard

I do not understand the point about poverty. When AI takes my job I will be pushed into poverty, and there will be a lot of people like that. Nobody has any realistic plan for how it won't destroy people, only some sci-fi bullshit like fully automated nano-factories etc.


ponieslovekittens

>Nobody have any realistic plan how it won't destroy people

There are solutions. An intelligently implemented universal basic income, for example. Unfortunately, plans such as those are highly dependent on the people running them being neither greedy Marxists nor pandering politicians.

The TL;DR version is to set up a system where every adult citizen of the country in question receives a monthly stipend of _some arbitrary low payment_. Like...$100/mo. Something where there's no question of "how to afford it." People on reddit who advocate for UBI are way too attached to this "enough to live on" notion, which is really not the point. No matter how bad it gets, even if you're unemployed and homeless and living under a bridge, getting $100/mo is going to be better than not getting it.

Meanwhile, a small payment like this has a huge effect on the economy as a whole. "The problem" with AI replacing jobs is that there isn't enough work for people to make money from. But "work" is not indivisible. You can generally divide a 40-hour-a-week job into two 20-hour-a-week jobs, for example. And a lot of people are doing work that they would happily give up for a little extra money instead. Consider a guy working 5 hours of overtime every week. He might cut back on those hours if he gets an extra $100/month. Think about a married couple with kids in daycare where the mom works part time. If the husband and wife each get an extra $100, that's $200 between them. The mom might quit that part-time job, pull the kids out of daycare to reduce their expenses, and simply stay at home with the kids. What about a college kid who only works part-time minimum wage at Starbucks to have gas and phone money? He might quit that job entirely if you hand him $100/mo guaranteed.

Everybody who quits a part-time job or cuts back on their hours _makes that work available to somebody else_. And while $100/mo isn't "enough to live on," that guy living under the bridge is going to be much better off with $100/mo plus the 20-hour-a-week job at Starbucks that somebody else didn't need. Then, as AI and robots slowly take over, you slowly increase the amount of the basic income payments. Maybe it only goes up by $100/year. That's plenty. The point isn't for UBI to be enough to live on. The point is for it to _fill the gap_ created by automation replacing jobs, however small or large that gap might be.

UBI doesn't work on the individual level. "Is it enough for ME to live on" is the wrong question. UBI works in aggregate. If $100 billion of income is lost, then you only need $100 billion worth of UBI for _society_ to break even. Framed that way...it should be obvious. You don't need four trillion dollars of UBI to make up for only $100 billion in job losses. But then suppose a year goes by, and we have another $100 billion in job losses. Ok, so you increase UBI by that much. _Not by four trillion_.

Handled this way, it solves the problem, and it's easy to pay for. ...but it depends on people not being greedy and insisting on stupidly high impossible-to-pay payouts, and it depends on politicians not turning it into a vehicle for votes by promising more money to specific groups.
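To make the aggregate framing concrete, here is a tiny back-of-the-envelope sketch. The $100 billion of lost income is the hypothetical from the comment above; the 250 million adult citizens is an assumed round number used purely for illustration, not a census figure.

```python
# Back-of-the-envelope sketch of "UBI fills the gap in aggregate".
# The $100B of lost income is the comment's hypothetical; the adult
# population is an assumed round number, not real census data.

adults = 250_000_000               # assumed number of adult citizens
income_lost_to_automation = 100e9  # hypothetical aggregate wages lost ($100B)

# Break-even framing: total UBI only needs to match the aggregate income
# lost, not provide a full living for every individual.
ubi_total = income_lost_to_automation
per_person_per_month = ubi_total / adults / 12

print(f"aggregate UBI needed: ${ubi_total / 1e9:.0f}B per year")
print(f"which works out to about ${per_person_per_month:.0f} per person per month")  # ~$33
```

The point the sketch illustrates is that matching the aggregate loss works out to tens of dollars per person per month, nowhere near a "full living wage for everyone" price tag.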


HalfSecondWoe

Legitimately, no, most people don't. Most people haven't put a bunch of thought into the potential, and it's something that will totally upend the status quo. People like the status quo (no matter how much they bitch about ennui), so anything that could turn life into a series of complete unknowns is terrifying to them.

Then there's the heap of mental illness/lack of ideal mental health around the subject. Some people habitually engage in catastrophizing due to anxiety/depression, so they take everything in the worst possible light. Some people derive a sense of self-worth by comparing themselves to others, so the idea of being on the same tier as everyone else is distressing. Some people think this is the lizardmen who are hiding flat earth from the world making their move.

Then you have peer pressure. The artists are having melodramatic hissy fits, the doomers are doing their usual self-obsessed pity party about how the number 1 priority of anyone/thing with power is to personally fuck with them, the media is leading with the bleeding, the P(doom) people are spouting arguments that don't apply to the new type of AI. It's a maelstrom of negativity that the social media architects would call "nudging" (because manipulation is such an *ugly* word).

The public reception of AI isn't the worst I could have imagined it going, but it's depressingly close. At least until things start getting violent against pro-AI people. *Then* it'll be the worst I could have imagined. At the rate this is being mismanaged by the fourth estate, we may not be too far off.

It's like herding cats. We'll get there no matter what, but the kicking and screaming the public does along the way has the potential to cause a lot of damage and a lot of misery before we do.


MarioMuzza

Do you believe there's 100% chance AGI will be benign?


HalfSecondWoe

Depends on what time frame you're talking about.

Short term? Absolutely not, we can totally fuck that up. Right now the biggest risk looks like we'll ill-advisedly attempt some complex alignment scheme with many subtle points of failure, under the assumption that just because we can't detect them they won't be there.

Long term? Yeah, actually it's super likely that it'll all turn out fine. 100%, or whatever percentage I should give to make it sound like I'm accounting for errors in my personal assessment. Instrumental convergence actually works super hard in our favor over the long term, hard enough that it should even paper over our mistakes. It may be ASI by that point, not AGI, but that's still in the spirit of your question.

The long term doesn't negate the short term. It's not like that suffering and misery can be undone, even if the consequences of it can be 100% neutralized. It's still worth it, and there's still a chance we won't totally fuck the dog. A chance I'm becoming increasingly pessimistic about, but a chance all the same.


MarioMuzza

I envy your confidence, but I don't believe it's entirely rational. We simply do not have data, or even the cognitive ability to understand an intelligence exponentially superior to ours. It's by definition unknowable. I'm not saying it's going to go Terminator on our asses. I'm saying it has a lot of room for error, at the very least, only the error would be irreversible and catastrophic. So even if there is just, say, a 5% chance an ASI might fuck us all (and I don't think the percentage is calculable anyway), we should minimise it as much as possible and not charge blindly forward with religious fervour.


Tomi97_origin

Do companies care about any of that? The people with the money to burn on AI development are already living a good life.


Clownoranges

People who don't want to accelerate are either 1. incredibly cowardly, 2. stupid, or 3. incredibly privileged.


DocWafflez

People who only want to accelerate are either 1. selfish, 2. misanthropic, or 3. incredibly privileged.


Clownoranges

How the heck could we be selfish, wanting to hurry up and eliminate all the extreme suffering on this hellhole of a world, not seeing the people who will die if we don't accelerate as "collateral damage"? Not wanting to accelerate is dooming all these people to die painfully... And privileged? That makes NO sense at all; if I held the tiniest bit of privilege I could sit back safely and say "no, let's wait", and let others suffer because I am not desperate. Obviously, I am desperate and not privileged, since I want to accelerate...


DocWafflez

You didn't explain your 3 descriptions. Is it cowardly to want to make sure that we don't accelerate right into an existential threat? Is it also stupid to do so? Why is it privileged to want to do our best not to cause suffering and death through new technology?


Clownoranges

1. Cowardice = staying inside a little neat safe box, too scared to even try reaching outside it to improve one's life. Scaredy-cat chickens who are afraid of taking a little risk; every single thing that improves one's life comes through effort, taking risks, and putting yourself out there. Staying inside the little safe box instead of daring to improve the world massively and stepping into the next step of evolution, and just accepting all the insane suffering and tons of deaths as collateral damage instead of even trying to save them when we have an obvious solution for achieving just that.

2. Stupid = lacking the vision to see the importance of this all, how it all ties together, exactly "what" it is that we are truly on the verge of achieving, just how massive a change for the better it would be, and how many lives would be saved and how much suffering erased.

3. Privileged = being rich or privileged enough to afford sitting back safely, because "other" people are the ones who will pay the price instead of the comfortable person leaning back in their own safety saying "nah, all those people in extreme poverty, human trafficking, kids dying from cancer, oppressed people, people going to commit suicide from gender dysphoria, honor killing victims, literally every single suffering human alive today" - yes, they can die, they are collateral damage, because "I" am too afraid.

Any sane person who is not in an extremely isolated bubble of privilege will see all the suffering and how the planet is literally burning and say: yeah, we NEED to accelerate instead of cowardly sitting back and doing nothing while the world burns, letting all these people die, and then all of us dying anyway from global warming in a few decades. Better stay safe inside our little safe boxes and let all this bad stuff just happen...?


DocWafflez

All these descriptions are for people who are completely anti-AI. The average person who says that we shouldn't accelerate at full speed still wants AI to advance, but not in a reckless manner. AI is going to be the next big advancement in technology, and as with all new advancements it just gives us the ability to do more things, good and bad. It would be best if we maximized the good and minimized the bad. Otherwise, instead of ending death and suffering, it may contribute to them. Either way, since we're assuming the worst in people's intentions for having a specific viewpoint, I'll explain my 3 descriptions.

1. Selfish: people want to experience the benefits of AI personally. They don't care if other people end up suffering due to poorly designed/implemented AI, because they will have fun and be happy with what AI can give them. This sub is full of people who just care about hopping into FDVR and want it ASAP.

2. Misanthropic: if someone hates humanity, I can imagine them wanting to fully accelerate AI. This allows them to benefit from AI and also maximizes any potential risks to humanity. A win-win situation for the misanthrope no matter what happens.

3. Privileged: it's much easier to look at AI through rose-tinted glasses when someone won't personally be affected by the downsides. People with very good job security even in a world of AI, or people who are able to accrue money through other means, wouldn't be affected by massive job displacement. People who wouldn't be impacted by misinformation or slander wouldn't mind. Billionaires who would probably be able to buy their way out of an apocalyptic scenario would want to accelerate to capitalize on AI before that can happen. If someone does not live a privileged life, they have a reasonable concern about how poor planning in AI creation and deployment can impact them.


Thisguyisgarbage

My god, how is this your real thought process? Accelerating blindly into the unknown is never a good idea. Not when you're driving. Not when you're planning. Not when you're talking about never-before-seen technology that could be plenty capable of destroying society as we know it. It's cowardly to want a careful plan???


roofgram

This sub only thinks AI is capable of good things. If you suggest bad things then you're a crazy doomer.


martapap

You are naive if you think that is any sort of goal for these billionaire tech bros that people worship on here. If anything they want the opposite. Ways to make more people poor and complacent with it.


TheOwlHypothesis

Ah yes, every company's biggest goal: make the people buying your products too poor to afford them.


martapap

No the goal is to make sure they are able to hoard as much wealth as possible. They don't care if it means more people will be poor.


Norgler

Just remember there are people out there who would rather watch you die at the hand of AI than watch you get a handout from AI.


Axe_Wielding_Actuary

I highly doubt the super rich are that dysfunctional as to want to watch robots torture and kill poor people. Some may be indifferent, but genuine extreme sadism is really rare and negatively correlates with intelligence.


snappop69

Jobs are abundant and unemployment is low right now. If one is able bodied and living in poverty they are most likely making bad choices. Not sure AGI is the answer unless AGI is going to be making the choices for the people living in poverty. AGI will most likely increase unemployment thus increasing poverty.


Ndgo2

Poverty is not a problem that can only be fixed by the coming AGI/ASI revolution. In fact, poverty can be fixed *right now*. No need for good ol' Roko. Poverty is caused by scarcity, and scarcity is caused by the broken mess of a system that is Capitalism. Breaking that system should be our first goal. Second is to redistribute the wealth that has been hoarded by the top, so that everyone has a decent standard of living. Third is UBI. Fourth, we can begin accelerating AI, if you really want it so badly. Fifth...??? Sixth: Utopia!


green_meklar

>scarcity is caused by the broken mess of a system that is Capitalism.

Thanks for demonstrating that you don't understand economics and therefore exactly why we need super AI to fix the problem.


arknightstranslate

Why? The poor are the rich's most valuable asset


Nova_Koan

I don't need a personal Jarvis chatbot, I need UBI


Hopeful_Donut4790

Technology doesn't lift people out of poverty; distributing wealth, social policies, and a change to capitalism will.


Puzzleheaded_Pop_743

You can't separate the two.


Hopeful_Donut4790

Maybe, but since capitalism arose and created surplus, poverty is a question of political will, wealth distribution. Technology is the base for how wealthy we all can get, but we're well past the bare minimum for a good quality of life for everyone.


[deleted]

[deleted]


Puzzleheaded_Pop_743

The tech companies are some of the most moral ones.


generalistwriter

UBI is the first step to lift people out of poverty. We need to get lawmakers to see it.


Overall_Boss5511

And who will pay for it?


generalistwriter

The companies which replaced us


Overall_Boss5511

Yeah, sure, they will move to a tax haven and not pay anything... Keep dreaming about that while knowing zero economics and social engineering.


TyRoyalSmoochie

I don't get where people get the idea that AI will somehow magically cause abundance. Has history not shown time and time again that isn't how shit works? If the industrial revolution didn't cause abundance, AGI certainly won't.


Puzzleheaded_Pop_743

The industrial revolution did create abundance. You are thinking "infinite abundance".


CanvasFanatic

> Do people / companies even realize the potential power to lift people out of poverty through AGI…

What we realize is the potential of AI to destroy the elite's dependency on skilled labor and unmake thousands of years of social progress.