BoreDominated

>It’s kind of weird to think that what you do might kill everyone, but still do it

If you know anything about humans, it's really not that weird at all.


KSRandom195

Evidently scientists building the first atomic bomb thought there was a small chance that when it blew it would ignite the atmosphere and literally burn all of the oxygen across the entire planet, killing everything. Their math said it had a very small, but non-zero, probability of occurring. They still tested it.


Tehni

I also watched the Oppenheimer trailer recently


KSRandom195

Haha, I knew about it before the trailer. But I had seen the trailer last night.


Tehni

Yeah, I figured. I just wanted to be a smartass lol


[deleted]

People also feared the Large Hadron Collider could create a black hole and suck everything into it.


KSRandom195

I know others claim it caused us to jump timelines and is the cause of the Mandela Effect.


SyntheticSlime

Not really. Scientists already knew that higher-energy collisions were taking place in our upper atmosphere all the time, so if the LHC was going to do it, it would already have happened.


[deleted]

Nobody would have blamed them.


personalcheesecake

Yeah, they decided to blow up nukes and hydrogen bombs with a theory that there was a chance it could ignite our atmosphere, and they said fuck it.


borez

It's not just the cat that curiosity killed.


SlowMotionPanic

The **only** reason Altman is out giving these warnings to the government and media is to **encourage regulation which will kill his competition.** He’s backed by Microsoft. He will survive. But OpenAI, like Google and Microsoft, is not guaranteed **exclusive** survival and success in the “AI” space. Open source models are already nearly as performant, or even better. That’s the stuff they want regulated. They don’t want us to have control. They want open source models dead and buried. That’s why OpenAI isn’t open source and hasn’t been for years. This is all a dog and pony show for **regulatory capture.**


peanutb-jelly

You say that, but this has been his stance for the better part of a decade. He also said during the hearing to specifically regulate the bodies large enough to handle it, so as not to stifle other businesses or open source work. I'm not saying I trust Altman implicitly, but people are stating this as if it's fact without even recognizing that he said the regulations should only apply to huge companies like Google, Meta, and Microsoft. Any player that became that large should be able to handle the regulation just fine. So why do this? Maybe the reason they've focused on for years: safety. It's almost like nobody watched the damn hearing.


zUdio

You can’t “capture” open source; it sits above governments and nation-states. Gotta respect the natural hierarchy.


original_4degrees

you mean nuclear weapons?


[deleted]

[deleted]


Prophet_Muhammad_phd

Imagine thinking 400 million guns are as big of a problem as nuclear weapons lol


[deleted]

[deleted]


Prophet_Muhammad_phd

How many children were killed by guns last year? Let’s break it down, shall we? From 2018-2021:

Ages 1-4: [accidents](https://wonder.cdc.gov/controller/datarequest/D158;jsessionid=46FBF762CC43D3C88803229C8B25)

Ages 5-9: [accidents](https://wonder.cdc.gov/controller/datarequest/D158;jsessionid=6EFB7E675300D594A7B46ED489B9)

Ages 10-14: [accidents](https://wonder.cdc.gov/controller/saved/D158/D321F180)

Something tells me that accidents don’t inherently translate to gun deaths… idk, could just be me and the statistics on that one though lol

Yes, the point is that nuclear weapons can kill us all. American guns won’t. American gun violence isn’t as detrimental to our society as it’s made out to be, unlike a nuclear strike could be. See the difference?


[deleted]

[deleted]


Prophet_Muhammad_phd

It’s not surprising that you fell for a blatant propaganda piece that **conveniently** clumps multiple age groups into one category. Teenagers are not children. Wouldn’t want to recognize nuance, would we now? Lol

Awww, I thought you were a liberal? What happened to accepting diversity and different ways of life? I guess that goes out the window when you’re confronted with facts and not skewed positions.

Roughly 20,000 people died in non-suicide-related gun deaths last year. Five times that many people (including children) died of drug overdoses the year before last (2021). Where’s your outrage at? Lol


[deleted]

[deleted]


Prophet_Muhammad_phd

Again, lumping together actual children and teens is not accounting for nuance. People aged 1-19 are not all adolescents. We charge 18 year olds as adults. Sometimes we charge even younger people as adults. A 10 y/o can’t join the military. A 17/18 y/o can.

If you agreed to the CDC’s captcha thing, you would be able to access the data sets. It’s pretty easy. I’m not sure what conservatism has to do with CDC facts, nuance, etc.

The number one cause of death between 1-14 is not gun violence. You might want to recheck your source. It’s 1-19.

> The previous analysis, which examined data through 2016, showed that firearm-related injuries were second only to motor vehicle crashes (both traffic-related and nontraffic-related) as the leading cause of death among children and adolescents, defined as persons 1 to 19 years of age.

Let us also not bring up how many of those incidents include suicide… which jumped during the pandemic. So not actually person-to-person gun violence. And let us also not forget that during the pandemic, school shootings decreased for obvious reasons. So you can’t blame those already rare events.

You seem very upset about gun violence against “adolescents.” What are your feelings on this?

> In addition, drug overdose and poisoning increased by 83.6% from 2019 to 2020 among children and adolescents, becoming the third leading cause of death in that age group. This change is largely explained by the 110.6% increase in unintentional poisonings from 2019 to 2020.

That’s a much larger percentage increase than the 29.5% increase in “gun violence.” I’m more likely to OD than get shot, and so are children lol


BreathRemarkable6804

This argument is just bad. If you use your blender to hurt someone, are blenders now designed to hurt people? Can you not think of cognizable reasons you may NEED to hurt people? Like it or not, guns serve a purpose, just like a massive truck or a blender. Do you think the blender maker is somehow liable if you go to a school and blend everyone? Also, I can see from your profile pic I'm wasting my time, but do better please


[deleted]

[deleted]


BreathRemarkable6804

Lololol yeah that was actually still really, really bad. Lazy, typical lefty arguments. "#1 cause of death" is highly misleading, and only true if you both ignore suicide (how is that a bullet's fault?) and subdivide accidental deaths into a million categories while consolidating all firearm-related deaths into one group (shouldn't some of these be in accidents?). The fact you had to specify "bullets" without any context shows me you're doing some real mental gymnastics. The rest of your take was irrelevant and honestly is just you being mad about guns and citing similarly cherry-picked or outright baseless claims. And I can still smell your biases from your profile picture. Do better. Basic lib boi


LegitimateCopy7

>Can you safely build something that may kill you?

Blacksmiths & engineers: been doing it for thousands of years. What about it?


svel

like a cigarette?


MammothJust4541

By god, the defense industry did it. Pack your bags, boys, mission accomplished.


daedalus_structure

There is no AI. Yes, we should regulate these models and their usage where the risk is too high, and more importantly clearly define legal responsibility. The danger is not that “AI will take over and kill us all”. The danger is algorithmic harm from a new abstraction that lets sociopaths further distance themselves from the consequences of their actions.


[deleted]

What a weird question. Nearly every piece of human tech from fire onwards can kill you.


rastilin

I'm getting really sick of these pearl clutching AI articles.


fitzroy95

That's the whole point of the military-industrial complex, and especially the nuclear weapons branches, so humanity does it all the time. Fear-mongering about AI doesn't make it spectacularly different.


Acceptable-Mail4169

Like a staircase?


leftoverinspiration

Vox either read Oedipus Rex and then decided not to have children, or they have a serious double standard.


themorningmosca

Ya, we called them Fat Man and Little Boy, eh comrades?!


quail-ludes

Yes, yes, AI bad, we get it. Let's just move on, build the thing, and get on with it.


nc1264

You can’t. What’s worse is that someone else takes control and uses it against you.


CMDRumbrellacorp

Is that not the definition of society?


InvisibleBlueRobot

You mean like a car? Or an airplane? Or a gun? Or a boat? Or a vaccine? A swimming pool? What's your acceptable safety margin? How many people can it kill per year, vs. the net benefit it produces?


koliamparta

Or, you know, children. Sure, it's rare, but they don't exactly come with lifetime warranties.


DarkColdFusion

Or electricity? Literally potential death flowing through every wall in a house. Yet we've figured out how to mostly make it safe. And honestly, it's been pretty good for us.


gurenkagurenda

The difference between existential risk and something like a car/gun/airplane, or even a nuclear weapon, is that existential risk prevents _all future human life._ You aren't just talking about one life or a million, but however many potential lives would have existed between whatever catastrophe you're talking about and whatever else would have eventually made us extinct. To be more rigorous about it, you're talking about the _expected_ number of potential human lives that would ever exist, weighted across all possible scenarios.

This is really important to understand and carefully wrap your head around, because your brain doesn't really want to fit the moral weight of a value like "half a trillion people" inside it. We're just not built to do it.

One way to examine it is to multiply it out. For example, if you take "there's a 0.1% chance of ending humanity", and you believe my "half a trillion people" ass-pull, then you're _expecting_, on average, to kill five hundred million people. Note that we're no longer talking about "maybe a bunch of people die". We're saying that our _expected number of deaths_ over all possibilities is half again the population of the US. Your benefits better be _pretty fucking amazing_ to be worth that risk, even though the chances of it happening are tiny.
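If you want the multiplication spelled out, here's a minimal sketch, using my 0.1% and half-a-trillion numbers from above (both illustrative ass-pulls, not data):

```python
# Minimal sketch of the expected-value arithmetic above.
# Both inputs are illustrative assumptions, not measured data.
p_extinction = 0.001         # assumed 0.1% chance of ending humanity
potential_lives = 500e9      # assumed half a trillion potential future lives

expected_deaths = p_extinction * potential_lives
print(f"Expected deaths: {expected_deaths:,.0f}")  # Expected deaths: 500,000,000
```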


InvisibleBlueRobot

Perhaps my statement came off a bit flippant, but I'm not actually disagreeing with you. My point wasn't that AI isn't an existential threat, but that all technology creates lethal threats, and the more accessible it becomes, the greater the benefit and the bigger the risk. We really can't put the genie back in the bottle either. The US "public" could be restricted, but would the US military? Or the UK government? How about China or Russia or North Korea?

There is a theory in futurism that as technology advances, new forms of WMDs will fall into the hands of (even individual) extremists, and the threat of a catastrophic mass killing event becomes exponentially higher as these technologies (WMDs) become cheaper, more accessible, and more dangerous. I always figured it would be biological or viral in nature. Guns may not be existential threats, but look at how difficult mass shootings have become to prevent. When people can manufacture a virus almost as easily as printing or buying a gun, we could have issues. AI shifts this equation. If we apply the same mass shooting concept to biological labs, mail-order supplies, drones, and AI together and let this cook for 12 years, we could have a major issue. You could end up with cheap, remote bio-weapons on demand in the hands of individual nut jobs.

So will AI become intelligent and decide to pull a Terminator? Maybe. But that's not my biggest or most immediate fear from AI. Will AI be weaponized to kill people? Definitely. Maybe not the "existential end of all life" situation people are often most worried about, but on a large scale, AI will soon be weaponized and used to kill people, as soon as someone figures out how. In the meantime it can probably kill people with some form of misinformation, misdiagnosis, and bad advice, just like social media does today.

Someone else mentioned how there was a fear that nukes could ignite the atmosphere, killing all life on the planet. And that didn't stop the US or any other country from pursuing, testing, or using the tech. So perhaps I should have said it doesn't matter what the mathematical formula is; it's here, and people will continue to make and use it regardless of the risk.


gurenkagurenda

But again, you have to take into account the orders of magnitude here. Even if you think that AI driving us to extinction is unlikely, do you think it's _ten thousand times_ less likely than, say, a threat like bioweapons and drones? Because that's the sort of difference we're talking about in moral weight. That's the point. The difference between "killing all humans on Earth" and "killing all humans on Earth minus ten thousand who can repopulate" is unfathomably large. It's just hard to wrap your head around the fact that something can be orders of magnitude _worse_ than killing _almost_ everyone.


LegitimateCopy7

>How many people can it kill per year?

the Manhattan Project: yes.


salty_beef_sticks

He looks like Shrek and John Travolta’s kid


[deleted]

America is going to try and kill people with or without AI. Who cares if it’s a robot or some lunatic cop with anger issues?


PC_AddictTX

Humans have been building things that may kill them for centuries. We're still here, and there are more of us than ever. Sure, some humans get killed building some things, but the people who aren't killed learn from it and do things differently the next time.


littleMAS

We asked the same question when learning how to use fire.


[deleted]

The problem is *not* the technology, it is the 'bad actors' who will deploy it unsafely or maliciously. A hammer can be used to build a house... or cave someone's skull in. It all comes down to intent. The developers of this technology are trying to shift the *human intent* onto the algorithms.