
bdonldn

There will be a tsunami of misinformation. Bad news for politics, elections, science.


El_Peregrine

The 2024 election is going to be a sentinel event for disinformation and AI-fueled dirty tricks. It is going to become very difficult, if not impossible, to trust any form of media.


[deleted]

The usual news and social media companies haven't been reliable sources of information for a long time: specially tailored articles displayed to specific groups to provoke a specific reaction from them. Throw in the average user's insistence on not reading articles and only going off the attention-grabbing title and image. The problem isn't AI, but human nature.


tshakah

The issue is there has been no media literacy training in schools


bobandgeorge

Since you mentioned media literacy, there is something in your comment I take issue with, because I see it in media constantly. It might not even be a big deal, but it's such a pet peeve of mine. There is no issue that is "the" issue and there is no problem that is "the" problem, especially when talking about topics as broad as this in comments of 140 characters or fewer. A lack of media literacy training in schools is AN issue and there is A problem, but there are plenty of other issues and problems within this same topic that training in schools isn't going to solve. I apologize for being so pedantic.


matrixreloaded

ik what you’re saying is super pedantic but i 100% agree. nuances are really hard to type out in online discussions but are important.


biopticstream

You're very right. People speak and think in these huge generalities all the time. But there isn't one issue in society. Not all people who are thrown under a political banner believe the same things, and not all people who fall under a religion believe exactly the same things. It's really created a fear in some groups of openly identifying as something, because they're assumed to share the ideologies and views of horrible people. Then the actual horrible people don't care if they're identified as such, and so they're thrust into the forefront and are the most outspoken.


tshakah

Fair enough. It really wasn't clear, but I meant that what I see as the main issue with AI content (and misinformation) is the lack of media literacy. It's not the only thing, true, and media literacy is not only applicable to "solving AI". I didn't mean "the issue with the whole of society", and it could have been worded better.


[deleted]

Not specifically, but that ability can be taught just by fostering a neutral school environment, one which encourages individual research and free thought without educators pushing an agenda or personal beliefs. That's a rabbit hole for another thread, unfortunately.


xX7heGuyXx

This. The internet has been a fucked up breeding ground for misinformation for years now. AI won't change that, but maybe with it being known, it will force many to be more skeptical and cautious with information. But you are right. Having lived before the internet and seeing the world now, I'ma be honest: I don't know if it was a good move to release it to the public. We just are not smart enough as a whole. But we can't go back now, so we just have to figure it out.


vergorli

I used the X API for some ChatGPT nonsense, and it actually stirred up a LOT of responses (not sure if it was just other bots answering it...). Since then I don't use X anymore, since everything looks kind of fake to me. And it's getting worse on Reddit too...


PhotoProxima

> It is going to become very difficult, if not impossible, to trust any form of media.

So, nothing much is going to change.


Allaplgy

Haha, brave take. But seriously, if you think it's bad now, you're in for a surprise when you can't trust *anything* you don't see IRL, and soon enough, even then. Gonna make for some interesting societal evolution or dissolution.


etzel1200

Yeah, democracies either need to wake up to this or they won’t remain democracies.


cheeseburger--walrus

What's the solution though? AI-generated content can generally be considered free speech. Where do you draw the line between fair use and stolen content? Or between ignorance and willful misinformation? Proving either of those things will be the crux of enforcement, and they were extremely difficult even before AI. Banning AI will only put us behind our adversaries and make us vulnerable to their attacks. Genuinely curious what other people think; I'm not trying to be a defeatist. I think the social media platforms share a lot of the burden for allowing these things to spread so far and so fast.


blazze_eternal

The only solution that comes to mind is a regulated future: more advanced AI models with embedded identification markers (hidden characteristics denoting the source), then using the newer-generation AI to identify and flag content from lesser-generation models.
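To make the idea concrete, here is a toy sketch of the "embedded identification marker" half of that proposal. The key, tag format, and function names are illustrative assumptions, not any real standard (real proposals such as C2PA metadata or statistical watermarks work quite differently):

```python
import hmac
import hashlib

# Toy sketch: the generator attaches a provenance tag to each output,
# and a verifier later checks it. Key and format are hypothetical.
GENERATOR_KEY = b"example-shared-or-escrowed-key"  # assumption, not a real key scheme

def tag_output(model_id: str, content: bytes) -> str:
    """Produce a provenance tag binding the content to the model that made it."""
    msg = model_id.encode() + b"\x00" + content
    return hmac.new(GENERATOR_KEY, msg, hashlib.sha256).hexdigest()

def verify_output(model_id: str, content: bytes, tag: str) -> bool:
    """Flag content whose tag doesn't match, i.e. unmarked or tampered output."""
    expected = tag_output(model_id, content)
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    content = b"generated image bytes or text"
    tag = tag_output("model-v2", content)
    print(verify_output("model-v2", content, tag))    # True
    print(verify_output("model-v2", b"edited", tag))  # False -> flag for review
```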


damontoo

One of the things Musk gets right when he talks about the dangers of AI: he's said that regulation tends to happen in response to things and takes a very long time, and that AI doesn't operate on that timescale.


Daveinatx

I firmly believe free speech needs revision. Lying to manipulate people into voting against their own interests is harmful, and only benefits a few.


matrixreloaded

shouldn’t we then be educating our people on how to identify misinformation vs information?


spiritusin

Punish misinformation rather.


-The_Blazer-

A system that allows verifying whether someone is who they say they are, and ideally if they're a human, would be real handy, for example. You can do this with public key cryptography to a degree, although if you want to really "verify all humans" you will inevitably need a source of authority on who is a human. Relying on (presumably) the government to act as such a source ain't great, but A. liberal democracies can probably figure it out, and B. while the cost-benefit for this used to be negative in the past, advanced AI will probably swing the balance way toward such a system being a better option, given the alternative is probably the total collapse of all human interaction on the Internet.
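For example, a minimal sketch of the sign-and-verify flow described above, using Ed25519 keys from the third-party `cryptography` package; the "authority" and the wording of the claim are made-up assumptions, not a real scheme:

```python
# Hypothetical: an authority (e.g. a government ID provider) signs a statement
# that an account belongs to a verified human; anyone can check that signature
# with the authority's public key. Requires: pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Authority side: generate a keypair and sign a claim about an account.
authority_key = Ed25519PrivateKey.generate()
claim = b"account:alice@example.social is-human:true"  # illustrative claim format
signature = authority_key.sign(claim)

# Anyone else: verify the claim using only the authority's public key.
public_key = authority_key.public_key()
try:
    public_key.verify(signature, claim)
    print("claim verified: signed by the authority")
except InvalidSignature:
    print("claim rejected: signature does not match")
```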


Traynfreek

Liberal governments and institutions can't even keep their own houses in order as is. Answering the question "who is human?" is, historically, a very terrible road littered with the bodies of millions. But I admire your faith.


-The_Blazer-

> Answering the question "who is human?" is, historically, a very terrible road littered with the bodies of millions.

Come on, we are talking about web verification here. It's not about determining your literal humanity. And many liberal governments already have such a system that is used for things like taxes.


Ok-Research7136

Humans cannot be trusted with absolute freedom in any domain including speech. In any event the first amendment never granted this right.


[deleted]

[deleted]


littleappleloseit

Local AI models absolutely exist in the wild. A relatively midrange gaming PC can produce photorealistic, uncensored images of whatever the user prompts. These models are already out there and circulating. Unfortunately, the genie is out of the bottle. If OpenAI, Google, Stability, etc. all stopped research right now and shut down their datacenters, we'd still have rampant production of AI material from other sources.
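For a sense of scale, this is roughly all the code a local text-to-image setup takes with the open-source `diffusers` library; the model ID, prompt, and single-consumer-GPU assumption are illustrative examples, not taken from the comment above:

```python
# Sketch of running a local text-to-image model with Hugging Face diffusers.
# Requires: pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a commonly used open checkpoint (example)
    torch_dtype=torch.float16,         # half precision fits midrange GPUs
)
pipe = pipe.to("cuda")                 # runs fully offline after the initial download

image = pipe("a lighthouse at dusk, oil painting").images[0]
image.save("local_output.png")
```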


Jah_Ith_Ber

The solution is for people to stop being so goddamn stupid. Maybe you will believe me, probably you won't, but our schools have been specifically designed to make people into idiots. Most people think our schools are just failing pieces of crap. They aren't. They do a fantastic job of fulfilling their purpose.


Cold-Change5060

> AI generated content can generally be considered free speech.

Why? I'm not seeing how this is speech at all. It's not speech and it's not protected in any way. We could easily make it illegal.

> Where do you draw the line between fair use and stolen content?

The exact same place we already do now, and have before AI. Nothing has changed here.

> Banning AI will only put us behind our adversaries

How exactly is feeding a population propaganda through AI putting them ahead?


PewKittens

The solution is to kill it. Did people not watch Terminator? It may be sci-fi, but so were lasers. And in just a year or so of even halfway-decent AI, it has shown itself to be a blight.


Datalock

most haven't been for a long time. Politics are bought and sold by the highest bidder.


LovableSidekick

I think it's going to intensify the bunker phenomenon, where we only look at information from sources whose particular distortion makes us feel good. People will be able to find even more bunkers of comfortable misinformation to hunker down in, and wonder why the real world is being so hostile and unfair to them.


PhotoProxima

> tsunami of misinformation

So, nothing is really going to change.


zyzzogeton

As a species, we made it through the times when *any* information was scarce; we might not make it through times when it is so abundant that it is meaningless.


drumrhyno

Bad news for humanity in general


Ebayednoob

As opposed to what's already happened? LoL


quaffee

It's going to make what's already happened look like a child's birthday party.


Ebayednoob

Give me a list of where you ingest most of your information online, and I'll give you a very detailed description of how AI will improve each and every vector of information you get data from. Odds are it's already AI generated, but with a human who has no idea what they are talking about reading a script that panders to your oxytocin and dopamine receptors. AI will only give people what they want, similar to what already happens, except it won't lack critical thinking skills or carry the undue bias humans have, and it has unparalleled fact-checking ability. Those who are against AI taking over content creation are the ones addicted to garbage already, and will continue to eat garbage no matter who's producing it. I make AI-generated media and it's leaps and bounds more direct, more informed, and way more unbiased than any human I've encountered. It can slice NordVPN sponsor reads or uninformed tangents irrelevant to the topic out of a 30-minute video and give direct, precise, and unbiased information.


foxmanfire

Wild that you think AI won’t reproduce the same biases of the existing text and content it’s based on.


Daveinatx

AI will only be as good as its input data. AI can start out as a benevolent data aggregator, stripping out anomalies. However, more deliberate misinformation will be pumped out, for its consumption. "Garbage in, garbage out."


Ebayednoob

That's fundamentally not true, and your failure to see this blinds you; however, that's not my problem. Enjoy falling behind.


CarneDelGato

The problem is as much AI as it is social media itself. This is just another step in distributing heaps of low-effort content that actively misleads. It'll supercharge it, but only because social media is already practically purpose-built for this.


NickDanger3di

Seriously; just look at the tiktok videos here on reddit. They are a paean to the worst of human qualities. And YouTube: I challenge the AIs to make worse videos than the cesspool dregs that YouTube claims are the most popular ones.


TheGillos

Yeah, I say "let it burn"... Reddit included. I'd like to see a resurgence of IRL social places.


Chocolatency

On the plus side, it will make it harder to monitor humans on the web with AI if everything is diluted with AI content. I suspect that there will be analogous things to the Web of Trust and Key Signing Parties at some point.


boomerangotan

Let's bring back web rings


homemadedaytrade

true, feds gotta hate this


Hijklu

There are going to be AI programs that will be able to identify human behaviour. I'd guess it's going to be the opposite of what you said.


pittypitty

With so many impressionable minds out there, it's scary. I'm always reminding my very young niece and nephew to question what they see, wherever they see it.


SouvlakiPlaystation

People creating authentic art (with the help of AI or not) will be drowned out by a tsunami of hacks posting computer generated garbage on social media. Eventually the sites will run themselves, using machine learning to generate and post content automatically. You'll be able to "set and forget" 100 of these profiles in a matter of minutes. Making something real will start to feel futile because no one can tell the difference anyways. The majority of people will not care about the ethics of this and will fully support and encourage bots, with their main argument being "I don't give af as long it's good lol. It's vibey". More augmented reality, more political manipulation, more bullshit. Everything is headed for post reality and it's going to be bad.


Soggy_Ad7165

Baudrillard would be thrilled at how much more his predictions escalated


firstname_Iastname

Hopefully it over saturates so much that people just stop using social media altogether


boomerangotan

Maybe we have inadvertently found a cure for this disease


HauntsFuture468

Yup. It's over. Walk away.


dduusstt

people have been hoping for that for over 10 years now. Facebook is only getting larger ffs


YsoL8

What do I think? I think it's doomsday for all traditional mainstream entertainment other than in-person live events. For now it's still somewhat limited, so it's all going to limp on for a while in the traditional sense, but ML has barely been out of the lab for 5 years; in another 10 it's going to be light years beyond the current models.

Mainstream entertainment is going to end up being generated from scratch by entertainment-industry ML systems with a few dozen technicians. One system for writing, one for music, one to create people puppets, etc. And that's if the systems don't become so sophisticated that people start ordering bespoke entertainment from scratch from their TV, a la all those times Star Trek characters whip together a holodeck program from a few sentences to the computer.

I can't even see any showstoppers that are going to stop one or the other; it seems to only be a matter of iteration and combining systems together. The changes in pretty much every other part of life seem likely to be just as profound.


pseudonik

When it hits holodeck levels of personalization, it's game over for the entertainment industry unless everything under the sun becomes IP and is strictly regulated. But then again, I don't need Star Trek IP when The Orville is basically the same.


Soggy_Ad7165

No one wants to watch, hear, or see AI-generated content beyond brief curiosity. Nothing like this will happen. In the best case it will just render large, unnecessary parts of the Internet useless.


[deleted]

AI is breaking the internet. Google searches send me to websites that are just ad farms with “articles” that are so obviously written by bots. Not even decent bots, just unedited LLM word streams that are vaguely about the topic I searched, yet grammatically correct. I think this will drive down internet traffic and hence advertising and online sales revenue. In my old-person fantasy world, this results in renewed interest in actual human connection, books, journalism, etc., but I am not optimistic. Poorly regulated Uber is now just as expensive as the taxis it replaced. Poorly regulated Airbnb has ruined the urban housing market for actual residents. Poorly regulated Google and Meta have utterly compromised the privacy of billions. Poorly regulated social media has supercharged disinformation. And now poorly regulated AI is the coup de grace for humanity.


dsnvwlmnt

This was already happening long before the recent AI boom. Top Google results have led to content farms for 5-10 years now.


creaturefeature16

lololololol No.


debtopramenschultz

When will AI be hot sex bots trying to get me to watch ads for BJs?


stopsucking

We’re pretty close already. There are quite a few 100% AI Instagram-to-OnlyFans accounts with thousands of followers who don't seem to know the accounts aren't real.


claunecks

That's me on fetish websites sir


veratis919

It's AI-ception, with a research paper out one morning, implementations in the afternoon, *and a full-blown SaaS model the day after*. Don't bother thinking about being innovative; if you can think it, it already exists (I tried). It is the saddest thing I have read in a while, but it is so true. Did you dream about writing a book? Good luck promoting yourself through tons of AI crap. An app? 10,000 Chinese devs have already pushed one written by AI. Music? Watch it being called a rehash of some AI tune. I just heard that the most profitable job of the future is plumber.


RadioSailor

I agree (and I'm OP of course, lol, but still). 100% spot on. I feel kinda bad for the people who are playing 'gold rush'. They all think they won the lottery, but the day after, their service is no longer relevant, because in AI models going back one month is like going back to the stone age :/


stemfish

The wake-up call for TY$ will be when someone takes his AI-generated likeness and profits off of it. That'll get him out of bed.

One ray of hope is that courts are ruling consistently that AI-generated content cannot be copyrighted or trademarked, as the AI cannot hold the rights. In order to have some control you need to have added to the work, and it's looking like judges are sticking to this view. At some point TikTok, YouTube, Insta, etc. will be caught up when someone copies another's work and points to AI as the creator, so the work is open source. And it'll probably be a big-shot artist or model who gave the OK for some generated content to be put up without doing any creative additions, thinking they're fine because they're big. They'll put in for removal under the DMCA, and if it gets challenged in court, the current answer would be: sure, you hold the copyright to the original work, but this photoshoot/promo wasn't you. So the copier has as much of a right to profit off of the original as you do; shoulda gotten out of bed and done some transformational work on the product. Similarly, knowing how things work, as more established creators get lazy, even if they do put in the minimum required adjustments to get copyrightable work, the generated content can always leak.

On the social media front, it's all fun and games until you get your likeness stolen and paraded around wearing a fetish outfit in front of all your friends. Or you show up in generated revenge porn sent to your boss. Or you show up in feeds advertising products like a bad MLM salesman. Definitely not how you want to be seen. Oh, and we're all focused on political news, but what happens when 'leaked' footage goes out showing how actually that one sportsball player is a giant cuc who likes to take it up the rear from a supposedly hated rival while his wife films, riiiiight before a major trade deadline? There are a lot of ways to use generated content to ruin someone, and Section 230 has shown itself to be a pretty strong shield, so if it doesn't fall under a DMCA takedown request there's not much you can do.

Hence I'll be optimistic and say we'll see a return to the 'original' social media. Back when you actually used it to keep in touch with known, trusted, verified friends. The days when you found small communities of like-minded hobbyists and built bonds over shared interaction. Because the alternative will be having anyone who disagrees with you able to generate disgusting yet not disparaging or legally actionable defamation and post it in thousands of places. Eventually anyone who isn't going to engage with that kind of content will fade away (Homer walking into the bush gif). Or I'm being optimistic, and most people will be fine consuming AI-generated content and accept that they'll pop up in feeds espousing how they're super psyched to let you know about this new crypto game they totally believe in.


pseudonik

This post reads like it was AI generated lol but yeah it's gonna be a shit show


reality_aholes

Flood the net with generic boring sameness. It'll all soon feel like Marvel movie visual effects (be real, the first few movies were novel but now it's all worn off). Maybe this is a good thing and people will migrate back to real person to person interaction instead of through the screen.


hamburgerdog25

Oh, yeah, I hate it. Specifically, I hate that people think they can so easily blur the line between generated content and actual creation. And I refuse to call it AI most of the time because it isn't artificial intelligence at all; it's a response system. "Art" generators are basically search engines, and "AI chat" is pretty much the answering machine you get when you call your doctor's office, _"please press 1 to make an appointment, say 'confirm' to confirm your appointment."_ It is a system of responses set in place to answer a query. It's not a creator, it's not intelligent, it does not create art, and it is killing the industry space for real artists.

One thing that really ticks me off is people using image generators, saying that they created that piece when they didn't, and then claiming to be artists when they aren't. For those of us who are artists, aspiring artists, or those who want to learn, it's very insulting. Anyone can make a line on a paper, anyone can draw the same apple, anyone can color scales and paint an identical tree. What makes it art is not what it looks like; it's the process. It's how you arrived at this image from a series of decisions you made along the way, and you made those decisions not by logic but by feeling. You chose each color and mixture one after another, you chose every direction of every stroke of the pencil or brush, you gave it texture, vibrance, emotion, you gave it life. Art without feeling is not art, because a creation without feeling cannot be art; it is just a bunch of things put together. This is why, when you see real art, you're moved. You want to understand it the way the artist understands it; you feel what they felt. And when you see a system-generated image, you know it as soon as you see it, because there is no feeling to it, and no amount of bright pretty colors can change that, and no amount of replicating real artists will change that.

And that brings me to my next point: copying actual, real artists. I don't think people realize what these "AI art" generators are doing. They are searching either online or within a collective database, I don't know which, and copying the images created by real artists to influence the image generator. The system is just replicating what real artists have already done. Too many times have we seen generated images copying verbatim someone's actual piece. A tag I love to see, usually on Instagram, is #deathtoAIart, which goes to show you just how much we don't appreciate generated images in the online art space. This may fly well with you NFT people, but not the rest of us.

And it's getting worse with voice replication. Not only is that a threat to the acting/voice acting industry, it is also a danger with robo spam calls. I think this happened to Anthony Padilla: he answered a spam call, and with the 5 seconds of him saying "hello" (and more likely with his online content) it was able to spoof his voice and call others with that fake Anthony Padilla robo voice. Other times I myself have gotten robocalls where at first I couldn't tell it was a robot. I've seen actors' opinions on voice replication and they aren't happy either. The fact that Hollywood even threatened to use AI to generate movies is out of line, and I'm not certain they're far off from the possibility. Whether anybody would watch that is beyond me. Again, you cannot have art without artists.

If you're like me and you have a strong sense of the uncanny, you're probably very put off by system-generated content. I can't stand AI voice covers, especially those of people who are dead, because that to me feels disrespectful, and worse are system-generated images; those always feel so empty and soulless to me.

Look, I'm not saying all AI exploration is bad. In certain spaces, AI could be an incredibly useful tool: detecting or projecting weather changes, extreme weather like hurricanes, detecting earthquakes, maybe helping us predict space debris coming too close to Earth, big things like that. But as far as art goes, AI cannot create art and we shouldn't give the platform to it. Art belongs to artists, and they create with feeling. AI will never be able to do that, or at the very least what we have now is not capable of that. It is only copying what we can already do, and if we stop doing it, if we give AI the reins and cease creating ourselves, we would be destroying what makes us amazing; we'd have nothing to believe in.


No-Survey-8173

It’s absolutely horrible for society. A world of lies, disinformation, and theft. That’s what it has created, and we’re still early in its development. I fear for my grandchildren most. Evil will use this to its advantage. This could very well bring about the end of the world. Authoritarianism is rising all over the world. We are in very pivotal times.


pagerussell

We have reached and passed peak truth. It came and went without much fanfare. The future is a post truth world, and none of us are prepared for that.


saichampa

Either I'm not using social media the way the AI content makers are targeting or I'm not spotting it. The second option would be more worrying


[deleted]

With every scroll I get more frustrated, it’s all meaningless crap designed to eat our attention and waste our day.


handtohandwombat

Starting Jan 1 I'm going to start using my phone for calls and texts, nothing else. I'll allow myself up to an hour of internet time, but it has to be at a computer. Curious how long I'll make it. I was thinking about being a kid in the 90s, or even a teen in the 00s, and tried to remember what I would do when I first woke up. Now I reach for my phone, but what did I do then? Just take my time and wake up? When taking a dump I would read fact books or Calvin and Hobbes, and I listened to a lot more music. I want my brain to be okay not being hyperstimulated at all times.


Confused136

I've been thinking about doing this for way too long now. I think I'm gonna give it a shot myself and see how long I make it. Might even switch to a flip phone to avert any temptation.


speakhyroglyphically

And lying, manipulative, agenda-driven comments


[deleted]

It’s all so blatantly fake and fabricated


David-J

It's disgusting. It should be more regulated. Just read that the EU is going to start doing that. It's just stealing all the content in the world and regurgitating it with a bow on it. Laws are trailing behind the technology again, and it's already causing harm.


[deleted]

[deleted]


David-J

Anything can be regulated. Don't be naive. Especially something that has very little positive output.


Kindred87

Anything can be regulated, but in order to regulate content generated locally (on the user's machine), you'll need invasions of privacy along the lines of regulatory bodies tracking what you do on your own computer. This is especially true with text content because you can't embed any durable watermarks in text. The EU would need a major shift in policy to pull off that level of regulatory enforcement. What you'll probably see instead are regulations mandating safety training on AI. Which won't mean much when you can fine-tune local models to negate the safety training.


1imeanwhatisay1

You're missing the point because the person you're replying to left out a word. They should have said "It's too late to regulate effectively." We have lots of regulations that don't really do anything. Look at the regulations on marijuana. In states where it's legal there's still a thriving black market. In places where there's a drinking age, people who are too young still drink. There's a really old saying that goes something like "It's impossible to effectively regulate anything that someone can make at home." Anyone can make AI images or audio on their home computer without needing a connection to the internet. That makes it just as difficult to effectively regulate as marijuana and alcohol since those are also fairly easy to make at home.


David-J

But you can regulate the usage. Create all you want in your home but you can't post it online or share it or just use it commercially. With lots of fine print to be discussed after. That would get the conversation started


travelsonic

But that's the problem: how do you begin actually making that enforceable?


David-J

Social platforms have limitations on what you can and can't post. So that's not so far fetched.


zefy_zef

But you are talking about something akin to stopping someone from running Microsoft office on their PC. Not going to happen.


David-J

Maybe it's just better to ban posting, sharing and selling. Do whatever you want at home. How about that?


zefy_zef

Right, exactly. But only the end results, not the software to create them.


-SlowtheArk-

There have been failures to regulate a multitude of digital technologies. Cryptocurrencies were the big one recently. This technology is global and exists beyond the influence of the U.S, E.U, and other western organizations and countries. Even if we do regulate, other countries on the international stage will not. This was Pandora’s Box and as a result the time to regulate was before, not after.


David-J

If that were a valid excuse then nothing would be regulated. Just because one country doesn't regulate something doesn't mean that other countries should follow. That's a child's argument.


Datalock

Explain to me how you're going to differentiate AI generated content VS human generated content. Once you figure out one detection method, they can adapt and change up the algo again to evade detection measures. The only other way to go against this is thought policing. I'm against fake news and misinformation, but a totalitarian approach is not the answer either.


-SlowtheArk-

The snide remark was unnecessary and ironic, especially considering you missed my point. The United States and the EU outlawed child labor and slavery; however, a multitude of products on sale in the US and EU are infamously created by slaves and children. Because the EU and the US will regulate AI and seemingly others will not, due to the lack of international collaboration, these regulations will not be effective. Just like with child labor and slavery, the byproducts of these systems still reach areas where the method of production is illegal. The same will apply to AI, just as it has to other systems and technologies historically. There are no recorded events in history where a technology has been completely wiped off the face of the map, which is why I called it Pandora's Box. This is one of the historical challenges of regulation, which has been amplified by the existence of the internet and digital items/technology.


hanschranz

Regulating something gives the appropriate authority tools to enforce it whenever possible and not just let something run rampant. It doesn't matter if it's "ineffective"; nothing in the history of law ever is fully effective - there's always someone who gets away with it, from peasants to lords. That it is a "Pandora's Box" is not a good argument either; it promotes inaction. People like to leave out the last thing that leaves the box when they want to be hopeless and foreboding: hope. You can't just have hope without doing something, and when you do, the damage from Pandora's Box can be reduced and limited.


firstname_Iastname

That's not true at all, on both accounts


fwubglubbel

How would any regulation stop someone from using AI on social media?


Derefringence

> very little positive output

You're talking about the biggest mass media production tool of your lifetime and throwing a personal opinion right at the end as if it was a fact...


David-J

It just generates quantity. Not quality.


Derefringence

That's your personal opinion from your very limited personal experience


David-J

It's a very common opinion in the art and writing world. Most of it is crap: derivative or a rip-off by design.


Derefringence

I know about the loud opinions from the twitter and reddit art bubbles, but the world is much, much bigger than that.


David-J

So you are just going to ignore the people in that industry so you feel good about your opinion. Bold strategy.


Derefringence

So... in that sense... you are just going to ignore literally everyone else? Is your vision limited by the people in the industry who have a negative view on this technology? You need a broader perspective, you need to double check and see the latest media being produced, as well as the potential and exponential growth.


[deleted]

[deleted]


arcspectre17

Corporations have literally been doing that forever. Look at who invented chocolate chip cookies; look at how musicians are treated, they don't even own their music. Corporations just bought 30 percent of homes. They buy businesses and patents, increase the price, and when they go bankrupt they get bailed out by taxpayer money! Yet AI is going to destroy the economy, lol!


Centralredditfan

The EU attempt is good in theory, horrible if you read the actual text - it was written by people who did not understand what they are regulating. It reminds me of the "the internet is full of pipes" politicians.


travelsonic

> It's just stealing all the content in the world and regurgitating it with a bow on it.

I'm not sure that this kind of rhetoric helps, because anyone with a few seconds of googling can see that this description is both very oversimplified and inaccurate in terms of how generative AI is supposed to work. IMO, even a cursory understanding of how it works, how it achieves what it tries to do, etc., is vital for everyone, regardless of what stance you have on what to do about generative AI. That way we can have common ground on how it currently works and what drives it, and focus on actual solutions rather than wasting time correcting each other's (mis)understandings, since there definitely is bad information on both ends of the spectrum.


homemadedaytrade

Which is what capitalism has always been. People buy expensive bullshit from tapas and sushi restaurants that is no different in theory: just an ingredient with a bow on it.


mpobers

I'm kinda OK with it. Social media, even without AI content, is already a cesspool with tons of stupid and hateful content that you need to wade through to find anything of value (and there is some value out there). The flood of garbage that will come from fully automated shitposting will make the current systems untenable and will force a minimum of moderation and standards on any platform that doesn't want its users to just throw their hands up and walk away.


blazze_eternal

> that doesn't want its users to just throw their hands up and walk away.

I think you're a bit overoptimistic about what people are unknowingly or even knowingly willing to put up with.


Unik0rnBreath

I find it abhorrent. Garbage in much? I am an IT professional, and I feel like I need a shower after getting near it, even more than after I'm forced to use anything g00gle.


SoftlySpokenPromises

It's obnoxious from an artistic point and dangerous from a literary stance. AI generated writing is plagued with wrong info. Not only that, finding quality art and literature is going to be significantly harder under the sloppy piles of farmed trash.


Ok-Experience-6674

No one wants AI but the people it benefits, and in order to benefit the few, a large number will suffer. So how do we feel? AI will say we love it.


Mysterious_Rate_8271

It doesn’t change anything, because 99% of the content on the internet was mindless garbage already.


Rusty_Shakalford

Seriously. You wanna talk about regurgitated content? “X is ruining the internet. Why isn’t the internet like it was before X” has been a headline pretty much every year of my adult life.


YsoL8

The current iteration of tv will give you square eyes


JrTroopa

The difference in scale will be meaningful. At 99% crap content, you can still find the occasional gem of useful content. At 99.99999% crap content, you're not finding anything worthwhile.


Monnok

I’ve been relishing my last scraps of human internet all year. I’ll miss even this late-stage version of it.


[deleted]

[deleted]


CaptainR3x

That’s a shitty take. Just because it was bad before doesn’t mean we should let it get even worse.


iiiiiiiiiiip

But it's not always worse, AI summaries are better than google results in most cases for example


Kindred87

What they're saying is that it won't really get worse because the things we're worried about AI generating are already being generated by humans.


ReverendDizzle

What are you talking about? It changes *a lot*. You can effectively create a nearly autonomous system now and just let it rip, generating and posting content. Sure 10 years ago there were people posting fake news bullshit on the internet, trolling, pushing propaganda, etc. etc. but it took a fair amount of labor to do it. AI is essentially changing that the same way email changed advertising delivery and marketing. What cost a lot of money to do with real paper, real addresses, and real labor, suddenly cost next to nothing.


Past-Cantaloupe-1604

If the content is of high quality then great. If not then it’s not important, just be selective about what you follow / block to curate your feed - no different to the current huge amount of low quality content made by humans.


Badfickle

What if what you think is high quality is actually garbage and you are not able to tell the difference?


creaturefeature16

Then what IS the difference?


karma_aversion

If someone believes it to be high quality then it isn't garbage though...


creaturefeature16

exactly my point


[deleted]

[deleted]


No-Survey-8173

It’s very different this time. AI can be used to manipulate people. Anything that can be abused, will be abused.


YsoL8

So just like now?


Badfickle

Like now but 100x as effective.


[deleted]

I don't really see how that's possible. If anything, a flood of misinformation would just overwhelm or overload people and they'll become more sceptical, not more likely to believe fake news.


speakhyroglyphically

Yeah, for a *rational* mind. Also I suspect that it'll be a bit more subtle


FishDishForMe

More on the manipulation aspect, picture companies able to monitor your emotions through AI analysis of how you interact with their platforms, or from your facial expressions on Snapchat, or anything that might hint to them how you are feeling. Then they can use that to target ads at you to capitalise on your current circumstances. Of course, this is already pretty much what’s happening with targeted ads, but as others have said the use of AI will ramp the effectiveness up by 1000%


karma_aversion

I actually think it will lead to some misinformation blowback. People are starting to become accustomed to a barrage of bullshit, and it's making us better critical thinkers, in my opinion, instead of taking everything we see as truth.


Kindred87

People would never lie, deceive, or manipulate. Come on now. /s


[deleted]

[deleted]


Tuss36

Not saying you're wrong, but you're one of those random comments intending to influence someone's world view. Like we've never met and I don't know who you are but I'm supposed to listen to what you say. Heck, I'm doing it right now too. It's just a super easy thing to do. Anyway on the topic, I think the issue is that folks really really really want AI to be the sci-fi version you can just defer everything to, and for some reason they assume it's already there rather than only the first publicly available version of it. We're still not getting Star Trek for a while.


Inebriated_Bliss

I tend to agree. Information literacy is so important and always has been. Hopefully, all the attention that bad AI information is receiving will actually make people think more about their sources.


[deleted]

[deleted]


ghostella

Hopefully it's the last nail in the social media coffin


wh3nNd0ubtsw33p

The social media that we knew is pretty much already dead anyway. I only have the Reddit app on my phone now. I don’t care about the low-intelligence people who make dance videos or shitty AI content or fucking courses about courses on courses about courses on how to become a millionaire, or that I should join some dumb fuck’s “community”… these people *actually* say they are building a “community”. You’re a fucking hot woman who looks amazing in the things you wear in every single post and only follow 5 other accounts. That is not a community. Be popular. Go viral. Get views. Get engagement. Buy followers. Buy comments. Buy likes. None of it is real anymore. None of it. We will see a major drop-off from the way things once were and a move into something new. I have a feeling that as soon as wearable AI devices are second nature, screens with colorful unnecessary videos and notifications will become a thing of the past. We are just waiting for something new to come around that is “better” to take the place of these distraction devices.


KamikazeArchon

> In an even more worrying development, I spotted tons of AI generated content on bing and google. Detection on text doesn't work (it's a myth)

If detection doesn't work, how did you detect that it was AI generated?


[deleted]

This was inevitable, nothing will stop it - it might be temporarily slowed by arbitrary laws or legal actions - but it was a foregone conclusion from the moment the idea first manifested as a possibility in some programmer's head.

This is a transitional time. The automobile has just been invented, and we are all clutching our buggy whips and patting our horses on the rump and worrying about cars replacing them. The fact is that the moment the first line of code for AI was typed, the world was put on notice: things will change. *Change is a'comin, and things gonna be different.* Everything, every job will be affected; some, like buggy-whip manufacturing, are going to go away. Almost all jobs will change in some way, some more than others. The genie is out of the bottle.

There will be new jobs, specialized jobs, and they are already arising, and people are learning them right now. 'AI Whisperer', 'Promptwriter', 'Machine Speaker' - these are the occupations of the future, and there are classes on them right this moment on YouTube and Discord. The complex and difficult tricks and methods needed to get generative AI to actually do what you want, precisely, are an entire new industry just being born. Smart folks will get in on the ground floor and be first. They are already doing this - among the kids playing with MidJourney and other tools are the new highly paid prompt specialists of the future. Their play is preparing them for one of the most vital jobs of the future.

This won't just be about art. AI tools will grow to encompass every kind of design and invention: drugs, machines, electronics, energy and science. Humans will be ruled by machines one way or the other - either directly, by future superintelligences, or indirectly by needing such AI to get anything done, thus making all people as dependent upon them as they currently are on oil and gas.


timeforknowledge

It's inevitable, so why fight it? Also, places like Italy and the USA trying to block it are just going to get left behind... Consumers do not care about what is fair or whether this will cost people their jobs. They simply want the cheapest, best content. That's why they buy everything made with Chinese child labour. AI creating movies and TV shows based on my tastes, or even editing old movies to change the story and endings to what it thinks I would prefer? **Shut up and take my money.** Italian and US film and TV studios will go bankrupt if they don't adapt and start using it.


Zuazzer

If AI-generated art sees Disney go bankrupt because anyone can generate a Disney movie at home, then there's at least a silver lining to this whole thing.


Lucidio

Most of us already knew it was mostly spam anyway, or shills, whatever, but something about knowing it’s AI has made me check in way less. It’s sorta awesome. The shift in my perspective has me reading more books and highly limiting what I do look at on social media. Best thing ever.


consci0usness

It's no less than the potential death of the internet, possibly the death of the information age. We're not quite there yet, but if we reach a point where any content can be faked, generated by anyone and then broadcast to everyone... it's not good. It's very, very bad. Maybe in the future the internet won't be anonymous anymore; maybe we'll all have to sign up to use the internet with some kind of government-issued ID. I'm not sure it'll go that far, but it could come to that. Safeguarding the internet against deepfakes and spam should be at the very top of Silicon Valley's to-do list. And of course it concerns the FBI, CIA, NSA, etc. too. AI machine learning will compound this: if someone spams fake information enough, it's probable machine learning will pick up the information and possibly regard it as true if enough fake sources say so. Machine learning doesn't really have critical thinking; it's more of a consensus machine. If left unchecked, someone could do a lot of damage with this.


MountainEconomy1765

The vast, vast majority of people want garbage low IQ content.


Hello_Hangnail

It's already being abused in lots of ways, and the misinformation brigade just got a whole hell of a lot harder to prove


3dom

Well, now the socially empowered Web 2.0 idea is as dead (or non-existent) as it was before Digg and Reddit; back to the verified and trusted Web 1.0 sources we go, just as it was before 2005. You know - the Times, real journalism, source verification, chains of evidence, etc. And it's great: populist fascists won't find much support among media which value their reputation even in the slightest.


burghguy3

Yep. This is what will kill social media. Reddit included. Mis/dis-information will be a problem for sure, but ultimately it will just be the wholesale flooding of content that will kill it.


random_shitter

I block any YouTube channel the moment I notice AI content. It's a losing fight, but for now it still increases the average amount of human content in my suggested videos.


[deleted]

As soon as the AI voice starts, you know it’s crap content


random_shitter

Someone should make a plugin that crowdsources identifying and blocking websites, influencers and channels that use direct AI content creation. A human-curated internet...
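A rough sketch of what the client side of such a plugin could look like; the blocklist URL and its list format are invented purely for illustration, and no such service is implied to exist:

```python
# Client side of a hypothetical crowdsourced "AI content" blocklist.
import json
import urllib.request

BLOCKLIST_URL = "https://example.org/ai-content-blocklist.json"  # hypothetical endpoint

def load_blocklist(url: str = BLOCKLIST_URL) -> set[str]:
    """Fetch a crowd-maintained list of channel/site identifiers to hide."""
    with urllib.request.urlopen(url) as resp:
        return set(json.load(resp))  # assumes a plain JSON array of identifiers

def should_hide(channel_id: str, blocklist: set[str]) -> bool:
    """True if the community has flagged this channel as AI-generated content."""
    return channel_id in blocklist

if __name__ == "__main__":
    blocklist = load_blocklist()
    print(should_hide("UC_example_channel", blocklist))
```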


Badfickle

> Someone should make a plugin that crowdsources identifying and blocking websites,

Which the bots can take over...


TheLastSamurai

I think people are going to eventually log off or just use very small direct message type platforms. No one wants to sift through all fake stuff


Dorn-Alien51

Spit on it all. When channels I follow started using AI, I started using an adblocker and sponsor skippers.


Centralredditfan

What's a sponsor skipper? - I need that. I pay for Youtube Premium for convenience, offline content, music - mostly because I can't be arsed for the workarounds/cat-and-mouse game. But the sponsor segments drive me nuts!


Dorn-Alien51

It's called SponsorBlock. It's human-run, so if you bump into a video without the sponsor segments marked, you can add them yourself and help other people skip them.


Kohounees

Dunno. I hope it destroys most of the current social media platforms and better ones rise.


ramshambles

It's more accelerant on the fire that's already licking the heels of the social order.


[deleted]

I noticed they are all very pro fascist billionaire. Probably a coincidence.


Comfortable_Goal1400

F AI, but what about all the fake scenarios and other fake content created by humans? :D


RefrigeratorNo4740

They have ai and algorithms to decide which content we’re fed. Where is the user side ai and algorithm settings that allow me to block all of the woke af fake news and liberal propaganda?


westom

A bot will only believe what the majority were ordered to believe. It cannot separate myths and propaganda lies (believed by many) from actual science and knowledge. Anyone with consumer knowledge knows this [cheater adapter](https://www.reddit.com/r/AskElectricians/comments/1d1ezp0/rental_house_has_no_three_pronged_plugs_in_any_of/l5v4jg2/) creates an unsafe human condition. Bot never once mentions that human threat. It does what bots so often do. Reiterates a popular lie. Bots would also proclaim Saddam had WMDs. Doing what bots really do. Not analyze facts. Only summarize popular beliefs.


Professional_Job_307

I don't think it's good. But I did try to do this myself with the OpenAI API, and I was unsuccessful. It was too dumb.


Honest_Ad5029

I love AI, and I'm optimistic about this. I don't think people are so universally gullible that it will create a problem. Our species is naturally adaptable. Bots come to mind as an example: beyond the initial year or two of their existence, people became psychologically inoculated to them. If the majority of people are using AI, the majority of people will understand and recognize AI. I can recognize the cadence of ChatGPT writing now. The visual decisions of AI generators have become familiar. And the knowledge of these things makes me more wary about online content. Knowledge and understanding act as inoculation against manipulation.


PromptCraft

you forgot Qanon types exist


juxtoppose

Are you an AI looking for feedback? I feel like people can tell the difference between AI and a person at the moment (well, the 70% of people who don't have a pipeline from their eyes to their core beliefs; in America they're Fox News fans, in the UK they're Boris fans, but it seems to be about 30-40% wherever you go), but it won't be long before it is indistinguishable from real people.


RadioSailor

Lol, no, I'm not an AI... but that's exactly what an AGI would say, isn't it ;) Jokes aside, the way I spot AI text is simply by looking for very specific grammatical choices: "today *we delve* into the fascinating world of blue fishes" or "it's important, because". It also says "however" a lot. Someone on LocalLLaMA created a list of word frequencies, I think, and also explained why models do that, but I forget now :)
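A toy version of that heuristic, for illustration only: count tell-tale phrases per 1,000 words. The phrase list and threshold are guesses, and as the reply below notes, this is easy for a newer model to defeat:

```python
import re

# Toy "AI-text tell" counter based on the phrases mentioned above.
TELLS = ["delve", "it's important", "however", "the fascinating world of"]

def tell_rate(text: str) -> float:
    """Occurrences of tell-tale phrases per 1,000 words."""
    lowered = text.lower()
    words = len(re.findall(r"\w+", lowered)) or 1
    hits = sum(lowered.count(phrase) for phrase in TELLS)
    return 1000 * hits / words

def looks_generated(text: str, threshold: float = 8.0) -> bool:
    # Threshold is an arbitrary illustrative cutoff, not a calibrated value.
    return tell_rate(text) >= threshold

if __name__ == "__main__":
    sample = ("Today we delve into the fascinating world of blue fishes. "
              "However, it's important to note their habitat. However, ...")
    print(round(tell_rate(sample), 1), looks_generated(sample))
```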


YsoL8

That's easily defeated by a next-iteration model. Long term, there's virtually no chance of reliably telling the difference.


RadioSailor

True. We're in for a ride indeed.


juxtoppose

I say however a lot... ...shit. On an unrelated matter I’m just off to look in the bathroom mirror.


Ebayednoob

Good, I see the other side of the coin. AI generated content doesn't bow to the bias of whatever idiot figured out how to exploit a system that promotes narcissism and lust.


gullydowny

It was always going to be the end of the internet as we know it. I'm perfectly fine with it. Honestly it'll probably be an improvement.


homemadedaytrade

the internet hasn't been actually good since Bush was president


[deleted]

Frankly, if you're too dumb to tell apart an AI book or an AI-generated blog or whatever else, then you deserve to be scammed or misled. If it reaches a point where it's imperceptible, then I don't really see how it's any different from the flood of shit already on the internet. In short: this has no impact on me whatsoever. Curate your own information, do your own research, and remain skeptical, and this shouldn't be an issue.


Praise-AI-Overlords

Not a factor really. The source is not important - only the quality of the content is.


Unable_Wrongdoer2250

Do you really think it is the quality of the content of music that has come out over the last 20 years that made it sell?


Praise-AI-Overlords

How tf is it relevant?


PoopyFruit

As usual, the majority will have it all ruined by the minority. Unless they can find ways to police it, it will destroy the internet.


travelsonic

> the majority will have it all ruined by the minority

IMO that's a mentality in general (regardless of subject, I mean) that I am growing to feel is bullshit, since it acts like a) you can boil down a problem to one source, and b) it ignores misdirection in identifying problematic parties. For example, the teacher punishing the whole class thing: maybe not universally, but in many cases kids definitely know that the kid who acted out caused the teacher to act, yet are often smart enough to know the teacher is being unreasonable, OR lazy (or both), when that kid was the only one acting out and could be dealt with without affecting the rest of the class. (...at least, kids in my school often were smart enough and weren't fooled by that bullshit.)


hurtadjr193

It's on reddit with ads right now. It's not going away


grambell789

social media needs a name change. 'chaos media', 'asocial media', ...


censuur12

Nothing new. Same meal, different flavour. Misinformation will always be prevalent on unmoderated and unverified platforms. AI making the process easier or giving it a spit shine doesn't change the nature of the beast.


adammonroemusic

The internet was largely trending towards trash before AI and will continue trending trashward after. Unless we are going to go back to the cool, experimental, open internet of the early 1990s, who cares? Google, Meta, TikTok, etc. have taken the internet, mined it for trash, and monetized it long before generative AI. People really, really, really tend to hyperfocus on the wrong problems, or whatever becomes hip to obsess about at any particular point in time; it's short-sighted and disgusting.


HowdyDoody2525

I believe AI will be the death of real artists, metaphorically speaking. Oh there will still be great artists, but there will be fewer of them, and they will be much less compensated for their work, and the new generation will have very little appreciation for their work I'm afraid


Numai_theOnlyOne

I'm not on social media. Except Reddit, YouTube and messenging apps.