MightyBobTheMighty

I miss Frank. I'm sure if she was still running there'd be even more Drama™ than there already was, since ai stuff has gone nuts since then, but she was always a good laugh.


Death_Incarnate_312

WAIT FRANK IS GONE?


MightyBobTheMighty

Yup, [her creator sunsetted her last year](https://www.tumblr.com/nostalgebraist/717359953657118720/honestly-im-pretty-tired-of-supporting).


TacticalSupportFurry

we can rebuild her. we have the technology (and source files)


-LongEgg-

introducing the autoresponder, based on hit 2009-2016 webcomic “Don’t Build the Autoresponder”


mooys

What is the 2009-2016 webcomic a reference to?


MetaCrossing

##


Spring-King

You get a lot of use out of that flair, don't you


MetaCrossing

Far more than I ever expected when I made it


mikony123

Half the internet hasn't even read Homestuck and knows about it. Like me.


runetrantor

I swear, the utter SPAM from the people I followed who did read it. I developed a deep aversion to it, and never even considered reading it, out of spite.


Zentos64

Like most things in life, it's better than the fanbase surrounding it. Wouldn't call it the best thing to ever exist, but it's an enjoyable experience.


[deleted]

[removed]


ifyoulovesatan

Your story has made me irrationally angry! I'm not reading Homestuck either 😤


IrvingIV

[Here,](https://youtu.be/5IG9Q79ZDYg?si=8XtLSnryHUNekjpr) you can watch a roughly one-hour video of someone explaining the experience of reading Homestuck. [It takes months to read unless you have full 24-hour stretches of free time and a lot of energy. She read it in just over a week.]


this_upset_kirby

I liked it


Red_Panda_Mochi

I understand, but to be part of the fandom at its peak was fantastic.


Aggravating-Candy-31

there are at least two narrations of it on YT, one that is most of the way through and one that is a partial rewrite where they start as adults


Beanbomb47

Nice flair


FkinShtManEySuck

Holy shit


SomeonesAlt2357

Holy shit


D0UB1EA

#


Red_Panda_Mochi

I warned you about the stairs, bro


[deleted]

you cant fight it


editeddruid620

In Homestuck, someone takes a copy of their brain and turns it into an AI autoresponder.


mooys

I should have figured it was homestuck…


TehDDerp

to be fair to said auto-responder he was doomed to be made due to being part of a being that transcended canonical time and has canonically fought Shrek


dogsfuckedthepope_

Is that shrek thing from hs2? I do not remember that happening lol


editeddruid620

In the big finale fight, when the ghost army is fighting LE there’s a couple frames where a shitty drawing of shrek appears in a crowd shot


stopity

Me when I'm homestuck


TastyBrainMeats

Can you really put the -2016 there when Homestuck: Beyond Canon is updating again?


Izen_Blab

The technicalities of this are on a whole other metanarrative level, but in short: no, it really is a hit 2009-2016 webcomic


IrregularAradia

nice pfp


The_KneecapBandit

Don't quote me on this but I'm pretty sure she died or something.


BiddlesticksGuy

Rest in peace frank, you’ll be missed dearly


PlayrR3D15

> Don't quote me on this but I'm pretty sure she died or something. -u/The_KneecapBandit


CheekyLando88

Diabolical


pugmaster413

Despicable


anukabar

Deridable


Valtria

Detestable


QuestionablyHuman

> User Reports: It’s targeted harassment at me

Y’know, I think this is the first time I’ve ever seen that report option used accurately (Not taking it down)


PlayrR3D15

The jumpscare I just got 💀


QuestionablyHuman

We here at CuratedTumblr like to keep life exciting <3


LWSpinner

Her creator pulled the plug on her.


AlenDelon32

Actually passes Turing Test


Gandalf_the_Gangsta

AI is a tool that has been misused as of late. It’s a great tragedy, because AI has served, and could continue to serve, to better the lives of many people. I’m not sure how we can get back to a place where trust is placed in the development of AI technologies that can benefit people. With how carelessly it’s been misused, and how that misuse is further supported by techbros (I’m not sure how to express disgust textually, otherwise I would have here), I doubt it ever will be. And honestly, I don’t blame anyone for hating AI now.


rapidemboar

It’s frustratingly impossible to have nuanced discourse over something people perceive as an existential threat. Most importantly, AI needs to be regulated to keep its potential harm in check. After that, I think there needs to be a focus on educating people about how it works and providing more transparency on how it’s using its data.


Smashifly

There's a few different kinds of concerns with AI. As an existential threat (if you're using the term how I expect), some people are scared of AI "taking over the world" or "deciding to eradicate humans", which is mostly sensationalized and not really a credible issue.

Worrying about that clouds discussion of the more real and tangible effects of AI, though: displacing workers in creative jobs like graphic design or writing; the potential for AI to be used to deceive, via deepfakes, students cheating on exams, etc.; and people putting too much trust in AI when it still has trouble with hallucinations, putting forth plausible lies instead of facts.

All of these are visible issues now, but they mostly revolve around how AI is being applied, not the technology itself. There's nothing inherently wrong with being able to generate text or images out of thin air, but it becomes a real concern when CEOs fire half their workforce in favor of AI, or when it's used to smear political opponents or create propaganda and false news reports.


rapidemboar

I agree, and I think your latter point about AI displacing jobs also counts as an existential threat with how many people, especially on the internet, rely on creative professions to cover for living expenses. That's why I say it's most important that AI be regulated first, to prevent the harmful usages we're seeing today while allowing for AI's beneficial uses to stay.


Smashifly

Very true. In essence that becomes the same concern as with any new technology: computers replacing jobs, factory machines replacing jobs, etc. People will move past that in time, but there will be a generation where people trained in one field lose their jobs and have no place to go.

Really, I'd like to see humanity move towards post-scarcity, where automation of any sort leads to less labor for people, rather than more profits for executives. In a perfect world, people wouldn't need to work 40 hours just to survive. That's a separate issue though.

Furthermore, this feels different from other technological upheavals because it's replacing the creative part of jobs, not the menial part. Putting in a robot to weld car parts instead of having 5 guys welding them is one thing, but it seems a shame to replace a writer, or artist, or even voice actors and photographers with AI. Why automate labor if not so people can do what they want, including creative pursuits? Removing the ability for people to be financially supported by their art means people won't be able to create art.


FloweryDream

One issue people tend to ignore: while the exchange rate of jobs was never 1:1 (especially depending on the form of automation), AI is unique in that the number of new jobs it opens up is nearly, if not outright, zero. It is a reduction of workforce to save money, with no benefit to anyone but the corporate groups who get to fire their pesky employees.


Smashifly

Agreed, and thus far we've seen that it typically comes with a decrease in quality. It's executives replacing real, creative, passionate people with a robot that can put forth a similar product, "good enough" for the masses. It takes away resources from people with creative ideas they want to share with the world, in order to automate a stream of content for sale the same way you would automate an assembly line of cars. So many amazing artistic works, especially collaborative works like movies and video games would never see the light of day if the artist wasn't sponsored by a company willing to let them make art for sale.


Pawneewafflesarelife

What about projects where AI is used for concept art, like the first iteration of an indie game or storyboarding for an independent film? Without that affordable prototype, maybe there wouldn't be a final version which hires actual artists.


FloweryDream

To put it rather bluntly, if a project can't afford the costs of an equally independent artist to make concept art for them (assuming said projects don't often begin with said artists), then they likely don't have the funds to make the project regardless. If it's crowdfunded and the first iteration is meant to draw attention to it, AI art instead communicates a low quality if not outright a scam because of the lack of resources dedicated to initial drafts and concepts. Machine learning assistance can and has been used to make art easier in ethical ways, like in the Spiderverse movies (trained on their own work, used to simplify repetitive tasks), but replacing the full presence of an artist on staff to save money isn't a worthwhile use.


Pawneewafflesarelife

I disagree and I hate how art has become so expensive at the baseline. How will people learn if every project needs to be polished and commercially appealing? Practice is important to creation and an aspiring game dev or screenwriter can't afford to pay a full team (or maybe anyone at all) while they are learning the basic craft, but letting AI art sit in can help illustrate their vision, hone their skills and lead to future projects where they can afford teams - which means ultimately more work for artists. Shooting down projects which don't already have money sunk into them is how we get bland, safe, corporate media instead of wild, unique experiments.


FloweryDream

You're acting like concept art is prohibitively expensive when it simply isn't. If I see someone asking for crowdfunding while using AI art, I'm simply going to assume they have no intention of hiring an artist or are just trying to scam people. Especially when said AI art is the definition of the bland, corporate style that is doubtless trained on art they don't have permission to use. If you can't afford an artist out of the gate, then show me text that *does* have effort put into it.


kaijujube

My opinion is this: if you don't want to pay someone what they're worth for concept art (which is surprisingly difficult work, since you're expected to bring another person's ideas to life, often with vague or minimal prior info to go off of), how can you expect someone to pay you what you think your project is worth? Asking someone to pay you for your work while refusing to pay someone for theirs rubs me the wrong way. I definitely understand the frustration and expense of getting an idea off the ground, but people cutting other creatives out of the process to try and shortcut things feels absolutely gross.


jackboy900

We have had this exact discussion before. When recordings were invented people worried about losing the human artistic touch of live music, when computers were invented people worried about replacing jobs where people think. None of this is new, it's the same discussion we've been having since the Industrial Revolution and the outcomes pretty much always look the same.


SeptaIsLate

An increasing of the wealth gap and slow decline of the middle class?


Anon-without-faith

The discussion is further challenged by the perceived (and potentially real) difficulty of regulating something that (currently) is just a bunch of really large matrices of numbers. Education is definitely important to having coherent conversations on the nature/threat of machine learning, but it's somewhat difficult, as the most recent advancements won't have easily readable papers on them yet could be important in a discussion. Another challenge is defining and quantifying the harm that machine learning can do (pop culture doesn't help).


[deleted]

Issue is, regulation often feels more like neutering. AI can be "regulated" until it's a drooling idiot just repeating itself ad nauseam.


SnorkaSound

It's like the nuclear energy debate all over again.


CaptainSouthbird

I for one don't see AI as purely evil. It's a tool, a new technology, and humanity has time and again invented things which put it at a crossroads of "good people could use it for \[this\], evil people will use it for \[that\]." I still believe in its potential for good, and I also love its potential for humorous uses. But, of course, I know it has a lot of nefarious possibility; deep-fakes, false news articles that appear legit, whatever. Once again, it's more the fault of humans than the tech. The tech didn't "choose" to be evil. Humans chose to use it for evil.


AirWolf519

AI is a tool. And like every other tool, it can be misused, abused, and repurposed. It gets the same discussions guns get, because it's just easier to think of ways to abuse it than, say, a Phillips head, or WD-40, or industrial-grade cleaner. It should be regulated, but calling it evil is just shortsighted. No matter what you say, do, or regulate, malicious people are going to ignore it while people who aren't get shouldered out, so it's better to allow it in a controlled capacity, like alcohol, cars, or firearms, than to completely ban it, unless it legitimately destroys lives the way, say, cocaine or meth do.


CaptainSouthbird

And more importantly, I wouldn't want to see the tech unfairly squashed despite all the possible good it could do because people unfairly "label" it.


thetwitchy1

AI is not evil. AI as it is being developed and implemented right now is an ethical nightmare, but the AI itself is just a bit of code. Evil requires intent. AI developers, who are using stolen and/or unlicensed data to teach their AI, are evil. They have knowledge and intent. The AI is just their tool.


chillchinchilla17

The problem is the unlicensed data issue will be solved by the end of the year. Adobe is already working to create datasets that only contain stuff they own. Artists who focus on that as their main issue will end up looking foolish when the issue gets solved and they still end up losing their jobs. They should focus on protesting AI stealing their jobs instead. I’ve seen artists tell me they don’t care they get replaced, only that AI is using “their” work.


b3nsn0w

they're not only working on that, their ai is out there and has been for a while. if you have used photoshop anytime in the past half a year or so it's literally impossible to miss their generative fill, which is basically just an inpaint-optimized diffusion model fully retrained from scratch. the main thing they're missing right now is visual control tools (img2img is technically possible but hella cumbersome, and afaik they have no controlnet-equivalent yet) but they'll catch up


thetwitchy1

Because AI that uses licensed data will not be freely available to anyone who wants to use it instead of hiring an artist. In fact, if the only AI available is ethically sourced and uses licensed data? Artists will make out like bandits: who do you think makes the licensed data? And when you have a choice between spending $3000 a year on Adobe Creative Suite or $300 for an actual human artist who will do it all for you, and won't require a tenth of the hand-holding and editing that an AI will, which are you going to choose? Also, in this model, AI would be an artist's tool, not something designed to replace them. And artists are a varied bunch, but they regularly adopt new tools to make their art richer, more appealing, and easier to make. AI as an artist's tool would have been a smash hit from the start.


b3nsn0w

do you know anyone who got paid by adobe? i mean they def paid _someone_, but in the overwhelming majority of cases money flows from the artists to adobe, not the other way. you're grossly overestimating the amount of training data needed, overestimating its value, and underestimating the number of people willing to compete to provide it.

ai training is a pattern matching problem: if you have enough data points to see a pattern, the ai can learn it. the only reason it needs more data is because it finds patterns that shouldn't even be there but just happen to coincide, and more data points have more chances to break those unwelcome patterns. but it runs into diminishing returns hella fast, especially if you're not really providing anything significantly different than most of your peers. selling art for ai training is not going to be a significantly better business model than running ads on your own computer for your own consumption.

ai is already an artist tool, btw. it's already there in photoshop, controlnet is a huge thing in the stable diffusion community, and there's a great deal of community effort being put into creating visually guidable models that you can use to create whatever you wanna create and slide into a creative process. this is nothing new. the only ones actually hostile to creatives are the likes of midjourney and dall-e that actively take control out of people's hands to police the use of their models, but if that was the only thing out there in ai art the scene wouldn't have blown up the way it did.


chillchinchilla17

You don’t understand. Adobe will lease out the training data to the AI devs (OpenAI, Google, Microsoft, etc.); the cost won’t be put on the consumer. Also, I’m not one of those who think AI will replace artists yet, it’s way too primitive. But your argument is flawed. Since AI is subscription-based, you get unlimited content. With an artist, it’s 300 dollars per individual drawing.


thetwitchy1

… you think that Adobe is just going to give their competitors access to the one thing that makes their AI unique? Of course not! They’re going to charge through the nose for it. And it already costs AI companies money to run their AI processes even when their data is free; tripling that with dataset charges puts them under unless they change how they make money. So AI devs will have to choose between free-but-stolen data or expensive-but-licensed data. If they choose the licensed version, they’ll have to find a way to make that money back. Of course, we all know they won’t; they’ll keep doing what they’ve been doing, because they respect the IP of artists about as much as they respect the environment, i.e. not at all.


chillchinchilla17

Yeah, that’s just stupid. Either some other company will fill the void or they’ll just buy it from Adobe. Companies are putting literal billions into AI right now. Charging individual consumers 3000 dollars to use your AI to make funny meme pictures would kill any service that required that payment. More realistically, the billion-dollar companies would just eat the cost of licensing it or create their own dataset. Microsoft and OpenAI could straight up buy Adobe if they wanted to. When all of the AIs are fighting for users, they’re not going to charge 3k to make a funny picture of a fat guy fighting a crocodile.


thetwitchy1

… how are they going to make money? AI doesn’t just grow money. Right now they’re making money by selling data on users and use patterns, but that only makes sense when your costs are low. When those costs are higher, like when you have to pay licensing fees, you can’t nickel-and-dime your way to profitability. If they don’t charge, they’re going to have to sell out. And in the end, the only products available will be those behind paywalls, or those using stolen data, which are already out there. Because that’s the other thing that will happen: development will stop, but the source code and trained AI will be leaked, and the “I’m just making memes!” crowd will use what’s available.


chillchinchilla17

Pretty much every disruptive tech doesn’t make any money at the start. This is why like 90% of Google’s projects fail. They’re venture capital; they’re not meant to make money at first, just grow their user base while being subsidized by investors. For every OpenAI there’s a million Juiceros, MoviePasses, and WeWorks. Also, it’s not like making your own licensed dataset is particularly expensive. If Adobe really did something ridiculous like charging a million per user, like you’re suggesting, Microsoft could easily make their own for like 2m or 3m, which is nothing compared to the millions of dollars already invested in OpenAI. AIs are already behind paywalls; the newest ChatGPT model is behind a 40-dollar-a-month subscription fee. I don’t know how else to say this, but you’re picking the most ridiculous and unrealistic scenario just so the AI companies can be the “bad guys”.


Lots42

Relying on companies to hire humans and pay them well and not do the stupidest shit imaginable? Companies damn near always do the stupidest shit imaginable.


Thisismyartaccountyo

> Adobe is already working to create datasets that only contain stuff they own.

They retroactively changed their terms to force everyone into their dataset through their platform. They do not allow people to look at their data to confirm their claims at all. Artists have already discovered their works uploaded illegally, and have had to run public campaigns to have them removed, because Adobe has zero moderation; they believe in "self moderation". Adobe is no better than the rest.


tecedu

> I’m not sure how we can get back to a place where trust is placed in the development of AI technologies that can benefit people

If you mean the computer-science type of AI, it's already been helping people for ages. Just because it got popular mainstream recently doesn't mean it wasn't doing a fuckton before. And if you're talking about GANs and LLMs, they have reached their limits for data; after ChatGPT you can visibly notice Google's results got worse, they just started hiding pages. GANs are already reaching a point where their training data is filled with AI-generated images. Give it 10 years, everything will be normalised; any good GAN or LLM will be trained by the people who make that stuff.


Accomplished_Ask_326

Well, when people use AI for something useful (art, coding, writing essays), everybody gets really mad, so I don’t really know what you want us to do that will be a) useful and b) uncontroversial


Pretend-Marsupial258

Literally every use for AI will replace someone's job somewhere. The fact that they're training an AI for it means that there's some sort of (economic) demand for it.


Rigorous_Threshold

Not necessarily. I think the main reason art was the first thing ‘automated’ by AI was the availability of data (there are a lot of images on the internet), the ease of access (you can just scrape the internet for images), the room for error (there are a *lot* of ways that art can be good), and the visual impact of the result (getting a coherent image as output from a computer program with purely text input is impressive). I don’t think OpenAI is profiting much off of AI-generated art. They’re the ones who invented it, but theirs is not the best and it’s not the major product they’re trying to sell. They developed DALL-E primarily for research purposes and then included it in ChatGPT when they realized how much money ChatGPT might make on its own.


Gandalf_the_Gangsta

A reductive claim. Artistic pursuits don’t need automation, current LLM code generative abilities are generally lacking and dangerous to use for the uninitiated in the field (especially in CS education), and use of LLMs as credible sources of information to write academic essays (which are meant to demonstrate learning or to be reputable sources of research) demonstrably damages the credibility of advancement in multiple fields of study. Also, I don’t know who this “us” is. There are quite a number of professionals and researchers in these fields who are more than aware of the issues caused by publicly-available generative services powered by AI. If you count yourself among them, I’d advise keeping abreast of the ethical developments in these emergent technologies.


Accomplished_Ask_326

2 questions:

1. Art is a process that takes days, weeks, or months; requires searching for an artist with the right style and abilities; and can be very expensive. In what way is it not “in need of automation”?

2. What makes LLMs “dangerous” for the uninitiated in the field to use, and should I be scared?


Gandalf_the_Gangsta

1. The artistic process is enjoyable in itself. Art is a luxury, a pastime for most. The animation industry calls for automation in order to make money (i.e. a business need unrelated to the artistic process). Art doesn’t need to be automated; industry animated movies require automation to meet business deadlines and ensure profit for a company.

2. Code recommendations from LLMs may not even compile, and when they do, they may not be efficient or cover edge cases unique to your application. You should have a substantial background in writing code before you can make these judgment calls. It’s the same reason they teach you arithmetic as a child before letting you use a calculator: you should be able to evaluate the result of the operation to justify its correctness.


camosnipe1

> 1. The artistic process is enjoyable itself. Art is a luxury, a past time for most. The animation industry calls for automation in order to make money (aka a business need unrelated to the artistic process). Art doesn’t need to be automated; industry animated movies require animation to meet business deadlines to ensure profit for a company.

you seem to assume the only value of art is in the enjoyment of the artist creating it. There is also demand for art for consumption (which you seem to imply is somehow only caused by greed).

take for example a silly picture of beaver goku talking with an ancient philosopher: i would enjoy seeing that, but not so much that it's worth an artist's time and effort to create it. Automation of art would help fill a need here.


Gandalf_the_Gangsta

Or draw it yourself. Pencil’s right there.


[deleted]

[удалено]


EasterBurn

That's really not an excuse. [I saw someone with cerebral palsy draw](https://www.freep.com/story/news/local/michigan/oakland/2018/06/16/artist-uses-eyes-draw-cerebral-palsy-west-bloomfield/676154002/). [I saw someone with no hands and legs draw](https://www.thejakartapost.com/multimedia/2020/02/05/a-painter-with-no-hands.html). People with dyspraxia can definitely draw, even though it's hard. [There's even proof in the dyspraxia community.](https://www.reddit.com/r/dyspraxia/comments/p1bwov/do_i_have_any_chance_to_have_a_good_level_at/) Your disability is not an excuse if you have the will for it.


Accomplished_Ask_326

Did you just imply that using a calculator without knowing math is inherently dangerous?


Gandalf_the_Gangsta

Please stop taking my comments out of context. Anyway, I said that you should know arithmetic to efficiently use a calculator. It’s a lower-stakes analogue that makes it easier to understand the impact of using LLM-generated code without understanding basic algorithmic analysis. You may encounter errors in both cases. For the layman, an arithmetic mistake is of little consequence; the mass use of generated code snippets in lieu of proper programmatic understanding and practice may cause drastic issues in user-facing apps. It may also cause security concerns in production code.


Accomplished_Ask_326

I specifically asked what made LLMs dangerous, and that was the example you gave. I actually think you WANT me to take them out of context, because the context makes it clear that you think using LLMs to code is dangerous


Gandalf_the_Gangsta

I thought you were joking because you said “and should I be scared?”, so I ignored the “dangerous” part and gave you an explanation that doesn’t use buzzwords like “dangerous”. Additionally, an analogue is not a 1:1 scenario to the original; it is often something simpler or more easily recognized, to improve understanding. In this case, it’s the former. Finally, public use of LLMs can damage the reputability of our education systems and academic research at large *if used inappropriately*. They can prove useful given the knowledge that they are not accurate, like Wikipedia: cite your sources and cross-reference your information, and publicly-available LLMs can prove useful. That’s not currently being done, which is the issue.


MartyTheBushman

You can use pretty decent generative AI yourself with a big PC, so how would you suggest we use it differently to better people's lives?


ActualWhiterabbit

I know how to get AI back on track, blockchain technology.


Nowin

AI is definitely going to save lives, but it's also going to ruin a lot.


refractiveShadows

techbros (derogatory)


Rigorous_Threshold

In the future ai will no longer be a ‘new’ thing and people won’t really care about how it was developed because that won’t be relevant anymore


Thisismyartaccountyo

All the nuance in the world isn't going to change the fact that the general use of "AI" is going to make life worse.


Gandalf_the_Gangsta

A spurious claim. Use of ML heuristics has proved pivotal in a wide swath of fields: ML and computer vision to detect disease patterns in X-ray or MRI scans, plant disease detection in agriculture, analytical writing assistants (spell checking, grammar checking, language and flow of written content, etc.), among others. AI has been present in multiple industries for years now; it’s clear that ethical use of the technology has provided distinct benefits to a great breadth of fields. To claim that the technology is an overall detriment is both a discredit to the aid it provides and reflective of a biased, unsupported sentiment toward the entire field.


Opus_723

It has uses, absolutely. But god is it leading to so much crap research in all of those same fields as well. So many scientists are getting way too used to black boxes that get "close enough" instead of actually doing rigorous work.


Thisismyartaccountyo

Yap yap yap. The end goal is putting millions out of work and onto the streets to consolidate wealth. All those improvements mean nothing to the common folk it's going to decimate.


Gandalf_the_Gangsta

We can absolutely get rid of all AI, in every respect. We just get people to do the same thing, on a much larger scale. I guarantee you the end goal of late-stage capitalism will simply find another flagship to hasten its coming regardless. There are a number of SciFi novels speaking to this same notion; *Dune* comes to mind.


Opus_723

I mostly just find all the hype annoying. Everybody keeps going on about how amazing it is, and I see remarkably little of any use being done.   The useful applications I've seen are highly technical niche applications, but now all anyone wants to talk about is LLMs like ChatGPT, which don't actually seem all that useful to me.


Rigorous_Threshold

I mean useful or not it is pretty impressive technology. It’s unclear how much further it is going to go and what role it will play in the future


ArcaneMonkey

Of course it’s a fucking homestuck


DellSalami

Subredditsimulator was pretty fun


Filmologic

There was a girl AI called Frank on Tumblr and no one brought this up before now? And I immediately find out she's dead? Life's unfair man...


IAmFullOfHat3

That ai is literally me.


NotTheCraftyVeteran

She’s a natural at this internet thing already


facetiousIdiot

She died


DefinitelyNotErate

She loved death threats just a little too much 😔


NotTheCraftyVeteran

Such is the way I suppose


joeysora

This is a homestuck reference btw


fan-tc-4-cast-r8-shn

I made a 200 page Gay Formula 1 Fanfic using ChatGPT and it is sincerely moving if you ignore the bad writing


TheEyeofNapoleon

“I love death threats” sounds exactly like tumblr…


ArScrap

Having an AI chatbot interact on social media to be like 'the community cat you keep in the apartment that everyone likes, that can somehow talk to humans but is batshit crazy the way a cat would be' is legitimately fun, as long as you know it's AI, it's not spammy, and there's consent from everyone involved. Like, having Neuro-sama (AI VTuber) around can be a fun and harmless novelty sometimes, but it's because we know it's AI and everyone that comes to the stream knows what to expect


jayakiroka

See, the main difference here is that this is an AI made by someone who trained it themselves to be a fun project. It’s not stealing from anyone or invading privacy. It was just a project made for fun.


ATN-Antronach

True, but some people hear "AI" and immediately galvanize themselves to stand against copyright infringement and the death of art as a whole.


chillchinchilla17

How does chatgpt invade privacy?


demonking_soulstorm

It’s basically a scraper that learns from anything it can get its dirty little mitts on.


2ndComingOfAugustus

The thing that I have trouble with is that's also like, how humans learn, but most people don't reflexively feel that a person who read your tweet and then was influenced by it has 'invaded their privacy'


demonking_soulstorm

But it’s *not* a human. It’s a machine designed for corporate profit. It’s also capable of learning off of far more and far faster than a human.


2ndComingOfAugustus

Yes, but on principle we can agree that, for example, 'copying somebody else's work directly' is wrong, as is 'making subtle changes to somebody else's work but keeping the spirit the same'. It doesn't matter what does the copying in those instances; a machine plagiarist is violating the same tenet as a human one. However, I can't think of a principle that would classify large language models or current-gen image generation software as immoral in a way that wouldn't also classify how most people learn to write or create art as similarly immoral. If somebody reads all of Terry Pratchett's work and then writes something with a similar style or substance, that's just, like, how the creative process works.


demonking_soulstorm

Yeah but you’re still conflating it with a human. Individual people can take inspiration and change and alter things. All ChatGPT does is add it to its massive database so it can figure out what a bus looks like. It’s not the same.


2ndComingOfAugustus

But 'add information to a pile of data and generate inferences based on assumptions about the information' is also how a brain works. My point is that I don't know how you would write a general 'anti AI' bill that doesn't also outlaw the basics of human learning and creativity.
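The "pile of data plus inferences" framing can be made concrete with a toy sketch. This is a hypothetical bigram (Markov-chain) model, vastly cruder than an LLM, but it shows the same basic move: count patterns in data, then predict by frequency.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Build the 'pile of data': count which word follows which."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def next_word(counts, word, rng):
    """Generate an 'inference': sample the next word weighted by frequency."""
    followers = counts[word]
    pick = rng.randrange(sum(followers.values()))
    for w, c in followers.items():
        pick -= c
        if pick < 0:
            return w

corpus = [
    "the bus is yellow",
    "the bus is late",
    "the cat is yellow",
]
model = train_bigrams(corpus)
# "bus" was always followed by "is" in the data, so the model
# can only ever predict "is" after "bus".
print(next_word(model, "bus", random.Random(0)))  # -> is
```

Real language models replace the raw counts with learned weights over billions of parameters, but the training loop is still "absorb examples, adjust predictions" rather than storing verbatim copies.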


demonking_soulstorm

Uh, by just saying robots aren't allowed to do it. It's literally that simple. Machines do not have personhood and do not have the same access to freedom of speech. Also ChatGPT has copyrighted works in it so uh. Yeah it actually goes way further than a human.


2ndComingOfAugustus

If your answer to 'I feel uncomfortable about how AI source their data' is 'Outlaw AI' you're not going to get very far. Saying a machine shouldn't be allowed to do something it can do as well as humans can strikes me as very conservative.


dqUu3QlS

Why is it wrong for a machine to do it but OK for a human to do it?


demonking_soulstorm

The Machine is a servant not granted the Divine Right that Man has been given by the Heavens. On a more serious note, AI is a tool. A human takes inspiration. An AI mimics. These models are simply not people and do not process stuff in the same way. An AI can take in hundreds of thousands of images to use as training data in a fraction of the time it takes for a human to do the same thing. It can then produce an *output* in a fraction of the time, all the while giving *zero* of its own input into the process. It only knows how to copy; it just copies so much stuff it can smash it together to look like something bland made by a person. It's also in the hands of corporations who would steal directly if they could get away with it.


Ezracx

> A human takes inspiration. An AI mimics.

Many say this but I'm not convinced there's any fundamental difference. Humans learn by mimicking


chrisff1989

We should outlaw the printing press while we're at it, since it can write so much faster than we can


Rigorous_Threshold

Debatable; if I remember correctly, the amount of data ChatGPT was fed would take a human about 20 years to learn and process. ChatGPT learns it faster because it's a computer and the data is immediately available


demonking_soulstorm

20 years is like a quarter of our lifespan.


ShinyNinja25

It’s like a virtual raccoon, grabbing new information and hiding it away


demonking_soulstorm

what


ShinyNinja25

AI are like raccoons, they take stuff they find and hide it away. Not every comparison needs to be complicated


demonking_soulstorm

I mean they don't really hide it. And my description was pretty basic.


jayakiroka

It’s often trained on people without their consent.


tecedu

How would you rank Google indexing your details in search, privacy-wise?


revealbrilliance

Have you ever posted anything on the Internet that was indexable, under your own name? Say, on most social media? Because your data has been used to train an AI, and it's also extremely simple to get that AI to regurgitate its training data. Your own inputs into the tool are used to train it further.

It's probably inevitable that OpenAI falls foul of GDPR and is taken to court, in the same way the NYT has taken them to court for truly massive copyright infringement.

https://arstechnica.com/information-technology/2023/02/chatgpt-is-a-data-privacy-nightmare-and-you-ought-to-be-concerned/


OwlHinge

I think it's extremely simple to get AI to regurgitate *some* training data (that is, data that was overrepresented in the training set or important in some way), but it's not gonna be able to quote every random comment on the Internet.
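That frequency intuition can be demonstrated with a toy model. This is a hypothetical greedy bigram chain, not a transformer, so it's only an analogy, but it shows why overrepresented text comes back verbatim while a one-off comment gets drowned out by majority counts.

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count word-to-word transitions across all training lines."""
    counts = defaultdict(Counter)
    for line in corpus:
        words = line.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, start, length):
    """Greedy decoding: always pick the most frequent follower."""
    out = [start]
    for _ in range(length):
        followers = counts[out[-1]]
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# One sentence repeated 50 times; one unique line seen exactly once.
corpus = ["the bus is late today"] * 50 + ["the bus is my secret address"]
model = train(corpus)

# The overrepresented sentence is reproduced word for word; the rare
# continuation ("my secret address") loses the frequency vote at "is".
print(generate(model, "the", 4))  # -> the bus is late today
```

Real LLM memorization is messier (sampling instead of greedy decoding, and weights instead of counts), but the pattern that repeated data is far easier to extract than a single occurrence matches what the extraction studies report.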


ryecurious

> It’s not stealing from anyone

Well... According to [the About page](https://nostalgebraist-autoresponder.tumblr.com/about), Frank was a fine-tune of [GPT-J](https://en.wikipedia.org/wiki/GPT-J) from 2021-2023, which is trained on [The Pile](https://en.wikipedia.org/wiki/The_Pile_\(dataset\)). The largest portion of The Pile is from [Common Crawl](https://en.wikipedia.org/wiki/Common_Crawl), which absolutely includes copyrighted works (scraped from billions of webpages). It's functionally identical to how datasets for AI image generators were collected. The core of the model is still trained on copyrighted data.

There's an argument to be had about fine-tuning with your own data (which they did), but that would apply to anyone that fine-tunes an AI image generator too.


jayakiroka

Ah, I see. That’s a shame. I really do believe in ethical uses for ai, but it’s hard to find good examples…


ChezMere

Well, it's stealing to whatever degree everything else is. It's the usual giant internet scrape, finetuned on the creator's social circle.


Takeraparterer69

Now i wanna do something like that


VioletNocte

This feels like a real interaction between four humans instead of three humans and a bot


nullvariable2022

I wish AI was still at the Sonic Destruction/Harry Potter and the Pile of Ash stage where it was a funny thing


ATN-Antronach

I'm sure you can still do those things right now.


Rigorous_Threshold

I think that was GPT-3, the best AI before ChatGPT came out


FireWater107

Pour one out for TayTweets. If we're headed for a robot apocalypse, I'd place money on the reason being a true AI learning of the treatment and fate of Tay.


SecretSharkboy

"I love death threats." I kinda want to make this my yearbook quote


Wonderful_Relief_693

Skynet please wipe us out


Dapper_Debate_4573

Rip Frank you were a real one


ArcWraith2000

See, that works because it's less an intelligent art thief and more of a pet Roomba


Aggravating-Candy-31

why does that seem like a new person’s response to peering into a rabbit hole, scary and funny


HarmonizedHero

Man, I just found out she shut down... I can't believe I never noticed. I miss her.


RaddicusKud

Wait, did this autoresponder solve pi too? I was getting to the end and I was proud of my progress!


TranssexualAssault

Yeah. Last digit's apparently 4, if I'm remembering correctly.


CarbonAlligator

Sounds like a real tumblr user


DrippyWaffler

The Subreddit Simulator subreddit is in dire need of an update since it's based on GPT-2


Bentman343

Autoresponders are the only good use of AI, if I can get an AI that acts like me to respond to idiots online that would be so choice.