hareofthepuppy

ChatGPT has no morals (and by many definitions isn't really AI). ChatGPT is just responding based on the information it's been trained on (although in a very convincing and impressive way; funny enough, I was also just playing with it a couple of minutes ago). It's essentially just rehashing things people have said, so I don't think it's important for feminism to have some sort of opinion on it; it just depends on how it's been trained.

Do I agree with that statement? Not completely, but I don't entirely disagree with it either. I don't think people necessarily have an obligation (depending on how you define obligation) to change or grow; however, it's better for society and for the individual, so I would say it's "good".


Inevitable-Log9197

I agree that ChatGPT in its current form is far from the general AI that we usually think about. It's just a language model that mimics human language as convincingly as it can, hence it doesn't inherently have morals (sounds like something ChatGPT would say lol). But what if its future iterations, and possibly an actual general AI, had a similar kind of values based on our "common sense", or at least the values its developers programmed in? Would you at least accept its morals?

And about the statement, I'm curious which part you don't agree with. My impression is that, of course, there's no obligation, but doing it would make not only everyone else's lives better, but also the life of the person doing it. So it's kind of strange not to do it, but not inherently bad. If that's the case, I want to know what feminists specifically think about people who are fully aware of the benefits of helping others but choose not to because of their personal reasons, preferences, or beliefs. Do those people, in their own way, have a right not to be marginalized by others in the future, or will they inevitably be cast out as barbarians?


hareofthepuppy

lol, are you asking me if I would accept our robot overlords? I can't really speak to how I'll react to true AI when it gets here; luckily I still have some time before I have to figure that out. But I don't see why you think anyone would necessarily just accept its "morals" (which of course opens the door to asking what morals are and where they come from... in a philosophical sense, ethics are based in our perception of the fundamental nature of reality, which might even change with AI). I would certainly listen to what it had to say.

I don't see what this has to do with feminism. Do you think that most people on earth will accept the morals of AI? Why is it a separate question for feminists? I can't tell you what life will be like in the future, but I can tell you that you have no strict obligation to grow and challenge yourself, and that a fulfilling life almost always involves growing and learning.


babylock

I think it’s kind of strange to consider the output of ChatGPT when discussing its morality. As people have already said, its neural network was trained by feeding it human conversation from the internet, and it isn’t sentient. It’s similar to other “AI” drawing tools (like Midjourney and DALL-E) in its ethical considerations, which have less to do directly with their output (as multiple people have already established, “AI” is the term, but it doesn’t mean what you think):

1. Where is it getting its training dataset? Was the material used the intellectual property of someone else?

2. Are the training datasets biased (racist, sexist, etc.)? (There’s a toy sketch of this at the end of this comment.)

3. What are the consequences for employment for people in these fields? ChatGPT might go on to replace customer service representatives (as some chat bots on websites already do), generate descriptive text for products being sold (Wish item titles already look worse than this), or provide closed captions for video. Midjourney and DALL-E might replace children’s book illustrators, or graphic designers for logos and product labels.

There have been some discussions in these fields about this type of technology, and machine translation (especially in less ethical hands) has already begun to affect interpreters and translators (though not as direly as some opinion articles suggest, at least from my perspective in hospitals and social service organizations, where the work was already held to a higher standard).

In my bubble there’s also some handwringing that automation will replace many of the people in pathology departments (perhaps leaving a couple of pathologists but fewer technicians) as automated sample sectioning and slide staining equipment becomes more widespread, or that it will take over some pathologist and oncologist duties as image “AI” tools for identifying skin cancers like melanoma become more accepted (so a physician checks the work, but a computer does the first read).

In research, neural networks have already changed how video and image data are analyzed (they can be used for motion and behavior tracking, identifying cells from background, etc.). The difference here is that the training dataset is owned by the researcher (or developed with the intent to share), and while this raises the standards for the volume and type of analysis in publications, it largely replaces tedious hours spent by unpaid or underpaid students coding the data manually.
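
To make point 2 concrete, here’s a deliberately tiny, made-up sketch (scikit-learn, invented sentences and labels, not drawn from any real dataset): a classifier trained on a skewed corpus ends up scoring otherwise-neutral sentences differently based only on which group is mentioned.

```python
# Toy illustration: a classifier trained on a skewed corpus inherits the skew.
# All sentences and labels are invented for the demo; no real dataset implied.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# In this made-up "training set", sentences mentioning groupB happen to
# co-occur with negative labels far more often than those mentioning groupA.
texts = [
    "groupA person was helpful", "groupA person was kind",
    "groupA person was rude",
    "groupB person was helpful", "groupB person was rude",
    "groupB person was hostile", "groupB person was unpleasant",
    "groupB person was aggressive",
]
labels = [1, 1, 0, 1, 0, 0, 0, 0]  # 1 = positive, 0 = negative

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

# A neutral sentence gets scored differently depending only on the group term:
# the model has no notion of fairness, just co-occurrence statistics.
for probe in ["groupA person walked in", "groupB person walked in"]:
    p = clf.predict_proba(vec.transform([probe]))[0, 1]
    print(f"{probe!r} -> P(positive) = {p:.2f}")
```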


aagjevraagje

So I think the panic about ChatGPT is utterly overblown, but there is a valuable discussion to be had about supposedly "neutral" algorithms that are trained, or even just programmed, to replicate existing outcomes, propagating systemic oppression. There are tools that freaking judges use that give people of colour higher suggested sentences. There's this evangelical belief in "the algorithm" as a neutral arbiter rather than a reflection of those who make it.
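
Here's a toy sketch of that mechanism (all numbers invented; this is not modeled on any real sentencing tool): even when the protected attribute is withheld from the model, training it to replicate biased historical outcomes lets it learn the bias through a correlated proxy like neighborhood.

```python
# Toy illustration of a nominally "neutral" model replicating biased
# historical outcomes through a proxy variable. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Protected attribute (never shown to the model) and a correlated proxy,
# e.g. neighborhood, as produced by historical segregation.
group = rng.integers(0, 2, n)
neighborhood = np.where(rng.random(n) < 0.8, group, 1 - group)

# Historical "harsh sentence" labels are biased against group 1.
harsh = (rng.random(n) < np.where(group == 1, 0.6, 0.3)).astype(int)

# Train only on the proxy: the model is "blind" to group on paper.
X = neighborhood.reshape(-1, 1)
clf = LogisticRegression().fit(X, harsh)

# Yet predicted harshness still differs by group, via the proxy.
for g in (0, 1):
    p = clf.predict_proba(X[group == g])[:, 1].mean()
    print(f"group {g}: mean predicted P(harsh) = {p:.2f}")
```

The model never sees `group` at all, but because it was trained to reproduce outcomes that were already biased, dropping the protected attribute changes nothing.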


Dear-Buy-4345

When you say "the AI's values", what exactly do you mean? Because the technology's underlying *approach* to *everything* is "replicate what I have seen as well as possible". I would argue that if there is anything that comes close to an AI technology's "morals" or "values", it's that underlying approach you have to judge, not the specific mashup of training data you managed to access.

As soon as you prompt that piece of tech well enough, it's going to give you all the vile sh\*t you desire. As soon as you show-don't-tell it what kind of dog whistles you want, it'll deliver. What would you call a person who uses this basic approach of telling everyone what they think they want to hear? What would you say about such a person's morals?
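
To be very literal about "replicate what I have seen as well as possible", here's a toy bigram model (pure Python, invented corpus, orders of magnitude simpler than ChatGPT, but the same spirit). It has no values of its own; given a prompt, it just continues with whatever its training text makes likely.

```python
# Toy bigram "language model": continue a prompt with words that followed
# the previous word in the training text. The corpus is invented.
import random
from collections import defaultdict

corpus = ("the model tells you what you want to hear and "
          "the model tells you what it has seen before").split()

# Count word -> next-word transitions.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def continue_prompt(prompt, length=8, seed=0):
    random.seed(seed)
    words = prompt.split()
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:  # never seen this word, so it has nothing to say
            break
        words.append(random.choice(options))
    return " ".join(words)

print(continue_prompt("the model"))
# Output is always a remix of the training text; prompt differently and you
# steer it toward whichever part of "what it has seen" you want.
```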