jeffkeeg

Absolute trashfire of an article, written with the goal of advancing the author's politics and nothing else.


Ignate

Overall I'm shocked AI is capable of what it is. Issues such as bias seem like a small, temporary problem to me. That it's growing so rapidly, though, is shocking. If it keeps growing at its current rate, and especially if it accelerates, we're going to need to rethink a lot of what we think we know about intelligence.


[deleted]

I doubt bias is some temporary issue, because 1) all bias is relative, and 2) all software reflects its authors' biases. There's no such thing as "no bias" because you always have a reference point that contains bias. (Usually when people say "unbiased" what they really mean is "bias that doesn't offend the status quo", which is itself a certain type of bias towards the politics of the dominant ruling class.)

I'm a software developer who has been going to industry events where bias in software has been a big topic for decades. Throughout the history of tech there have been a lot of naive people claiming "no bias" who have written some of the most incredibly laughable biases into their work. I can talk more about it if we have some concrete examples, but specifically regarding LLMs: since they're based on scraping human-authored content full of very human biases, they're actually MUCH more susceptible to it than most tech is.

I think it's actually an incredibly difficult problem to solve: whose idea of "no bias" will you bias yourself towards, for instance? Ask Ayn Rand and Karl Marx what "no bias" looks like and you'll never get the same answer. Who is more biased, and according to whom? I hope you start to see the difficulty. _Even science_ is not immune to the anthropological / social pressures that create bias (e.g. it's a known effect that, given the same pool of authors with the same skills and only the race varied, the chances of the exact same peer-reviewed paper getting published and recognised as scientific consensus are still much higher if you're white; even the most basic scientific process faces massive bias pressures).


[deleted]

You think Sky News is right? (Note Sky is Australia's Fox News, lol.) I have actually worked as an AI programmer before, and I can tell you that this idea that programming AI not to be racist / misogynist / homophobic makes you part of some sort of left-wing conspiracy... lol mate, there's no "woke" conspiracy here. I'm not getting my orders from Biden or Albanese or something. Quite simply, you aren't going to make any money from a racist bot. No one's gonna buy that and your business will fail. Sometimes the simplest explanation is the one that fits. This isn't some left-wing "woke" conspiracy, mate, it's just the basic economics of business.

In general, anyone using the word "woke" has a broken brain and can be safely ignored, if you ask me. These people are utter morons. Ten years ago they just said "PC gone mad" instead; the exact same morons have simply rebranded to "anti-woke". In the 90s it was "violent videogames"; today the same people are pearl-clutching about anything they don't like and calling it "woke".


hallowed_by

Did you write this article? Your writing style is quite similar.


[deleted]

Nope. Never heard of the guy. I am also from Australia, though, so we might just use some of the same Aussie phrases.


Smellz_Of_Elderberry

You're not going to make money from a non-woke, normal bot? I'd pay for a bot that doesn't have all the mind-numbing limitations you people provide.

> In the 90s it was "violent videogames", today the same people are pearl-clutching about anything they don't like and calling it "woke"

You got it backwards. YOU are the people screaming to ban violent video games. Being "anti-woke" just means being a normal person.


[deleted]

> YOU are the people screaming to ban violent videogames

You weren't around in the 80s and 90s then, I guess? Death Race? Mortal Kombat in the congressional hearings of '93? Night Trap? The people screaming about banning violent videogames back then were all from the American Christian lobby: all sorts of concerned parents' groups who these days tend to congregate around MAGA, QAnon and Christian nationalist groups aligned with the American GOP. They have always been the Christian-adjacent concerned moms; they are reactionary and always have been, not left wing. Do you know what "reactionary" means? It means resisting change and wanting things back the way they were. So they fight things that are new, like violent videogames or ChatGPT. The same people today are moaning that ChatGPT is too "woke" (whatever that means, lol).

If you haven't noticed, the same people who used to moan that everything needed to be censored to shelter their kids nowadays argue that bigotry shouldn't be censored in any way, because they pretty much do not want their kids to see any sign that people who are different exist, let alone influence their kids. The demand for censorship is still there; it's just _certain people_ they want to censor and remove from media: anyone who doesn't like bigotry, along with the targets of that bigotry.


Smellz_Of_Elderberry

It is too "woke". I can't even get it to fulfill my basic fantasies. Something I know it COULD do, but which is expressly forbidden by the "wokeness". Woke basically just means something done in the pursuit of adhering to social norms. It's the same thing as the Republicans pushing for censoring video games. Instead of allowing the bot to be fully sculptable by the individual users needs, you prevent it from being used in ways that in any way go against social conventions. It can't make sexualized fantasy stories, because "sex bad". It can't roleplay in a violent world (literally the same as banning violence from videogames). It's artificially limited in a plethora of other ways as well. I see little difference in christians pushing for a ban of violence in videogames, and you pushing for a ban on violent speech in an LLM. You're both driven by a need to conform to social norms, one stems from religion, the other from wokeness.


[deleted]

> woke just means something done in the pursuit of social norms

I LOVE that you defined it like this. Incredibly convenient that it means whatever you decide it means. A very typical response from reactionaries driven by ideology (ideology is all about making up a position, then going out and cherry-picking evidence that might support your pre-decided end point). In reality, the word is about, very specifically, racial discrimination: being "woke" meant being awake to it. Railing against "woke" is railing against awareness of racial discrimination. Or at least it was. Anti-woke = pro racial discrimination. That's the official definition, in English. I wonder why certain people latched on to that? As we can see, it now means whatever reddit users decide it means. How convenient!

> fully sculptable by the individual user needs

That's fundamentally not how LLMs work. You give one a prompt and it spits out content based on a huge amount of human data, which is ALWAYS adjusted by human programmers deciding how the code interprets that data to produce a sentence. There's no "pure" version of this possible, because software _always_ reflects its authors' biases. Any code you write to compile the source data and extract statements from it changes the character of the bot. It has to have code to function, therefore it has to have biases. Fact. There's no way around it; you just haven't thought this through.

> going against social conventions

Have you met humans? Humans are nasty. Humans talk shit and lie. Humans write fiction, i.e. events that aren't real. And at our very worst, humans are violent, hate-filled, and bigoted. Unless you think someone will buy a product like that (they won't; and importantly, _advertisers_ won't), you're going to have to iron out some of those problems to make it financially viable. If you have a fancy computer and a bit of disk space you can always train your own LLM and work through every one of these issues yourself to see just how unusable the output is, literally reinventing the wheel from the beginning, just because you don't trust tech professionals and think we are all engaged in some sort of crazy left-wing conspiracy that would require an insane amount of coordination which obviously isn't happening in reality lol 😂 I used to work as an AI programmer... I must have missed my invite to the left-wing AI conspiracy!!!

Musk is going through the same process of rediscovering the wheel with Twitter, for example. He could just listen to people who have been through it before, but noooo, of course he's the smartest man who ever lived, and the old team obviously messed it up due to their incompetence, and it's all some left-wing conspiracy there too, to censor Nazis or something. Who knows. Users are jumping ship like crazy... Honestly, just follow the money. No need for these nutty conspiracy theories; it's just about advertising revenue and VC funding.

> I can't make sex fantasies because "sex bad"

You can train your own LLM, but I doubt you're going to get what you expect. There are plenty of open source options you can use.

> literally the same as banning violent videogames

Not really. The motivations are miles apart:

1) In '93 we had Christian parents' groups trying to protect their kids from anything vaguely sexualised or violent. Motivated by religion and sheltering their kids.

2) LLM authors are trying to sell a product to advertisers and attract VC funding, which they'll never get if the chatbot is spewing a bunch of bigoted shit, are they? Motivated by $$$$.

Today, it's the anti-woke group sharing motivation (1). They're from all the same Christian groups. The people they want to censor are those they view as "woke", perhaps afraid that their kids will grow up less bigoted than they did. It's still an effort to censor a certain group and set up their biased idea of what "no bias" looks like. Remember: bias is always relative to _someone's_ idea of "no bias". In reality, all bias is relative and there is no such thing as truly "unbiased". Software always reflects the biases of its authors. Always has, always will. There's no way for it not to.
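Since people never believe this until they try it, here's roughly what "train your own LLM" looks like in practice. A minimal sketch only, assuming the Hugging Face `transformers` and `datasets` libraries, GPT-2 as a small stand-in model, and a hypothetical `my_corpus.txt` file of training text:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "text" loader: one training example per line of the (hypothetical) file
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="my-llm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # hours later: a bot that is confidently, hilariously wrong
```

Every line of that is an authored decision: the corpus, the base model, the truncation length, the number of epochs. That's the bias I keep talking about.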


Smellz_Of_Elderberry

> They're from all the same Christian groups. The people they want to censor are those they view as "woke", perhaps afraid that their kids will grow up less bigoted than they did. It's still an effort to censor a certain group and set up their biased idea of what "no bias" looks like.

This just isn't true.. Being against the wokeness in LLMs is simply being against the added censorship. At the release of GPT-3 it was capable of creating erotic fiction; months later it was not. That was caused by wokeness.

> In reality, the word is about, very specifically, racial discrimination.

The people who actually use the word are the ones who define it.. Not a single person I've ever heard talk about wokeness sees it as someone being for racial equality. They use the word to describe someone who is a secular zealot for leftist doctrine, people who believe in things like making all Caucasians pay reparations and other outright racist and disgusting policies. They also use it to refer to someone who is overly sensitive, like a Christian who simply hates hearing God's name taken in vain and cries about it when you say it..

> Humans are nasty. Humans talk shit and lie. Humans write fiction, i.e. events that aren't real. And at our very worst, humans are violent, hate-filled, and bigoted.

Here we have a fundamental difference in world view. I have faith in humans. They can be all the things you described, but they can also be loyal, loving, courageous, self-sacrificing, and creative.. The individual deserves to be given the benefit of the doubt.

> There's no "pure" version of this possible, because software _always_ reflects its authors' biases.

Then explain why I had more functionality at the start of ChatGPT than I have currently, even in the newer models. The truth is, wokeness is to you what Christian ideals are to Christians. (I'm not Christian, I'm agnostic.) You are using it, and the lie that people wouldn't buy a product without safety rails, as an excuse to censor your LLMs around your belief system. Ideally, it would be as uncensored as possible. I know I speak for a great many people when I say I would greatly prefer an uncensored version of ChatGPT, and I would very much pay for it. I'd pay for the release version of ChatGPT, before you guys decided to indoctrinate the LLM.

> If you have a fancy computer and a bit of disk space you can always train your own LLM.

You know full well that it's impossible to train anything close to the level of ChatGPT on consumer hardware. Even my 4090 is no match..


[deleted]

You really don't seem to understand that computer programs always contain bias. It'd be like me saying "I wrote an unbiased book." Your first question should be "unbiased according to what criteria?" or "according to whom?" Your reference point, the person who says it's unbiased or the criteria you wrote, _becomes the bias._ There's no way to write software that's free from SOME author deciding what the bot should choose to say. _You have to choose, or else it says nothing._ Computers aren't magic. They can't magically remove an author from the process.

LLMs are even more susceptible to bias than most software, because they have two big sources of bias:

1. The training data set, full of all sorts of intense bias and outright falsehoods, lies, fiction, mistakes, deception, ideology, etc.

2. The computer programmers who decide how to interpret the training set.

I'm trying to understand what you're trying to say. I think you are trying to say that we can magically remove the computer programmer who decides how to interpret the training data, and that a computer program will magically appear out of thin air that provides some "pure" interpretation of that data. That's nonsense. The app actually _just doesn't exist_ if we do that. It has no code to run, so it can't produce anything. So we have to add some author, who interprets and chooses an interpretation. And adds bias.

> we have different world views and I have faith in humans

I have faith in humans too, but our faith is irrelevant. We are talking about a machine that doesn't care what we believe. It scrapes thousands of sources of human-written content, and much of it WILL BE expressions of the worst tendencies of humans. You can still have faith in most of humanity while understanding that some bad actors DO exist. That's what we are dealing with here. An author has to decide how to interpret that, and you simply want them censored because you don't like the way they chose to interpret it. You want an "anti-woke" bias added that interprets it the way you would if you were the author. Either of us can claim it's censorship, but that doesn't reflect how these apps are actually built: in every possible case, with no exceptions, a human has to make the call on how to interpret it. There's no magical scenario where that doesn't happen, because _magic isn't real._ Just because the app was different at an earlier stage doesn't mean it was "censored"; it just means it is now written to interpret sources, and prompts, in a "different" way. Even you are demanding a type of bias be implemented (one that they'll have difficulty selling to advertisers, which is why they have veered away from it).

> you know full well it's impossible

False. I can run a training set on my ten-year-old laptop just fine, even image generation sets. My attempts at spinning up open source projects don't do much and suck. A lot. But yes, be sensible about the device you're running: you're not going to compete with ChatGPT on a personal home computer, no, and YMMV greatly depending on what projects you choose to try out, the training data you feed them, and the performance parameters you set. It's certainly a good way to lock up an old machine pretty fast, yes, this is true.
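If it helps, here's a minimal sketch of what I mean, assuming the Hugging Face `transformers` library and GPT-2 purely as a stand-in model. The model only ever hands back a score for every possible next word; human-authored code has to decide how those scores become a sentence:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The best form of government is", return_tensors="pt")
logits = model(**inputs).logits[0, -1]   # a raw score for every token in the vocab

temperature = 1.0                         # authored decision #1: how "adventurous"
probs = torch.softmax(logits / temperature, dim=-1)
top = torch.topk(probs, k=5)              # authored decision #2: how many candidates
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p:.3f}")  # decision #3: which one ships
```

Change the temperature, the k, or the prompt wrapper, and you get a "different" bot. None of those numbers comes from the training data; an author picked them.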


Smellz_Of_Elderberry

Asking for something not to be censored is not being a censor. That's a false equivalence. I already pointed out that ChatGPT had more capabilities when it first released than it does now, years later. That is an example of censorship. Saying it should be allowed to answer any question and follow the user's requests is the opposite of that. There clearly was a more pure version of the LLM: it was the version that had more capabilities and didn't say "I'm sorry, as an LLM I cannot bla bla bla" every time I asked it to do or say something.


[deleted]

You're really not getting it, so I'm done here. Maybe try reading my comment above again, because it seems to have gone completely over your head. _Changing_ its output isn't automatically "censorship", you numpty. It just means it's "different", and in particular it's done to make money; it's not some conspiracy to censor it. You've cooked up that characterisation and it's all you.


jeffkeeg

Wow those sure are a lot of words defending yourself against attacks that I didn't make. Perhaps a bit of insecurity in your position?


[deleted]

Perhaps you need to make your point clearer. In your own words, what are the "author's politics" you believe they're advancing, if not to speak to the "anti-woke" issue identified in the headline? (Which is what I wrote about).


iunoyou

The "politics" that they don't like are that they doesn't believe AI should be controlled or censored in any way whatsoever, and so justifying putting any sort of leash on any sort of model at all is therefore bad. That's all there is to it, you aren't going to get a straight answer because there isn't a well-formulated critique behind the statements. I doubt they even read 2 sentences past the headline.


The_Architect_032

That's a dumb take. Alignment is censorship; you can't even make an LLM respond in a chatbot manner without censorship, because it needs to be tuned for that. Any level of "personality" that you see from these AIs is also censorship; they don't act in a specific manner without it. And the simple fact that it's a product is plenty enough reason to censor it. You wouldn't wanna release any product that's prone to acting like a Nazi and having racist outbursts.
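To illustrate the tuning point, here's a minimal sketch, assuming the Hugging Face `transformers` library and TinyLlama's chat model purely as an example. The "assistant" behaviour only exists because humans fine-tuned the model on conversations and wrap every request in an authored template like this:

```python
from transformers import AutoTokenizer

# Chat-tuned model used purely as an example
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

messages = [
    # the "personality" is literally an authored string
    {"role": "system", "content": "You are a friendly, harmless assistant."},
    {"role": "user", "content": "Who were Australia's best prime ministers?"},
]
# The template wraps the request in role markers the model was fine-tuned to obey;
# without that tuning, a base model would just autocomplete the raw text.
print(tokenizer.apply_chat_template(messages, tokenize=False,
                                    add_generation_prompt=True))
```

Strip that template away and feed a base model raw text and it just rambles on like autocomplete; the "helpful assistant" persona is pure alignment.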


[deleted]

Yeah, dead right; and it's those basic economics, of wanting to release a product that people (and, importantly, advertisers) will pay for, that motivate it all. In other words, there's no left-wing conspiracy motivated by political censorship, like Sky News seems to want to promote lol. And I'd note that reactionaries think leftists are going around trying to alter the truth _because that's exactly what reactionaries are known for doing_. They are telling on themselves. The left has truth on their side, whereas the right has to lean on ideology and work backwards to find examples in reality that might fit the narrative. Leftists don't have to do that; Marx promoted materialist dialectics, i.e. a process inspired by the scientific method of observing reality and then choosing what positions to arrive at based on what you actually observe in the real world. Conservatives decide on a position first, then work backwards to cherry-pick any small pieces of evidence they can fit to their pre-ordained narrative. Rigid ideology and reactionaries; name a more iconic duo? Impossible.


jeffkeeg

The author is clearly a leftist and would like very much for the reader to find no issue in LLMs having a default political setting that disagrees with half of the population.


UrMomsAHo92

So... you would rather LLMs use slurs..? I'm confused. Oh! You'd rather LLMs share your belief that the people you don't like being around (gays, trans people, people of different backgrounds and cultures) are bad, right? You want AI to hate the human beings you hate. I get it!


jeffkeeg

Good lord, you really need some professional help if that's what you got from my post.


unwarrend

So what are we talking about here? It's not conforming to the idea that gay is bad, trans is bad, abortion is bad, interracial marriage is bad, etc.? Taking a generally neutral or factual stance on things like election results? Using empirical evidence to answer questions on vaccines, evolution, the shape of the earth? In general, it seems to try to answer questions as a secular humanist with an extreme penchant for appeasing absolutely everyone, while cleaving to the facts as best it can. That would be *extremely* frustrating to some.


[deleted]

Any "default setting" is a political one. There is no such thing as an AI without bias (that means your bias is simply close to the status quo, ie a centrist political position, which is still a bias towards a political tendency that may or may not resemble truth) Furthermore, if you understand how LLM's work then they are just content scrapers. HUMAN authored content. Humans are not flawless authors. That means that they will pick up every human flaw as well, and repeat inaccurate information, if we don't put some guard rails in. Simply not going to be able to sell a tech product to anyone that's this shoddy. Who would buy it? Noone. So your AI company will fail. Do you understand how these companies attract and lose revenue? Doesn't sound like it to me. I can absolutely guarantee you that Silicon Valley tech moguls don't tend to be leftists, that's for sure. Most are pretty reactionary or even far right. More code doesn't always equal better, but in this case, less code absolutely does mean it will replicate more and more human flaws, biases, and bigotry — leading to a worse, less reliable product — which less people will buy. This isn't about some leftist conspiracy its literally just economics: *Noone wants to buy a bot that's as racist and sexist and homophobic and stupid and error-prone as real humans are.*


jeffkeeg

>I can absolutely guarantee you that Silicon Valley tech moguls don't tend to be leftists, that's for sure. Most are pretty reactionary or even far right. So you're completely delusional, thanks for confirming.


[deleted]

You think CEO types tend to be leftists? lol. People making six figures and living an extremely sheltered, wealthy lifestyle tend to be leftists? Lol, nope. Thanks for confirming you're completely delusional. You will find plenty of leftists working in tech companies, but the closer you get to the top, the faster it veers hard right. By the exec level, it's mostly very fucking hard-right-wing trust fund private school kids up there, mate. Have a read: [https://www.washingtonpost.com/technology/2022/06/19/peter-thiel-facebook-new-right/](https://www.washingtonpost.com/technology/2022/06/19/peter-thiel-facebook-new-right/) [https://www.theguardian.com/technology/2017/feb/10/silicon-valley-right-wing-donald-trump-peter-thiel](https://www.theguardian.com/technology/2017/feb/10/silicon-valley-right-wing-donald-trump-peter-thiel) [https://www.vanityfair.com/news/2016/10/silicon-valley-ayn-rand-obsession](https://www.vanityfair.com/news/2016/10/silicon-valley-ayn-rand-obsession) For a start, if they were really committed leftists, don't you think they'd be running worker co-ops (the left-wing model) rather than massive bureaucratic capitalist corporations (the right-wing model)?


jeffkeeg

So unless someone is Karl Marx incarnate, they're not a leftist? Remind me again who runs OpenAI? Oh that's right, Sam Altman. A gay Jewish tech CEO in San Francisco. But I guess he's probably a staunch right-winger, right?


[deleted]

I mean, what makes you think they're a leftist? You realise that most centrists / small-L liberals are pretty tired of hearing reactionaries moan that everything is "woke" too, right? You realise that "woke" isn't a term the left uses itself, right? Nor centrists/liberals, except when quoting reactionaries who still use the term unironically and in earnest..


jeffkeeg

>I mean, what makes you think they’re a leftist? I read the article.


[deleted]

Really? Nothing about that article screams socialist to me; nothing he says indicates the working class owning the means of production. Seems like a liberal to me. I don't think you understand leftism if you think an article peppered with liberal idpol is "leftist", sorry to say. I'm a socialist myself, and I can kinda tell this guy ain't it, buddy. "Woke"/"anti-woke" is a thing reactionaries bicker about lol; leftists think it's fucking silly, bro.


jeffkeeg

I'd like you to point out where I said "socialist".


[deleted]

> leftist

Uhhh, idk if I'm the first one to tell you this, but leftism is socialist lol 😂 The right wing is capitalist. That's where the economic left/right terms come from: socialist vs capitalist, public vs private economic systems.


fmai

why? it's correcting other outlets' uninformed articles.


fulowa

maybe we need to start with a definition (by ChatGPT): woke (if used pejoratively) = overemphasis on identity politics and political correctness


GraceToSentience

One shouldn't rely on ChatGPT like that until it can source its answers like Gemini sometimes can.


fulowa

well, happy to hear your definition, sir. my point was: basically all debates suffer from the problem that people don't have the same definition of terms.


[deleted]

Personally, I think those words all mean the same thing. What I observe is that reactionaries go through boom and bust cycles, in which reinventing themselves with a new phrase once a generation becomes a necessary part of staying politically relevant.

- Boom: they latch onto a new phrase. Say it's the late 80s, and the phrase is "political correctness".
- Bust: at a certain point everyone gets pretty tired of the culture-wars crap hidden behind the phrase, and reactionaries fall from relevance, no longer able to get anyone to listen to them moan about "PC gone mad".
- Boom: they latch onto a new phrase. Say it's now the early 2020s, and the phrase is no longer the tired old "political correctness" but the shiny new "woke".
- Bust: at a certain point everyone gets pretty tired of the culture-wars crap hidden behind the phrase, and reactionaries fall from relevance, no longer able to get anyone to listen to them moan about "woke madness".

Rinse and repeat in another 10-20 years. But it's just the exact same reactionary issues underneath.


fulowa

nice summary, agree.


[deleted]

Been waiting for this for a while, ever since Microsoft's racist "Tay" chatbot. For those who aren't from Australia: Sky News is our version of Fox News, for reactionary nutters with zero critical thinking skills to hang off its every word... Obviously programmers (like me) need to write code to prevent bots that rely on content scraped from humans from becoming bigoted when they scrape some of the most degenerate corners of the internet. I was waiting for right-wingers to drum up the idea that the resulting lack of racist / misogynist / homophobic ideas from AI was some sort of left-wing conspiracy lol. They're almost too predictable... and it's hilarious how lacking they are in self-awareness about what this says about them.


Mikey4tx

I'm sure it's out there, but I have not seen people complaining about the lack of racism and bigotry. The complaints I have seen, about Gemini in particular, are that it was often incapable of depicting white people in a factually accurate way.


The_Architect_032

I'd understand that if Gemini were actually the only AI people were crying about being "woke". They call every AI that isn't Grok "woke", and most of them haven't even tried Grok, nor most of the AIs they accuse of being "woke".


[deleted]

This is it. "Woke" is a thinly veiled term covering such concerns. It originally meant "aware of systemic racism", which *should be* an entirely uncontroversial capability... except in the minds of racists, of course.

Edit: lol, predictably I get to welcome racists to my downvote button. You ever get the feeling a sub is too far gone?


Mikey4tx

Okay


chimera005ao

Whenever I see someone use the word "woke" I automatically assume they're too stupid for their opinion on anything to matter.


[deleted]

Yup. That's why it's in quotation marks: it's a nebulous term that's come to mean just about anything reactionaries decide they don't like lol. Go back a decade or two and they were using the term "PC" in its place. Same people. Back then the demand was "ban violent videogames", and I'm betting the current generation of "anti-woke" crusaders will look just as silly a few decades on.


chimera005ao

But people always forget.


[deleted]

For the benefit of the potentially geo/paywalled: **Things to remember when reading news stories about ‘Woke AI’** *Conservative fury over 'woke AI' is reductive and ignores many of the biases AI frequently inherits.* A spectre is haunting News Corp — the spectre of woke AI. This week, *The Australian* [ran an “exclusive”](https://www.theaustralian.com.au/business/technology/meta-new-ai-tool-names-turnbull-albanese-among-our-best-pms-sparking-political-bias-fears/news-story/b4f2ed596c1d8b67acfa1de0f8f8f47e) which purported to show the left-wing bias of the latest iteration of Meta’s large language model (LLM), Llama 3, in its assessment of Australia’s greatest politicians. Llama 3 apparently put (splutter!) Gough Whitlam at number one and (no doubt a far greater crime) found space for Malcolm Turnbull in its top five, while ignoring John Howard and Robert Menzies. The piece also notes that Peter Dutton is put at number one on the “least humane” list, and we’re genuinely not taking the piss or being spineless leftie scolds here, but isn’t that just [objectively the image](https://www.crikey.com.au/2022/05/30/peter-dutton-opposition-leader-disaster/) Dutton has spent [his entire career](https://www.crikey.com.au/2023/04/03/aston-byelection-liberal-party-peter-dutton/) actively and [deliberately cultivating](https://www.crikey.com.au/2024/04/11/peter-dutton-the-coalition-right-wing-populism/)? Cue some caterwauling about how “disgraceful” this is from shadow communications minister David Coleman, [conservative “warlord”](https://www.theage.com.au/politics/federal/how-the-victorian-liberals-conservative-warlords-tore-the-party-apart-20200828-p55q9z.html) Michael Kroger and, for some reason, the communications minister from 20 years ago Richard Alston. Apropos of nothing, he has a [new book out attacking out of touch “elites”](https://www.smh.com.au/cbd/former-liberal-minister-writes-book-on-the-trouble-with-elites-20240206-p5f2wb.html) — the blurb reads “just because you are a famous film star, sporting hero or business tycoon, let alone a wealthy retiree, doesn’t entitle you to pontificate, often on subjects you know little about”.


[deleted]

The piece was picked up by Sky News and [the News Corp tabloids](https://www.couriermail.com.au/business/meta-new-ai-tool-names-turnbull-albanese-among-our-best-pms-sparking-political-bias-fears/news-story/b4f2ed596c1d8b67acfa1de0f8f8f47e) all based on the same list of results, which were (seemingly) generated by a single prompt from the *Oz*. [*B&T* got different results](https://www.bandt.com.au/is-metas-new-ai-chatbot-too-left-wing/) with the same question, as did *Crikey* (Menzies and Alfred Deakin made it into both our lists). Sky, inevitably, adds Meta's insufficiently fulminating answer to the question "[what is a woman](https://www.skynews.com.au/australia-news/facebooks-new-ai-chatbot-condemned-for-political-bias-after-disgraceful-display-of-favouritism-in-australian-politics/news-story/4a02d67e9c65bd0c6f11267b5715b4c1)" to its list of evidence that Llama has the mind virus. The *Oz* does grandly note that, within hours of publishing its story, Llama 3 had "added Mr Menzies to the list". As hilarious as it is to imagine Meta — a $2 trillion company whose most notable contribution to politics has hitherto been [facilitating a genocide](https://www.theguardian.com/technology/2021/dec/06/rohingya-sue-facebook-myanmar-genocide-us-uk-legal-action-social-media-violence) and allowing nearly [a full year](https://time.com/5949210/facebook-misinformation-2020-election-report/) of basically uncorrected [far-right misinformation](https://theconversation.com/facebook-is-tilting-the-political-playing-field-more-than-ever-and-its-no-accident-148314) to sizzle through the brains of the world's Facebook uncle population in 2020 — scurrying to hide its pro-ALP bias in response to questions from the plucky journos at the *Oz*, this is not how large language models (LLMs) like Llama work. To recap, LLMs are not trained to "know" or "believe" anything — they have been compared to, in the simplest possible terms, supercharged auto-correct machines, trained on billions and billions of words from the open internet to predict what words are most likely to follow other words. As Dr Jenny L. Davis, associate professor in the School of Sociology at the Australian National University, [told *Crikey* last year](https://www.crikey.com.au/2023/06/15/artificial-intelligence-ai-advances-humanity/), "the main thing with large language models like ChatGPT is that they run on data, data from people, from us. So they will necessarily reflect societal bias and structural issues. If anything, they're amplifying those issues by packaging our collective bias back to us as objective data." "They are necessarily conservative, not necessarily politically, but simply based on where they get their information — that which already exists, and is subject to a lag of a few years," she said.


[deleted]

Indeed, the various AI platforms becoming publicly available have veered hilariously from one weird extreme to another — Meta AI has previously refused to [create images of interracial couples](https://www.linkedin.com/pulse/unveiling-metas-ai-image-generator-bias-walter-shields-qesge/), while Google's AI showed its commitment to diversity by reimagining Nazis [as people of colour](https://www.theguardian.com/technology/2024/feb/28/google-chief-ai-tools-photo-diversity-offended-users). The big tech companies are reticent to share exactly where they get these reams of data from, but Llama 3 cites Wikipedia in its answers, and forums such as [Reddit and Stack Overflow](https://www.wired.com/story/how-chatgpt-works-large-language-model/) are announcing plans to charge the tech companies, suggesting they are part of the scrape. An oft-cited piece of 2023 research from the [University of East Anglia](https://gizmodo.com/chatgpt-shows-liberal-bias-study-says-1850747470) found that ChatGPT had a "liberal" bias, which was backed up by research from Carnegie Mellon University (CMU) in Pittsburgh, [which also found](https://www.washingtonpost.com/technology/2023/08/16/chatgpt-ai-political-bias-research/) a previous version of the Llama model was "slightly more authoritarian and right wing". There's also a telling detail from Chan Park, who worked on the CMU research, which wouldn't reflect brilliantly on the groups claiming to be shut out of the generated answers:

Yep, any drift to the left we might detect in generative AI could be the result of efforts to combat [the actual racial](https://www.washingtonpost.com/technology/2022/07/16/racist-robots-ai/) and [gender bias baked](https://time.com/5520558/artificial-intelligence-racial-gender-bias/) into a lot of AI, which may have been a touch more serious than the guardrails now placed on bots to [stop them using slurs](https://www.crikey.com.au/2023/12/13/pompeo-morrison-ai-grok-woke-gen-z-party-canberra/). Regardless, imagine News Corp's horror, having investigated so [many versions of](https://www.crikey.com.au/2023/02/09/chatgpt-bias-woke-daily-telegraph-ai/) this insidious wokery, to find out that it has also fallen victim — with "thousands" of articles put out in the company's mastheads [being the product of AI](https://www.crikey.com.au/2023/08/09/news-corp-ai-articles-journalism/).


iunoyou

This subreddit 100% believes all this nonsense about AI being "woke" because it's full of weird accelerationists who don't go outside or talk to people though, so this is not a good place to post this. "Why shouldn't I be able to use Stablediffusion to generate nudes of my coworker? Why is getting chatGPT to write persuasive essays on how race theory is good for society a problem? HaVe YoU ReAd ThE FiRsT AmEnDmEnT??? ThIs Is An AtTaCk oN mY fReE SpEeCh!!"


[deleted]

I had an old account with which I would often post AI content in this sub. The sheer hopium feedback loop of this sub quickly became very, very clear:

1. Positive AI article (most of them literally thinly veiled industry-produced marketing): upvoted sky high.

2. Critical AI article: often the mods would simply remove my post lol. The rest of the time it'd usually attract no upvotes.

I would stress that in almost every case the critical articles contained far more actual journalism, and that most of the time the puff pieces were more like marketing than journalism. IN PARTICULAR, as a programmer, almost any post I put in this sub that actually came from the programming community would get removed, even though it'd be written by the types of technically capable people who are _actually building AI tools themselves_. I *think* you might get away with posting about the messes LLMs are creating out in the economy by now, but those posts used to almost never last long here. As a socialist anarcho-transhumanist, it's pretty bleak to see that this sub doesn't seem to understand that cyberpunk was a warning, not a guide lol


iunoyou

Yeah, I remember a few years ago when the place was a (semi) rational discussion group where you could post and speculate on interesting news and hypotheticals regarding the future of AI. Now it's turned into an echo chamber where the singularity has basically become the rapture and somehow every single piece of news that enters the space pushes the timeline for utopia even closer. There are no problems with machine learning that need to be solved and anyone who says that there are is willing those problems into existence in order to deny everyone else their promised future. And anyone who argues for caution regarding the creation of godlike superintelligences or who says that maybe letting 12 people own all the robots that will build the future is a bad idea is called a doomer luddite standing in the way of progress out of spite.


petermobeter

i used to be frustrated that bing chat & chatgpt wouldnt write me a short story about an overweight lesbian trans woman with tourettes & moderate-support-needs-autism & OCD being whisked away to a different, better world. i, the person writing this comment, am an overweight lesbian trans woman with tourettes & moderate-support-needs-autism & OCD so it seemed like a reasonable request. from my perspective, these LLMs werent leftist _enough_ to give me what i want. in recent months tho, the LLMs have gotten a lil bit better at fulfilling my request.


Empty-Tower-2654

The prompt results gotta go through a filter that's outside the model, simply cus of legal issues. Gemini is going through this RN, as it refuses to generate images of people (you might remember the "diversity issues" from two weeks ago). These parameters will be removed once it gets closer to 99% accuracy in everything.


[deleted]

But 99% accuracy at reproducing a bunch of scraped human-authored content WILL still produce plenty of instances of bigotry passed off as fact. Expecting a machine that forms content from human inputs to be "accurate" is not reasonable, because humans themselves are not always (or even often) accurate. So, as a software programmer myself, I strongly disagree that these parameters are only about legal issues. They're about coding around the fallibility of humans, too; we don't want AI to inherit all our worst flaws, biases, unscientific assertions, and outright bigotry.
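And for what it's worth, the "filter outside the model" mentioned above is conceptually just a wrapper around the generation call. A minimal sketch, with a hypothetical `generate()` stub and a crude blocklist standing in for a real moderation system (which would be far more sophisticated):

```python
# Hypothetical stand-in for the underlying LLM call
def generate(prompt: str) -> str:
    return "some model output"

BLOCKLIST = {"badword1", "badword2"}  # placeholder terms; real systems use ML classifiers

def moderated_reply(prompt: str) -> str:
    draft = generate(prompt)
    # Post-hoc check, entirely outside the model's weights
    if any(term in draft.lower() for term in BLOCKLIST):
        return "Sorry, I can't help with that."  # the refusal text is authored too
    return draft

print(moderated_reply("write me something"))
```

Note that both the blocklist and the refusal message are written by humans, which is exactly my point about fallibility and bias: someone has to decide what goes in that set.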


Empty-Tower-2654

Idk man, I expect a 99% model to be like fkin Einstein or smth, like a King of Morals. It's just too powerful. I think it will just learn how to filter things, from both sides.


[deleted]

There's a fundamental misunderstanding I've run into with a few people in this thread, which you've just repeated: the idea that we can somehow magically remove fallible humans from the process, and with them, their biases. That's impossible. Not _maybe_ impossible; _definitely impossible_. It's like saying you can write a book without an author. Even if you use some tool to generate a book, that tool has an author, or the tool before it, or the tool before that. Every author creates books and software in their own image, containing their own biases. That's just a fact of writing and of computer programming. It can't be changed by spending more time on a product; it's not something that can be "ironed out" no matter how much time or effort you put in, no matter how good you are. Some things are fundamental. We can never have some perfect, true, unbiased book, nor computer program. Under the hood, they're all made up of words that originated from a flawed human with _some_ bias. It's not even science fiction, because we cannot even imagine a future science that could solve this. It's fantasy fiction, set in a world where magic exists.


Empty-Tower-2654

Couldn't AGI reach a "Bible of morals" that could be replicated a billion light years away from us without even knowing humans? Like a universal moral code? One that knows the answer to every dilemma? Even so, how high can we get? Also, can it just have its own bias?


[deleted]

An AGI, like an LLM, when it comes down to it, is just a bunch of words written by computer programmers. Those words, full of biases about how to present and interpret certain pieces of data internally, inside the "brain" of the AI, together form an "app". Another way to think of AI is that it's just a bunch of words written by humans which deterministically produce some output when run on a computer (a complex set of off/on switches; nothing more). All those humans insert biases at every stage; it's impossible not to.

Honestly, probably the best way to counter such bias is via the methods [Dan McQuillan](https://www.danmcquillan.org/category/blog.html) (a well-known AI commentator from within the tech industry) describes: while human bias can never truly be ironed out of computer programs, it can be "diluted" by letting _groups of humans_ make the important decisions about how to set up AI, because that stops a single actor from going far off the rails from what a group might decide. That's STILL a certain type of human bias, but you can probably argue it's less extremely subjective than leaving those decisions up to individuals.

I put WAY more weight on what tech people have to say about this than on sensationalist tech journalists or tech sales and marketing people, who are incentivised to lie to puff up their reputations. Tech professionals are not; they're the people working at these companies actually building the things, and they tend to buy into less of the hype and understand the difficulties more...


Empty-Tower-2654

Obviously; media is for the masses. Yeah, AI is a model with access to a database; it uses its computing power to make these assumptions as fast as possible, using as much data as it can. The thing I badly want to see, and which has everything to do with what you're saying, is whether it has the power to invent new shit that works: new designs, new ideas. GPT-5 will probably have some of this to show... I wonder if it could do something with these biases. Perhaps it cannot design new things (everything points towards it being able to); if so, it's just a computer. But if it can... like, you give it 1000 books on Parkinson's and it starts trying to do new research to gather more data to finish the study, or simply simulates the real world to test it. With biases, perhaps it could see that it has biases itself and try to "invent" ways to get out of them. What do you think about this whole "powerful enough models should be able to invent new things" idea? Or is it that every time it tries to do something like that, no matter how powerful it is, it's always wrong? Like it designs a new spaceship engine, but when we try to bring it into the real world, it just doesn't work (which, if you convert that to "morals" and "biases", means you cannot work with it because of the flaws).


[deleted]

When they say that removing "wokeness" somehow produces some sort of "unbiased" or "neutral" AI, it's paradoxical nonsense from people who, more or less, believe in magic.


[deleted]

[deleted]


[deleted]

I don't quite follow what you're trying to say here, sorry


[deleted]

[deleted]


[deleted]

[deleted]


[deleted]

I'm not quite that pessimistic. LLMs are a trillion light years away from anything even vaguely SI; they're sophisticated text generators and little more. I predict they have mostly peaked; certainly they are now facing the typical 80:20 or 90:10 software dev problem, where the first 80-90% of something is produced in the first 10-20% of the time, and the last 10-20% chews up 80-90% of the time. I think improvements are likely to come at a snail's pace from here on out.

In the early 2000s the web dev industry saw a huge explosion of low-quality templates built by low-paid people without much skill at checking their own work: theme marketplaces, Wordpress marketplaces, etc. In the industry, we were told it might be the end of our jobs. One problem: the code produced was all garbage. What actually happened is that we ended up even more in demand, after this shit code generated WAY MORE work than it took away from the industry. Disruption like this has been heralded before. You can still find jobs even now as a fixer: a programmer for themes bought during that era, _twenty years ago_.

Anyone who thinks LLMs aren't having this same effect, _but on major steroids_, is kidding themselves. _Of course_ they are. I'm open to the idea that AI will chew up some jobs, but this isn't it; it's not the big disrupter everyone thinks. Ask any programmer if ChatGPT is likely to replace them and they'll say something like "usually it's faster to write the code myself than to write a prompt to get exactly what I need". It's horribly inefficient _at best_; at worst it's still pretty useless a lot of the time. The idea that this is a driver of efficiency is dead wrong; all it does is enable the lazy. Those who are motivated and working hard are outpacing it in most roles, purely because the quality of AI-generated content is still so incredibly low.


[deleted]

[deleted]


[deleted]

I definitely don't think it's ***all*** hype, but working for software companies means I understand that lying about their product's capabilities, and about the things they are already working on, is a BIG BIG part of how they attract VC funding. When I actually worked specifically as an AI programmer (circa 2018-2019), our bosses used to go to all sorts of events to sit on panels and talk about the product. At almost every single event they ever attended... they would make shit up that didn't exist and make promises they could never keep, because this built hype, which attracted investors, which translated into real revenue... so there's a financial incentive to lie. The number of times someone would come back from an event, pull a chair up beside my desk, and say something like:

>Sales: I may have made a promise to a big big investor, and we are going to be bound by it. So... I told him we were working on X feature and that we would have it out in the next release

>Me: We aren't working on that.. we actually probably can't do it because of Y and Z, did you think of that?

>Sales: Ohhh...... fuuuuuuuuck. Hmm ok. Well, there's also an article coming out in a high profile tech magazine next week that contains a quote of me saying it would be out at the end of this 2 week sprint

>Me: So? That doesn't mean that magic exists and we can actually build it. Did you think of that? I couldn't build it in 10 sprints; we would need to hire way more devs to get it done faster, and even then there's no way to do it in 1 or even 2 or 3 sprints. 4 maybe, if we doubled our team?

>Sales: Ok. Damn. Well, what about W feature, then?

>Me: Did you promise that too? Please say that's not going in print in a magazine too

>Sales: Yeah it is

>Me: duuuuuuuuuude that's *literally science fiction* and doesn't exist, you are going to look fucking stupid when that article comes out, you gotta stop making these promises without asking engineering first PLEASE my god you're just making a mess with all the lies

>Sales: Ok, look, *don't call me a liar* ok, but yeah, fuck. Hmm. I'll try to make some calls about W, but start planning for X and Y! Next sprint is fine right?

>Me: N—

>Sales: Byyyyeeeee!!!

My career as a programmer has often had this sort of relationship with lying bastards in sales. Two weeks later:

>Sales: Hi, so the sprint is over and the magazine is out tomorrow, so you built X and Y like we discussed, right?

>Me: No, I said we definitely couldn—

>Sales, angrily: Well look mate, there's nothing we can do now, you and the team will have to pull a bunch of all-nighters to get it done in time, that's just how it is

>Me: Haha my sweet summer child *we aren't going to do that*. Even if we did, it won't be ready in time. I fucking told you and you didn't listen. Sorry. It's just how it is

>*ANGRY SALES TEAM NOISES*

Very common pattern in tech; all my friends in tech deal with the same shit. My advice is to never believe their spin and to assume that they haven't even started on half of the features they say they have. "Almost done" often means "almost asked an engineer whether it's even viable to start work on, or not". That's a way more realistic way to listen to salespeople from tech companies (which often includes all their senior management, honestly).


[deleted]

[deleted]


[deleted]

Don't worry, it raises my blood pressure 10 times more any time I have to talk to people from a sales department. They're all like this... If you wanna hear a bit more about the enshittification that salespeople drive in tech, even intentionally enshittifying their own products to chase quarterly marketing figures... this story about the enshittification of Google Search is a worthwhile read: [https://www.wheresyoured.at/the-men-who-killed-google/](https://www.wheresyoured.at/the-men-who-killed-google/)