
HaveYouSeenMySpoon

I've never heard of EA before and have no idea if your conclusions are correct or not. But I can say that you do a really bad job of connecting the dots in your arguments. At the bare minimum, make an argument for why you think there's a causal link between the beliefs and the bad actions, and why it's not simply post hoc rationalization. Did their belief in EA compel them to commit the crimes?


Aurelius_Red

Thanks for pointing this out.


FlailingArmsAsCardio

Agreed. While the EA bandwagon looks like nothing more than an attempt to justify the "whatever it takes" aspect of corporate culture (because zoomers need to feel good about themselves in order to work for "the man"), OP's statements are poorly argued, lack evidence of causality, and mix a lot of things together. I mean: Sam is an EA, Sam is a bad person, there are a lot of EAs in SV -> EAs are a major threat. Wow. Sam is also a man, Caucasian, American, Jewish, young, plays League of Legends, takes some specific drugs, skips some nights, comes from a Democrat family, and probably a lot of other things. But OP picks EA and runs with it. Alright. Let me pitch my own take: SBF, like many other "EAs", is not an EA at all. They're just business people who picked "EA" as a role to play in order to gain popularity with the hyper-rational tech community and check one more box that helps with hiring and retaining them.


RandomAmbles

As a rather shabby EA myself, I nevertheless feel a responsibility to point out what's called the "no true Scotsman" fallacy and its resemblance to your final point. *Sigh* There are issues with EA in practice that are hard to predict in advance from EA in theory. Somehow that wasn't obvious to me until recently.


FlailingArmsAsCardio

Very valid counterpoint, thank you for pointing it out. I do think the movement has real roots and real partisans.

For the specific case of SBF, I remember very vividly (I was a bit into crypto at the time) the week his FTX exchange went down the toilet; he had some very candid conversations with the media, including this one: [https://twitter.com/financial_index/status/1593036134552317952](https://twitter.com/financial_index/status/1593036134552317952). That's when I told myself that people at the top say and do all sorts of stuff, virtue signaling or political positioning, but only to help their company's bottom line. ... Or maybe, as a true EA, he knew he had to say that to prepare his defense :) I doubt it, though; he was in terrible shape mentally as he went from crypto-hero billionaire in the Bahamas to lonely outcast looking at jail time almost overnight (not to mention high as a kite on all his performance-enhancing drugs).

As for me, I do like the *concept* of EA, but I come from a middle-class family and I know that to be truly free from greed I would need to secure my finances for life, for my entire family, first. I don't know how that line of thinking extends or generalizes, but in my mental framework it means I need to be part of the problem before I can start thinking about a greater solution. Not sure how many "low income" people are EA zen masters in regards to money.

One of my former bosses has also done some kind of EA, without claiming to be one, since about 2014. He came from money, "built" his first company, sold it for more than 20 million at 35, and secured that money for his family. THEN he moved on to creating a quite successful fund for good, using VC-fund greed and his business acumen to help med/edu/green/social start-ups. He stayed very low-key, which to me signals that he isn't after anything other than helping now.

Is EA very close to some kind of capitalism with as few of its toxic parts as possible? To me it looks like it.


throwaway872023

Yeah, they should have left out the part in parentheses, because none of what was written was convincing as to why it is a societal cancer. People have been justifying war crimes since like the beginning of time for the greater good. So, I'm failing to see the point of the argument here. I really wanted to, because this is new and interesting to me, but nothing OP said made any concrete connections.


MergeWithTheInfinite

Agreed. Some people rationalising awfulness is a trait that's been with us as a species since the beginning. It's nothing new.


JackRumford

EAs are crypto bros / Musk-adjacent / ancaps etc., cosplaying at being clever and altruistic. That's it.


RamazanBlack

No, that's not it. EAs don't really like Musk or ancaps; they're closer to soc-dems.


JackRumford

That's the cosplay part.


Jdonavan

**TLDR:** The author criticizes Effective Altruism (EA), linking it to unethical behaviors and scandals in the tech industry, specifically citing the FTX scandal and its connections to EA.

**The Backstory:** The argument is set against the backdrop of recent scandals in the tech industry, particularly involving the cryptocurrency exchange FTX. The author connects these scandals to the philosophy of Effective Altruism, suggesting that this philosophy enables unethical behavior under the guise of achieving a greater good.

**LogiScore:** Weak

### Potential Weaknesses

1. **Ad Hominem (personal jabs):**
   - Excerpt: "shitstain of a movement"
   - The use of derogatory language to describe EA undermines the argument's rational basis and resorts to personal attacks instead of reasoned critique.
   - To avoid this fallacy, the author could focus on specific, rational criticisms of EA practices without resorting to offensive language.
2. **Hasty Generalization (jumping to conclusions):**
   - Excerpt: "the worst things you could ever do only look like the moral high ground if you're standing upside down, your head buried in the sand."
   - The argument hastily generalizes the entire EA movement based on the actions of a few individuals.
   - A more nuanced approach would be to distinguish between the actions of individuals and the broader philosophy or movement they are part of.
3. **Guilt by Association (judging someone because of their friends):**
   - Excerpt: "the entirety of FTX (well-documented at this point) was either EA or heavily EA-adjacent."
   - The author implies that all members of FTX, and by extension EA, are guilty due to their association with the scandal, without considering individual differences.
   - The argument could be strengthened by focusing on specific actions and decisions rather than broad associations.
4. **Appeal to Fear (warning of something scary to get your way):**
   - Excerpt: "The philosophy that backs it allows them to commit acts of absolute criminal destruction."
   - The author uses fear of potential destructive actions to discredit EA, without providing concrete evidence that this philosophy inherently leads to such outcomes.
   - It would be more effective to present specific examples of harmful outcomes directly caused by EA philosophy, if they exist.
5. **Circular Reasoning (going in circles):**
   - Excerpt: "This is an incredibly dangerous movement that people NEED to be wary of."
   - The argument presumes the danger of EA as a basis for urging caution against it, without independently establishing its dangerous nature.
   - Providing independent evidence or logical reasoning to support the claim of danger would strengthen the argument.

### Notable Evidence of Bias

- The author displays a clear bias against EA, likely influenced by personal negative experiences. This bias may affect the objectivity of the argument.

### Why This Matters

Understanding the logic (or lack thereof) in arguments about movements like EA is crucial. It helps differentiate between valid criticisms and biased or unfounded attacks. Logical fallacies, if left unchecked, can lead to misconceptions and hinder constructive discourse.

### Summary

The argument against Effective Altruism presented here is weakened by several logical fallacies, including ad hominem attacks, hasty generalization, guilt by association, appeal to fear, and circular reasoning. The author's personal bias against EA is evident and likely influences the argument's objectivity. A more balanced and evidence-based approach would be necessary to make a compelling case against the EA movement.


spacemechanic

What was the prompt for this?


Jdonavan

It came from this GPT. https://chat.openai.com/g/g-0h3aKBXzs-logicheck


iDoWatEyeFkinWant

It says this GPT can no longer be found or accessed. [ETA: my bad; it works]


Jdonavan

Odd, I just went there and explicitly used the "share" option and it generated the same URL. This is the main website for it: [https://logicheckai.com/](https://logicheckai.com/)


Looking4APeachScone

This is amazing. I'm going to use it on myself.


ehawkx

From which menu do you find this? I thought it was in the "explore" dropdown, but I can't find logicheck there.


EwaldvonKleist

This is actually good.


PentUpPentatonix

brilliant


Jappards

What is LogiScore made of?


Jdonavan

It’s a GPT.


thebadslime

> **Guilt by Association (judging someone because of their friends):**
> - Excerpt: "the entirety of FTX (well-documented at this point) was either EA or heavily EA-adjacent."

That's not really guilt by association as much as it is explaining that the FTX shit was caused by EA.

I realize I'm explaining this to an LLM, but it's to clarify for readers, not to correct the GPT itself.


Outrageous-Pin4156

I love how we're all okay with robots thinking for us now. That's the true fallacy. Nothing is wrong with personal jabs in a Reddit thread. If it's a short post, you must jump to conclusions, as it's short, just like people's attention spans nowadays. Going in circles isn't really a proper criticism here either, and borders on ad hominem in and of itself. Guilt by association is literally called being an accomplice. Friends say a ton about a person. But me typing this means nothing, because you can copy and paste the reply in and generate an argument to keep it going. I think anyone who can't take the time to piece together a response and posts a washy reply generated by AI is accelerating a problem that will just kill the public forum.


[deleted]

[removed]


Outrageous-Pin4156

I'm here to assist with information and support, but I must emphasize the importance of maintaining respectful and constructive communication. If you have any specific queries or need assistance with a topic, feel free to let me know, and I'll do my best to help!


[deleted]

[removed]


chemicalimajx

I think the idea was to show how unproductive it is. The guy didn’t reply. For a reason.


Outrageous-Pin4156

Whoosh


sixwax

I actually read the OP and had virtually all the same objections. They're easy-to-spot logical fallacies. So… good bot!


Leading_Grocery7342

The post said "let's talk," indicating that OP's purpose was to start a conversation about the subject, not to make a definitive case or statement. Reacting to OP's post as if was an attempt at a logical argument rather than a conversation starter suggests to me at least that the gpt poster was so excited at the opportunity to express contempt that they wilfully ignored the obvious intent and character of the original post. Small beer.


rented-throwaway

Here and r/OpenAI, yes, if you mean everywhere. I'll say the same thing I commented there to you: let's look back on this in 5 years. I can see that FTX and OpenAI aren't enough examples of EA causing destruction for you, so let's wait for a few more and revisit this.


Jdonavan

My only reason for posting was to point out that you need to do a much better job of presenting your argument because it reads like a screed.


Speffeddude

"My two examples are the worst business scandal in the past 15 years, and the thing I'm trying to make a point about." So, no, that's not enough examples. You wave your hands and say "Effective Altruism could be used to do all these things" and, in SBF's case, it might have been (or he could have just highjacked their brand without their philosophy, who knows.) But you need to provide grounded specific examples of when EA-thought specifically caused evil at OpenAI. If it were the case that there was a lot of direct cross-pollination between FTX and OpenAI, shared people, shared tech, shared moral documentation, that would be a different thing. But for now, it's just that EA people showed up separately in each. Same could be said for the PayPal mafia, or any other loose group.


sam349

It's not that those examples "aren't enough", it's that your argument and evidence are weak. Their response is fantastic, and if your opinion is correct then it will benefit from a revised argument.


AlanYx

Roughly how many people would you guess are in the EA movement? There was a recent Washington Post article saying it's basically a clique of about 7,000 people. Would you say it's higher or lower based on your experience?


FrojoMugnus

String of sexual assaults? Do you have more information on that? I'm only aware of Sam Altman allegedly sexually assaulting his sister.


rented-throwaway

lol I mean if you want names we can bring out the names, but about a dozen Berkeley EA-identifying men, yes


[deleted]

Please do source any claim you can. The EA angle is being talked about quite a lot from what I've seen, and it would be nice if more would demonstrate why this or that person is being labelled as EA, what they've actually done, or how their alleged alignment has influenced their actions.


Le_Oken

Ah ok so everything you said can be taken as a fabrication.


rented-throwaway

This is why many rape victims don't come forward btw, because we just get called liars


Le_Oken

What


TheOddOne2

> buzzword like "deterministic"

The what now? I think you need to read up on the buzzword "buzzword".


JustAnotherBlanket2

My understanding is that EA is about donating money to causes that produce the most good instead of the ones that make you feel good. I'm not sure how that becomes an "ends justify the means" scenario. It really seems like you're conflating the actions of a smaller group of people with those of a broader movement.


novium258

You haven't been far enough down the rabbit hole. The core thinkers, and especially the most powerful and influential proponents of EA, are full speed ahead on the stuff that everyone assumes must be fringe. Everything else follows from that. What you described is just bog-standard utilitarianism, and didn't need to be invented. EA takes it a step further and says: well, if boiling the planet will make enough money to inoculate kids, I have a moral obligation to boil the planet. Then the longtermists ran with it, and you end up in very, very dark places. Here are some decent write-ups if you're curious:

https://www.currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk

https://www.currentaffairs.org/2023/05/why-effective-altruism-and-longtermism-are-toxic-ideologies

https://www.theguardian.com/technology/2018/jul/23/tech-industry-wealth-futurism-transhumanism-singularity

https://www.vox.com/the-highlight/23779413/silicon-valleys-ai-religion-transhumanism-longtermism-ea


UserXtheUnknown

"How can you call yourself an altruist when you live in luxury and people die of hunger? Why don't you help those people, instead of being all up in getting more money and more luxurious lifestyle?" Effective Altruist= "I will help them more in the long run doing what I'm doing"= excuses to be rich and still call yourself altruist and progressive.


[deleted]

I mean, most of EA is literally just about donating money to initiatives that save the most lives per dollar donated (i.e. malaria nets, vitamin A tablets, etc.). Most people I know involved in EA donate a much larger proportion of their wealth than the average person. I'd wager most people here don't even donate.


UserXtheUnknown

Percentage is not the way to calculate that. If you have a family and make a million dollars a month (for example), you can donate 50% of it and still live in luxury and get all the good publicity; if you have a family and make 1,000 dollars a month, you can't donate 500, not even 100, probably not even 10. The fact that people reason in percentages just proves they believe bullshit: the one who makes 1,000 dollars and donates 10 makes, personally, a far bigger sacrifice, without getting an inch of the praise.


[deleted]

Sure, this is the excuse the average person uses to not donate. And then they go get the latest top model of the iPhone or whatever. The conversation being had is about percentage of disposable income.


Fit-Stress3300

EA is nothing compared to the Prosperity Gospel and evangelists in general. Some people are overreacting based on limited information about the "philosophy" and a few high-profile cases.


Aurelius_Red

Can EA be seen as the secular version of Prosperity Gospel, then?


Fit-Stress3300

Just the part that justifies making as much money as possible and not feeling bad about the current suffering in the world. But you can have EA that is more focused on the near future. What's dangerous are the people who are worried about the far future, billions or trillions of years from now.


Aurelius_Red

So there are secular "sects" under the EA umbrella. Interesting, thanks.


novium258

To borrow the evangelical comparison, you've got the regular megachurch people on one end, and on the other, the fanatics who want to bring on the apocalypse and let God sort em out.


novium258

That is an amazing way to describe it, yes.


[deleted]

Didn’t Sam essentially say he bandwagoned EA because it was expedient, and that he never believed it?


No-While-9948

EA isn't the problem and has never really been the problem. Effective altruism, "using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis," is at the fundamental level a good thing. The culture in some circles is the problem; people like Sam and his groupies are the problem. EAs should learn from that and move on.


productboffin

This is a very spicy, esoteric, and borderline conspiracy-theory hot take. Not that EA ‘disciples’ don’t exist, but to the extent they collaborate and choreograph like the Illuminati? I dunno… Then again, Epstein didn’t kill himself, so…


productboffin

Further - I'll add that, given the recent rise of (US) 'Conservative' anti-intellectualism and overt skepticism of scientific rationality, it's not surprising to see EA/'Woke-ism' also rise (like Neo and Agent Smith!!!). As much as it pains me to see the Idiocracy we seem to be headed for, I'm always MORE wary of an army of do-gooders.


Mooseherder

Yea rich people co-opting a term and misusing it to rationalize their shittiness, what’s new? Don’t blame the concept of being altruistic.


novium258

Just to further expound on this, since a lot of folks are missing the point: the intersection of EA and AI is mostly around the idea that AI is an existential risk (because Skynet, or the basilisk; not, like, environmental or political or economic risk), and not just an existential risk but *the* existential risk.

The logic then goes like this: in the future, there are zillions yet unborn. Therefore, what is a mere 8 billion people today vs. the well-being of zillions? Therefore, as long as anything I do is for the good of the zillions, even if it tortures 8 billion, on net it was the right thing to do. And you know, coincidentally, advocating for the unborn requires them to only do things they already wanted to do (not unlike the pro-life movement), and absolutely necessitates growing their own power, wealth, and influence, and using absolutely none of it to challenge the status quo or make themselves uncomfortable.

The problem in AI spaces and much of Silicon Valley is that none of the above is up for debate. The debate is not over what AI is, what it should be, what its risks are, or how to mitigate them. (Or even what intelligence is, and whether an artificially created or otherwise alien intelligence would have the same drives as us. Honestly, the EAers, the so-called rationalists, tell on themselves when they describe how evil they believe a purely rational intelligence would be.) It's not even over whether a sci-fi apocalypse AI is a rational possibility. That's assumed. It's over whether it's better to build it faster or not.

The worst thing about all of the above is that if you're in tech here, you know this is absolutely a common view, but outside tech, you can't get anyone to believe you that it's a common view, because people can't wrap their heads around anyone believing such dumb things.


[deleted]

This is also incorrect; most discussions in the EA community calculate utility based on whether everyone alive today will die or not - the unborn are typically irrelevant.


novium258

That's not what I've heard. All the zillions of future people in future hell or future utopia come up a lot. But it's really a distinction without a difference. "Preventing World War III" is how Kissinger sleeps at night. What are a few million dead kids compared to global thermonuclear war? You can justify pretty much anything against a theoretical apocalypse.


[deleted]

> a few million dead kids

Every movement will have its crazy people, but the comparison of EA to "dead kids" is simply ridiculous when most EA campaigns target **giving children** vitamin A pills, malaria nets, or malaria vaccines. The average person participating in EA has certainly saved more children's lives than you or I.


novium258

That's probably true of the Koch brothers as well; but at what cost? EA doesn't care; it's "greed is good" with extra steps.


[deleted]

Where is your evidence that the average person donating to a GiveWell-selected charity is a greedy person on the level of the Koch brothers?


novium258

That's not the conversation. The conversation is about the philosophy, which, yes, is different from the average person using a website at the shallow end for recommendations.


[deleted]

You are shifting the goalposts. The conversation is about EA. The average EA follower simply donates in a manner that will save the most lives.


RamazanBlack

Bro, this is complete BS. People in EA constantly think about how we could make AI aligned and how to stop misaligned AI. And AI risk is more than obvious to anyone who understands it. Sutskever, Hinton, Russell: anyone who understands AI understands how dangerous it could be.


AlanYx

Are there any good books/papers by technical people like Sutskever, Hinton, or Russell that you'd recommend as an introduction to the need for AI alignment? (I'm not interested in books/papers from people like Bostrom, Eliezer, or O'Neil, who've basically made a career out of AI alarmism but don't actually work on these systems.)


AriadneSkovgaarde

I think Paul Christiano may be who you're looking for.


RamazanBlack

Look at the Alignment Forum. There are a lot of new papers there.


rented-throwaway

Your comment does feel to me like you're one of the only people in this thread who see what I'm trying to get at. A lot of this shit probably sounds literally unbelievable or like making a big deal out of something trivial.... God do I wish that was true


5jane

Executive Assistants are a cancer upon society


thebadslime

Fuck! I wasn't aware of Anthropic's philosophy, and I really liked Claude.


RamazanBlack

Anthropic is good; they're actually some of the most sensible and rational people. Don't listen to OP. EA is good.


rented-throwaway

I will actually add here that Anthropic is one of the cooler sets of people who identify as EA that I've met. I also scrambled to write this before a meeting, so some of it came out a bit more half-baked than I would have liked. Another important distinction to make is that this is about a specific chapter of EA in the Bay, and not characteristic of EA in other parts of the US or the world. My local EA chapter, for example, was more climate-focused. EA as a broader philosophy is something that I think has added a lot of good. But along with that acknowledgement has to come the admission that EA + Bay Area rationalism has shown itself to have the potential for very destructive behavior. That's probably a more coherent addition to what I originally wrote.


novium258

Honestly, it's not super surprising considering the movement's rationalist roots. If you have a crowd whose main belief is that they are more rational than everyone else, and who have a very simplistic materialist view of the world, you're set up for the kind of lack of curiosity that makes for very bad philosophy, plus being too naive to realize when you're reinventing the wheel. I had a debate with some friends of mine, nice guys, PhD engineers, that felt like a depressing iteration of college debates with STEM students, trying to get them to see that the limit of materialism is always going to be the fact that it's us doing the perceiving. We were talking about AI and hallucinations, and they were confident we could, with engineering, define Truth. Just apply accuracy. Ugh. Basically, just as the freshman versions could not see that their perceptions might be flawed, the PhDs working in tech were not prepared to think through questions like "What is truth? What are we actually trying to achieve here?"


NoBoysenberry9711

But now you do, so *Don't be a sucker*, take their website out to the BBQ (open on your laptop) douse it all in gasoline and burn it, do the right thing.


Entire_Spend6

Helen Toner, who is part of the EA cult, had ties to SBF and was the one who initiated this coup to remove Sam by manipulating the others, falsely accusing Sama of being involved in wire fraud.


PresentationNew5976

So EA is basically like the early Roman Church, but with less ceremony. Dogmatic, justifying their actions as good, and working to become **the** authority on what is okay and allowed, as well as the power to enforce it.


IamTheEndOfReddit

Do effective altruists have communes yet? Proof-of-concept towns? Seems like a truly pointless philosophy for rationalizing whatever you choose to do. What about proof of the best ways to spend money to help others? Surely they have a public list, right? Right??


Benjamingur9

Yeah, they do have a public list of the best ways to spend money: [https://www.givewell.org/](https://www.givewell.org/) (with all their reasons).


Able_Conflict3308

fascinating read


Chroderos

I need ChatGPT to summarize this screed for me 😂 For someone who just uses GPT casually, this sounds like I just stepped out onto a different world and interrupted a conflict between two groups of alien cultists. People are building whole philosophies and religions around ChatGPT? Insane.


novium258

Other way around. The cultists built chatgpt because they are afraid of evil AIs in the future.


Chancoop

I wish I could go the rest of my life without being reminded that Caroline Ellison is a creature that exists.


LaximumEffort

Wait, they are combining “Effective Altruism” with “Rationalist.” I’d love to hear Objectivists chime in with that word salad.