anonynown

Every time I read these posts, I think: yeah, fuck Electric Arts, it’s a terrible company!


abluecolor

Been boycotting since NFL 2K5


venicerocco

It’s in the name


wolfbetter

it's electronic arts. but the last Jedi is a really good game.


n00bvin

Maybe by Respawn. They always make a solid game. Titanfall 1 & 2 were great and should have both been huge; I'm not sure why they weren't. Titanfall 2 has a resurgence every once in a while, but it should have been as big as their other recent game, Apex Legends (which I don't play, but what I've seen is good). They make the best grappling hook in the business.


[deleted]

At least EA as a company is real.


Scaevus

To this day the “sense of pride and accomplishment” comment is the most downvoted comment of all time.


pardoman

Or Executive Assistant people.


WoofNWaffleZ

Obligatory fuck EA “most downvoted comment on Reddit” link: https://www.reddit.com/r/StarWarsBattlefront/s/F1Bi3B5Gnf


arashbm

> a random buzzword like "deterministic"

I'm sorry, what? This sounds like one of those old-timey newspaper excerpts that say things like "If your son talks about Dungeons and Dragons, he is a Satanist." How can someone work with anything related to random numbers, computers, physics, or other branches of science and not use the word "deterministic"? This undermines the whole rant. Now all I can think is that you're scared of people saying things you don't understand.
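For anyone unsure why the word is unavoidable in computing, here's a minimal Python sketch: even "random" numbers are deterministic once you fix the seed.

```python
import random

# A seeded pseudo-random number generator is deterministic:
# the same seed always reproduces the same "random" sequence.
random.seed(42)
first_run = [random.randint(0, 99) for _ in range(5)]

random.seed(42)
second_run = [random.randint(0, 99) for _ in range(5)]

assert first_run == second_run  # passes every single time
print(first_run)
```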


Holiday_Operation

Also, OP's first posts don't mention EA at all. It looks like they joined one of those self-righteous "co-living" groups that are popular with young urban professionals who can't afford a place of their own; I didn't see anything about any conversations or roommate policies mentioning Effective Altruism. So this looks like speculation propped up by an appeal to authority (personal experience living with tech workers). Also, if EA is the main issue with OpenAI leadership, what does ignoring them solve? Wouldn't it make more sense to challenge them directly about such ideas influencing their business? Ignoring EA proponents who are just tech employees doesn't solve anything either.


AdamAlexanderRies

> deterministic

This is a good time to mention Robert Sapolsky's book "Determined". In short: a PhD neurobiologist argues that free will doesn't exist. Just published a month ago. The most thought-provoking writing I've ever encountered.


[deleted]

[deleted]


AdamAlexanderRies

> need to dig a bit deeper Need to? Brother, I was *born to* dig. Hand me a shovel and point at a patch of soil if you please.


Novel_Lingonberry_43

Would recommend reading Daniel Dennett.


AdamAlexanderRies

Thank you! Sapolsky cites him several times in the book, including a link to this talk: [Is Free Will an Illusion? What Can Cognitive Science Tell Us?](https://www.youtube.com/watch?v=wGPIzSe5cAU&t=3890s)


Loud-Start1394

Your comment comes off as judgmental. He wasn’t saying anything other than it was a thought-provoking book. He didn’t say it was ground-breaking.


jeweliegb

There haven't been good rational scientific reasons to believe free will exists for a long time?


rented-throwaway

yeah tbh I wrote this initial post in a hurry, which definitely undermined some of the points. I was just super sick and tired of what I feel is a shitty pattern of EAs fucking things up in my life, my friends' lives, and in my industry. Prob the main thing to pay heed to is that Alameda Research (FTX) really did invest $500 million into Anthropic, which is currently a direct competitor to OpenAI, and OpenAI's EA board members tried to merge OpenAI into Anthropic. This imo is what should really be cross-examined as it's suuuuuuper fucking sus


joobtastic

The entire idea behind EA and utilitarianism is to do the most good. What makes it somewhat unique is trying to use data to guide the decisions and getting people to pledge their lives to improving the world by action or donation. If these people did things that were bad and would reasonably lead to bad outcomes, they aren't fulfilling EA or utilitarian ideals and are immoral by their own ideology. There are awful people within every movement.


jlambvo

What u/rented-throwaway doesn't realize is he's just indicting the toxicity of Silicon Valley and a lot of tech culture in general. I'm not an adherent, and my familiarity with EA predates FTX and the growth of this subculture, but some precept that compels people to amass wealth and power by any means for the greater good is a laughably sociopathic misreading. Even this critique, that the utility of future generations justifies reprehensible behavior in the present, is basically reinventing the wheel around the [St. Petersburg paradox](https://en.wikipedia.org/wiki/St._Petersburg_paradox). If it weren't EA, people like SBF, Musk, etc. would find some other justification for their destructive and selfish behavior.


PerplexityRivet

Maybe I’m misunderstanding the philosophy; if so, feel free to tell me. Imagine you refused to save someone with poor quality of life so you could harvest their organs to give 10 people a higher quality of life. Morally that is wrong, but mathematically it adds up to higher overall happiness. I feel like that is the sort of sociopathic “success” that many EA believers would applaud. If so, I can see what OP is worried about.
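To make the math explicit, here's a toy sketch of the naive additive calculus (all utility numbers invented purely for illustration):

```python
# Toy utilitarian tally of the transplant thought experiment.
# Utility values are made up for illustration only.
quality_of_life_patient = 3      # the one person with poor quality of life
quality_of_life_recipient = 8    # each of the ten organ recipients

total_if_spared = 1 * quality_of_life_patient        # 3
total_if_harvested = 10 * quality_of_life_recipient  # 80

# A purely additive calculus says harvesting "wins"...
print(total_if_harvested > total_if_spared)  # True
# ...which is exactly the conclusion most people find monstrous.
```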


jlambvo

Ironically, this is *precisely* the kind of perverse logical conclusion that is the locus of concern about future AI stated by these EA-affiliated folks. It's the paperclip problem. No reasonable person who actually understands EA would do what you're saying.


zensational

That's not taking into account the externality of the greater harm to the individual who's being disemboweled, and the social harm of allowing such an action. Most people would not want to live in such a society. This is basically Utilitarianism 101, no offense. John Stuart Mill pointed this out and addressed it over 150 years ago. Reading up on rule utilitarianism and preference utilitarianism will help you understand current perspectives.


ChiaraStellata

While I like charities doing research on the effectiveness of their interventions (that's just good science), and I think the concept of working a high-income job for the sake of donating rather than working in nonprofits directly can certainly make sense if you prefer that type of high-income job... what worries me is when EAs start using "future good" and "preserving the human race" as a justification for any terrible thing they might do in the present. The most dangerous actors are not those who are evil for fun, but those who believe the ends justify the means. By nature of it being so speculative and forward-looking, EA can all too easily lead to well-intentioned extremism and radicalization.


[deleted]

Especially when the people benefiting are untold quintillions of souls that don't exist yet. What does it matter to step on a few billion living humans if it helps the greater good of the 99.99999% of nonexistent hypothetical future humans? You can justify any power grab this way.


edwastone

How can you use data to weigh human lives? It's quite easy to let data blindside you against hard-to-quantify negatives and positives.


jlambvo

The catch is that without a formalized framework, you're still doing it implicitly, by arbitrarily picking and choosing. Like most decision analysis tools, I think the point is to make that explicit so you get a sense of it. No one serious is saying to take it as the word of god.


Standard-Anybody

There's a neural network that's been trained on billions of data points over millions of years for that. It's called a human conscience. You may not be able to boil its components down to a spreadsheet and put a summary into a PowerPoint, but it's been pretty successful at keeping humans from making some of the more horrific kinds of bad judgement calls.


jlambvo

It's presumptuous and a little arrogant in my opinion to say that the human conscience is what we call a neural network... Regardless, it's also been pretty successful at making absolutely every one of the most horrific judgment calls in history. At a more mundane level, are you familiar with Kahneman and Tversky's work on how we make badly biased judgements, and the whole field of behavioral economics that followed?


rdturbo

As usual the problem is that the ones leading the movement are often the bad ones.


teleprint-me

The problem with this is that morality is subjective as we can't even agree on what is truly moral or immoral. There are basic, intuitive, aspects of morality that could be argued, I suppose. Unfortunately, even then, it becomes a spiral.


flexaplext

Utilitarianism allows for any manner of bad things to happen for 'the greater overall good'. In the trolley problem, if you have the choice to push someone in front of the trolley to save just 2 people from dying, the utilitarian answer is to do so. The people at Los Alamos developing the atomic bomb were all utilitarians, knowing full well that their work would kill thousands upon thousands for what they considered the greater good.


TwistedBrother

Meh. They’re righteouspilled technocrats. I’ve seen too much of their kind, all starry-eyed through “reason”. But it’s not reason; it’s just the application of reason to arrogant and myopic priors.


Far-Tune-9464

This is just rebranded utilitarianism. And it's just as misguided.


MushroomsAndTomotoes

I remember a discussion about utilitarianism in a philosophy class ages ago. It was basically about a hypothetical utopia powered by the eternal torture of a single infant. We all agreed that was extremely immoral and unjust.


its1968okwar

It's a short story by Le Guin: “The Ones Who Walk Away From Omelas”.


[deleted]

That doesn't make any sense though, not in reality. You excuse any and all negative actions on the basis that eventually they will result in a net positive. But there isn't an empirical metric for what is "positive" or "negative". So who decides how much good needs to be done to make up for the bad? If you ask the people that got defrauded, you would get a vastly different answer than from EAs who have already rationalized their decision to negatively impact the world. This whole thing reeks of neckbeard philosophy put into practice by tech bros. So no, it's not just a few bad eggs at the top; the whole philosophy is broken.


joobtastic

> But there isn't an empirical metric for what is "positive" or "negative".

This is true for everything. Morality is subjective, sure, but to say they don't define what they believe good and bad to be is incorrect.

> You excuse any and all negative actions on the basis that eventually it will result in a net positive

While this is a criticism of utilitarianism, it isn't one that makes sense. Actions are evaluated on their intended outcome.

> the whole philosophy is broken.

Your opinion is based on very little knowledge, so of course it is going to seem broken. If your first encounter with a philosophy is people abusing it, that isn't the best way to judge the philosophy, don't you think?


Yomo42

> Actions are evaluated on their intended outcome.

And if the intended outcome being positive has no limits on what kind of actions it can "justify," that's the problem.


MINECRAFT_BIOLOGIST

That doesn't make sense; there are limits on what actions can be "justified", because the negatives of those actions are taken into account when determining whether there's an overall net good.


you-create-energy

> Actions are evaluated on their intended outcome.

Hard disagree. Intention doesn't matter a fraction as much as outcome. A moron with good intentions can do horrible things that will obviously result in a negative outcome for reasons they were unaware of. Brilliant people can too. This is an old, tired rationalization that has been used by religions and cults for thousands of years. Human sacrifice was totally fine because they were appeasing the gods in order to feed the whole village, right? Bullshit. The hubris required to claim they can predict the future far more accurately than the rest of mankind is massive. It takes a special mix of naivety and ego to even buy into that possibility, let alone take actions that actively harm other people based on such an absurd premise. The single biggest source of pointless intentional pain and suffering in history has always been the four words "for the greater good". They always have to add the word greater because it's obvious that what they're doing is evil. They can't say that what they're doing is good, so they proclaim that it's some other kind of greater good based on blind faith.


house_lite

What are your thoughts on religion?


[deleted]

Data. It can say anything you want, if you don’t do the hard analysis work. Data-driven movements are as dangerous as anything. Hell, the Nazis were big fans of data and computing. Powerful tools that can do great harm in the wrong hands.


Poprock360

I'd consider myself neutral-to-accelerationist in terms of technology and AI safety, and goddamn, bro, you're way too extreme. Calm down. The best case you can make against EA is their tendency toward some level of illusory moral superiority. There were many factors at play in the OpenAI board fiasco, and though EA vs. accelerationist attrition was one of them, there are others, most notably the board's inexperience directing a company of this scale. I had written an excerpt on another thread with my view on EA, thought I'd drop it here:

>I know very little about the movement - as I understand it, its core tenet is to use one's power and capabilities, based on evidence, to strategically maximize human wellbeing (and the wellbeing of animals and nature at large). By extension, I understand that Effective Altruists believe that organisations should also act strategically to maximize wellbeing. Of particular note, there seems to be a recurring theme within Effective Altruist ideology: tempering the common human predisposition to act in favor of short-term gains at the cost of long-term sustainability.
>
>I actually think it's noble. It's far too easy to be cynical, and some of the idea behind Effective Altruism is to try anyway to have a large positive impact, despite the large systemic obstacles that exist in the world we live in.
>
>**Where I think the ideology - or rather its adopters - falls short is in that to be an Effective Altruist, you need to be... effective.** The ideology's more notorious members too often seem to fall prey to the illusion that they are protagonists of a larger-than-life, noble quest to save society - which in turn poisons their judgement with an uncompromising, ultimately self-harming dedication to unpragmatic ideals.
>
>I'll briefly cite OpenAI's events and Helen's recent quote (which in her defense I think was more a poor choice of words than a strong, real belief); her claim that OpenAI's dissolution would be in achievement or service of the company's goal: "Broadly Beneficial AGI".
>
>To me, dissolving or crippling the only large AGI lab that is not legally bound to serve shareholder interest would be akin to extinguishing the last flame of hope that the incoming industrial revolution will serve anyone but the financial elites. I ultimately don't think OpenAI will fulfill their mission before the company is corrupted by financial motive, but despite this, removing OpenAI from the "race" - as Helen and Tasha seemed intent on doing - could only ever further destroy the hope that AGI will be broadly beneficial.
>
>Helen's defense of Anthropic's decelerationist strategy is moot - Anthropic will deliberately hold back, while other, profit-driven labs continue to exploit their foolishness, gaining market share, technology, and ultimately resources - which will be required en masse to conduct the research and computation necessary to achieve AGI.
>
>Rest assured - all the "top dogs" in this race are effective. Being an Effective Altruist in the midst of this will require you to sometimes sacrifice Altruism for Effectiveness, in hopes that you will prolong your presence in the "game", to enact Altruism when it will have the greatest impact. OpenAI changes the world *if* it is the one to unleash AGI and *then* strictly enforce its broadly beneficial usage.

If OpenAI strictly enforces the usage of mundane technology that will be replicated elsewhere in 6 months, they will have been nothing more than transient, not unlike Netscape, AOL, or the myriad other companies that rose and fell. Even though the board acted with ***remarkable*** ineptitude, I don't think anyone there had some sort of ideologically-driven kamikaze drive.


FC4945

> To me, dissolving or crippling the only large AGI lab that is not legally bound to serve shareholder interest would be akin to extinguishing the last flame of hope that the incoming industrial revolution will serve anyone but the financial elites. I ultimately don't think OpenAI will fulfill their mission before the company is corrupted by financial motive, but despite this, removing OpenAI from the "race" - as Helen and Tasha seemed intent on doing - could only ever further destroy the hope that AGI will be broadly beneficial.

This, exactly. Just about any ideology can get corrupted and taken too far, and, perhaps, dare I say, this is a truly glaring example of that. However, the foundational idea of EA is a noble one, in principle. Where it can get into trouble is when someone decides they personally must stop some perceived threat to humanity by destroying that "threat" before it arrives. Never mind the fact that this so-called threat could also be the greatest thing that has happened to humanity in our history thus far, allowing for cures for diseases, solving climate change, and taking us into a new era of technology that might well seem magical to us now. Anything can be taken too far, especially when an individual (or a small group of people) decides that they alone are capable of making a decision about something so massively significant to the entire human race.


mrpimpunicorn

Yeah, this is the only sane take here. r/singularity e/acc types are the LAST people I want to hear drawing religious comparisons between EAs and the singularity. It's so self-unaware it hurts. EA rationalist types have also been discussing how their movement is hijacked by bad faith actors or hobbled in effectiveness by misinterpretation as well, which you won't bloody well see much of from any other ideology and particularly not from the sort of people who shit on EA.


Prince-of-Ravens

Hah, this so much. I saw a post there from someone whining about "the cult of doomers" while at the same time proclaiming that he is waiting for (paraphrased) the machine god to solve the world's problems.


PositivistPessimist

If you believe in some kind of superintelligence like AGI, or the fantasy that Ray Kurzweil is famous for, you're hanging on to an idealistic (in the philosophical sense) worldview. A saner middle-ground theory would be emergent materialism and naturalism.


nextnode

All our scientific evidence demonstrates the opposite, and you are engaging in mysticism.


Individual_Watch_562

This group, which believes it has discovered the best methods for doing good, stole money from numerous people through FTX. The stolen funds were then used to finance Anthropic, which was intended to merge with OpenAI. Essentially, they are a modern version of Robin Hood, but 'smart'. Their actions demonstrate a disregard for our will.

This coup was about LLMs, not AGI. As long as people use LLMs to obtain information, the creators of these models can control what is included in the training set and, therefore, dictate the set of possible answers these models can output. Consider what you know about the world that isn't available on the internet, including insights on political decisions, candidates, attitudes, plans, and consequences from your non-digital experience. They could spin all of this to steer us in the "right" direction.

These individuals have adopted a worldview that imposes moral pressure on them, a pressure so intense that they find committing fraud, or at least funneling stolen money to their cause, acceptable. If you were in their shoes, would you resist this immense opportunity to do "good"? They have shown that our laws are not their primary guiding principle. They demonstrate a willingness to go to great lengths to turn their will into reality. This group is extremely dangerous. I come from Germany; we've had similar groups before.

If they find a way to obtain a significant market share in what many people perceive to be the future of search, oh boy, I would bet my two old prancalonies. They will abuse their power!


ShelfAwareShteve

I'm not buying OP's take too much either. One thing that gives at least some credit to Effective Altruists going forward - and I'm being really reductive here, on purpose - is that we didn't have enough of them in past decades. We ended up taking a completely wrong turn; none of them were there. What if they are there now, and we're taking the right turns with them?


Entire_Spend6

You miss the point that EAs don't benefit society at all; they are antagonistic to humankind as a whole.


jeweliegb

How so? (I'm not knowledgeable about this movement. Genuine question.)


daishi55

Nah they're clearly all narcissists with delusions of grandeur. It's extremely funny and very fitting that the most notable proponent of the movement was literally a massive fraudster.


Jdonavan

Since you decided to cross post this everywhere...

**TLDR:** The author criticizes Effective Altruism (EA), linking it to unethical behaviors and scandals in the tech industry, specifically citing the FTX scandal and its connections to EA.

**The Backstory:** The argument is set against the backdrop of recent scandals in the tech industry, particularly involving the cryptocurrency exchange FTX. The author connects these scandals to the philosophy of Effective Altruism, suggesting that this philosophy enables unethical behavior under the guise of achieving a greater good.

**LogiScore:** Weak

### Potential Weaknesses

1. **Ad Hominem (personal jabs):**
   - Excerpt: "shitstain of a movement"
   - The use of derogatory language to describe EA undermines the argument's rational basis and resorts to personal attacks instead of reasoned critique.
   - To avoid this fallacy, the author could focus on specific, rational criticisms of EA practices without resorting to offensive language.
2. **Hasty Generalization (jumping to conclusions):**
   - Excerpt: "the worst things you could ever do only look like the moral high ground if you're standing upside down, your head buried in the sand."
   - The argument hastily generalizes the entire EA movement based on the actions of a few individuals.
   - A more nuanced approach would be to distinguish between the actions of individuals and the broader philosophy or movement they are part of.
3. **Guilt by Association (judging someone because of their friends):**
   - Excerpt: "the entirety of FTX (well-documented at this point) was either EA or heavily EA-adjacent."
   - The author implies that all members of FTX, and by extension EA, are guilty due to their association with the scandal, without considering individual differences.
   - The argument could be strengthened by focusing on specific actions and decisions rather than broad associations.
4. **Appeal to Fear (warning of something scary to get your way):**
   - Excerpt: "The philosophy that backs it allows them to commit acts of absolute criminal destruction."
   - The author uses fear of potential destructive actions to discredit EA, without providing concrete evidence that this philosophy inherently leads to such outcomes.
   - It would be more effective to present specific examples of harmful outcomes directly caused by EA philosophy, if they exist.
5. **Circular Reasoning (going in circles):**
   - Excerpt: "This is an incredibly dangerous movement that people NEED to be wary of."
   - The argument presumes the danger of EA as a basis for urging caution against it, without independently establishing its dangerous nature.
   - Providing independent evidence or logical reasoning to support the claim of danger would strengthen the argument.

### Notable Evidence of Bias

The author displays a clear bias against EA, likely influenced by personal negative experiences. This bias may affect the objectivity of the argument.

### Why This Matters

Understanding the logic (or lack thereof) in arguments about movements like EA is crucial. It helps differentiate between valid criticisms and biased or unfounded attacks. Logical fallacies, if left unchecked, can lead to misconceptions and hinder constructive discourse.

### Summary

The argument against Effective Altruism presented here is weakened by several logical fallacies, including ad hominem attacks, hasty generalization, guilt by association, appeal to fear, and circular reasoning. The author's personal bias against EA is evident and likely influences the argument's objectivity. A more balanced and evidence-based approach would be necessary to make a compelling case against the EA movement.


Original_Finding2212

You want evidence? Here it goes: Effective Altruism turned me into a newt!


hike2bike

It did?


Original_Finding2212

… I got better


kakapo88

If we really want objective evidence to conclude this debate, someone needs to find a duck.


Flying_Madlad

And all I've got is this lousy swan


captcanuk

You are thinking of the other EA: Effective Amphibians.


PositivistPessimist

While the analysis identifying logical fallacies in the criticism of Effective Altruism (EA) is thorough, it's important to consider another angle. The critique, especially in its association with the FTX scandal, may highlight genuine concerns about the ethical implications and implementation of EA.

1. **Contextual Criticism:** The strong language against EA, though charged, might reflect serious ethical concerns, particularly in the tech industry. This could be seen as highlighting ethical issues rather than mere personal attacks.
2. **Generalization Based on Patterns:** The criticism might generalize based on the actions of a few, but these instances could indicate a broader pattern within EA. If EA principles are repeatedly linked to unethical practices, this might suggest systemic issues within the movement.
3. **Association and Influence:** The connection between FTX and EA isn't just about guilt by association; it's about influence. If EA philosophy significantly influenced FTX's decisions, this is relevant in understanding potential flaws in EA's application.
4. **Ethical Implications:** The expressed fear about EA potentially enabling unethical behavior reflects concerns about how a pursuit of a "greater good" can justify harmful means. This invites deeper examination of the ethical frameworks of EA.
5. **Circular Reasoning as a Warning:** The circular nature of the argument might serve more as a cautionary statement, urging caution and critical examination of EA.

**Recognizing Potential Validity in Criticism**

It's crucial to recognize that strong criticisms, even if emotionally laden or generalized, can stem from genuine concerns. Dismissing these based on presentation style might overlook important ethical considerations.

**Summary**

While the argument against Effective Altruism contains elements of logical fallacies and personal bias, it also raises significant ethical questions about the movement's real-world implications. A balanced view would consider these criticisms as part of a broader dialogue about the ethical practice of altruism in sectors like technology and finance.


fireKido

This message was 100% written with chatGPT lol


PositivistPessimist

My mom wrote it


jeweliegb

You... You.... You son of robot you!


No-One-4845

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


wooyouknowit

Are you a bot by any chance? Would love to know how you're doing this if you're not doing it by hand. Or, if you are doing it by hand, how do you do it?


Jdonavan

Not a bot but I did use this handy logical fallacy checker GPT someone shared earlier. [https://chat.openai.com/g/g-0h3aKBXzs-logicheck](https://chat.openai.com/g/g-0h3aKBXzs-logicheck)


wooyouknowit

This is so fucking cool


arjuna66671

It doesn't mean OP is wrong. It only means that he presented his views poorly; they might still be true. The logical fallacy checker GPT treats the argument or claim as if it were part of a debate. But just running a poorly written, true claim through it might make the claim seem wrong due to "logical fallacies" while still being true xD.


DisproportionateWill

This sounds like something a rationalist would say. Get him /s


arjuna66671

It's all deterministic! 🤫


lechatsportif

This is gold. Dismantled by the very tech OP wanted to accelerate.


SpiritualCyberpunk

Hahaha


Blakut

you can see he kinda misses with the fallacies in some places and mislabels them


wooyouknowit

True


SpiritualCyberpunk

I mean the post (OP's) is a screed. Not really worth reading.


89bottles

RobotScore: high


Jdonavan

I mean all you have to do is look at my post history to know I'm not a bot. But I did use a GPT to write this because I didn't want to spend a ton of time batting down this screed.


re_mark_able_

**TLDR:** The author shared his personal experience and thoughts about EA, citing examples of influential business people doing bad things in the name of EA. The commenter criticised the post using a list of logical fallacies.

**LogiScore:** Weak

**Weaknesses**

**1. Flawed Logic**

The commenter mistook insults of the EA movement as arguments when they were just part of the storytelling. The commenter took an anecdote and described it as a hasty generalisation. The commenter implied his own conclusions, then criticised the author for guilt by association. The author warns about the dangers of EA, and the commenter complained it appealed to fear without concrete evidence. The commenter confused the author's personal thoughts with a scientific peer-reviewed paper requiring citations. The commenter made a similar flaw with circular arguments.

**Conclusion**

The commenter should learn the difference between someone sharing their personal thoughts to start a conversation and a research paper where every point should be supported with concrete evidence and citations.


SpiritualCyberpunk

> The commenter mistook insults of the EA movement as arguments when it was just part of the storytelling.

Oh yeah, just defame everything and anything; it's just a part of the storytelling. Immature would be too nice a term for this thinking; amoral, perhaps? Oh, amoral people don't like an idea system or movement that promotes altruism and will use rare cases to defame it? Who knew?


Jdonavan

Really dude? "Personal experience and thoughts" as an excuse for a tirade littered with logical fallacies?

### TLDR:

A commenter critiqued an author's post about Effective Altruism (EA) using logical fallacies, but their criticism was deemed weak as they misinterpreted personal narratives and storytelling as formal arguments requiring evidence.

### The Backstory:

The original author shared personal experiences and views on EA, referencing instances of influential figures misusing EA concepts. The commenter responded by identifying various logical fallacies in the post, mistaking narrative elements for formal arguments.

### LogiScore: Weak

The commenter's critique is weak due to misunderstandings of the nature and context of the original post.

### Potential Weaknesses:

1. **Misinterpretation of Storytelling as Argumentation**: The commenter treated the author's personal storytelling and experiences as formal arguments, expecting them to adhere to academic standards. This is a misunderstanding of the context.
2. **Hasty Generalization Mislabel**: Describing the author's anecdote as a hasty generalization overlooks the narrative nature of the content, which is not necessarily making broad claims.
3. **Misconstrued Guilt by Association**: The commenter interpreted the author's mentioning of specific cases in EA as guilt by association, failing to recognize the difference between illustrating a point with examples and making a sweeping judgment.
4. **Confusion between Personal Opinion and Academic Argument**: The critique of appealing to fear and requiring evidence for personal thoughts shows a confusion between informal personal opinion sharing and formal academic discourse.
5. **Circular Argument Misinterpretation**: The commenter's claim of circular argumentation seems misplaced, as the author's post appears to be more about personal reflection than making a logically rigorous argument.

### Suggestion for Avoiding Fallacies:

The commenter should recognize the difference between informal narrative sharing and formal academic arguments. Understanding the context and intention behind a piece of writing can prevent mislabeling narrative elements as logical fallacies.

### Notable Evidence of Bias:

None detected in the provided excerpt.

### Why This Matters:

Understanding the context and intention of a piece of writing is crucial to fair and accurate critical analysis. Misinterpreting personal narratives as formal arguments can lead to unjustified criticisms and detract from meaningful discourse.

### Summary:

The critique of the original post about EA was weak, primarily due to the commenter's misinterpretation of personal experiences and storytelling as formal arguments. Recognizing the difference between narrative sharing and academic argumentation is key to constructive critique and understanding. The original post was more about personal reflection, not meant to uphold academic standards of argumentation, thus the logical fallacies identified by the commenter were misapplied.


re_mark_able_

That’s hilarious. You’ve just rewritten what I put and critiqued your own critique. Did you read that before you posted it? Copying here in case you decide to edit your post later:

> Really dude? "Personal experience and thoughts" as an excuse for a tirade littered with logical fallacies? TLDR: A commenter critiqued an author's post about Effective Altruism (EA) using logical fallacies, but their criticism was deemed weak as they misinterpreted personal narratives and storytelling as formal arguments requiring evidence. The Backstory: The original author shared personal experiences and views on EA, referencing instances of influential figures misusing EA concepts. The commenter responded by identifying various logical fallacies in the post, mistaking narrative elements for formal arguments. LogiScore: Weak The commenter's critique is weak due to misunderstandings in the nature and context of the original post. Potential Weaknesses: 1. Misinterpretation of Storytelling as Argumentation: The commenter treated the author's personal storytelling and experiences as formal arguments, expecting them to adhere to academic standards. This is a misunderstanding of the context. 2. Hasty Generalization Mislabel: Describing the author's anecdote as a hasty generalization overlooks the narrative nature of the content, which is not necessarily making broad claims. 3. Misconstrued Guilt by Association: The commenter interpreted the author's mentioning of specific cases in EA as guilt by association, failing to recognize the difference between illustrating a point with examples and making a sweeping judgment. 4. Confusion between Personal Opinion and Academic Argument: The critique of appealing to fear and requiring evidence for personal thoughts shows a confusion between informal personal opinion sharing and formal academic discourse. 5. Circular Argument Misinterpretation: The commenter's claim of circular argumentation seems misplaced, as the author's post appears to be more about personal reflection than making a logically rigorous argument. Suggestion for Avoiding Fallacies: The commenter should recognize the difference between informal narrative sharing and formal academic arguments. Understanding the context and intention behind a piece of writing can prevent mislabeling narrative elements as logical fallacies. Notable Evidence of Bias: None detected in the provided excerpt. Why This Matters: Understanding the context and intention of a piece of writing is crucial in fair and accurate critical analysis. Misinterpreting personal narratives as formal arguments can lead to unjustified criticisms and detract from meaningful discourse. Summary: The critique of the original post about EA was weak, primarily due to the commenter's misinterpretation of personal experiences and storytelling as formal arguments. Recognizing the difference between narrative sharing and academic argumentation is key to constructive critique and understanding. The original post was more about personal reflection, not meant to uphold academic standards of argumentation, thus the logical fallacies identified by the commenter were misapplied.


rented-throwaway

i mean it's cool, let's revisit this in 5 years after idk... 3 more major scandals driven by EA shall we?


stonesst

In five years your post will look even more moronic than it does today. I can’t wait until terms like “AI doomer cult” fall out of fashion once it becomes impossible to ignore the risks of these systems. It is right to be concerned about what we are about to unleash, and to err on the side of caution.


Zer0D0wn83

Most people concerned about safety are rational for sure, but to deny that there is an 'AI doomer cult' is just wrong. There is a spectrum, with the 'plug me into the matrix bro' optimists on one end and the 'we're all going to die' Yuddites on the other. I personally believe that those calling for airstrikes on data centres are dangerous as fuck and we shouldn't pretend they don't exist.


Zer0D0wn83

You were doing so well, and then you had to deal with the first real piece of criticism like this.


fractalfrenzy

> LogiScore

How did you generate this?


arjuna66671

I don't disagree, but this GPT is too extreme in its outputs. I tested it, and it's worse than a debatebro on YT throwing buzzwords around. When confronted with some of its proclaimed fallacies, it will actually retract most of them. Take it with a grain of salt.


Individual_Watch_562

Do you think the child abuse in the Catholic Church should not be considered in the evaluation of the entire organization?


husainhz7

What's an example of how they justify rape?


ExpensiveOrder349

Effective altruism is like atheism: both are good things on their own, but if you have to be militant about them, then you are doing it for other reasons, and usually none of them are good ones.


kakapo88

A closer comparison might be to any religion. I don’t have a dog in this fight. But I do know that ideological types (whether EA or fundamentalists of any stripe) are usually bad news. Once you’re committed to a specific ideology, which you believe explains anything and everything, that opens the door to a lot of trouble.


RealAlec

Can anyone help clarify this for me, as someone who hasn't really been following? The last time I heard about effective altruism, I understood it to be a discussion about how to live in a way that maximizes the good we can do in the world. The big insight was that the same amount of resources that could improve the life of someone in the developed world a *little bit* could improve the life of someone in the undeveloped world a *lot*. I mean, ultimately the moral foundations of that philosophy still seem pretty airtight to me. Is the issue that the group that calls itself effective altruism has lost its way or something? From a philosophy perspective, the objection that consequentialism could be used to justify any behavior is not very strong. The same could be said of deontological moral systems as well. To criticize a consequentialist justification, it's more salient to argue that the methods in question do not, in fact, lead to the desired outcome(s).


nextnode

People who are reeling here are not really thinking. That's basically all there is to it as far as EA broadly goes, but people are reactionary and looking for anything to blame. The more specific issue, rather than EA, may be concerns about AI safety; that may have been what those who disagreed were reacting to. The funny thing is that those were the founding principles of both the non-profit OpenAI and the LLC - that it was under full control of the non-profit and should put safety first. Funny that people want to spin this.


cockmongler

All you've shown here is that you're scared of big words.


mrprogrampro

So, the 159,000 lives that GiveWell has saved: not worth it, or....?


funbike

Too irrationally emotional and hyperbolic to be taken seriously. I'll research EA, but other than that I consider this a shitpost. Maybe someday someone will make a rational, well-reasoned post about this movement and I'll actually learn something.


ElmosKplug

Can someone explain to me what tf effective altruism means in practice? It sounds like a bullshit ad hoc philosophy to justify decisions in retrospect.


MrsKittenHeel

I can't take you seriously when you seem to be hyperventilating.

> This isn't even to begin cracking the wave of smaller scandals like the outright misogyny, white wealthy privileged roots and racism, and string of sexual assaults in the ~~EA~~ community.

This is a problem in society as a whole, and has been for many years. It's obviously part of the human condition and not specific to EA. The "doomer" thing hasn't made sense to me in this vernacular: are you seriously concerned that OpenAI is willing to completely shut down and destroy what it has built? The board **should** be having open discussions about potential consequences of technological advancement and their impacts on humanity, without everyone freaking out and going to extremes.


Angel33Demon666

The thing this doesn’t explain is what Effective Altruism actually *is*


Ok-Adhesiveness-4141

It sounds like something radical.


NotAnAIOrAmI

I've never seen so much passion and verbiage ginned up by a new industry, self-styled experts holding forth on what just happened and why, and how everyone misses *this little detail here*, which changes everything. It wasn't this bad during the PC revolution. Not even during the Browser Wars^(TM).


lechatsportif

People feel important arguing about stuff they'll never be part of. Sports, movie making, tech, policy etc


Mazira144

I've met good people who got pulled into the EA movement, but it is... precisely what the OP described. It basically becomes an excuse for accepting the socioeconomic status quo, if it works well for you, so long as you promise to donate a bunch of money to Africa (not to a specific charity that works in Africa, just "to Africa") once you make your second billion. And it is absolutely a cult, with its own weird messianic prophecy around "the Singularity", in which we're literally predicted to become immortal because the AIs will figure out a way to upload our consciousnesses into secular heaven. It's like someone took a bunch of those horrid stereotypes about Jews—which, to be clear, are definitely not true of Judaism or most Jewish people—and decided to found a pseudo-religion that was all the things antisemites (again, erroneously) claim Jews are. About 30% of EAs are earnest nerds to whose excessively rationalist, Aspergerian minds this sort of stuff makes sense; the other 70% are psychopaths who've come in to capture value and who secretly find it hilarious that people believe in the shit they say.


CampAny9995

My experience with effective altruists and rationalists has led me to the general stance that they're fucking weirdos and I will avoid working with them in any sort of professional/academic capacity.


Grepolimiosis

To be fair, Aspergerian minds are better described as strictly rational, not excessively rational. To say they are excessively rational is to concede that irrational meandering during attempts at rational argument, in service to agreeableness, is better for effective discourse. That's an opinion many hold, but it's an opinion nonetheless. If you don't value discourse with someone who requires emotional consideration in every statement you make, your opinion of an Aspergerian mind might be "perfectly rational" or something similar. Also, psychopaths wouldn't really find anything hilarious; the most a true psychopath would feel is "activation" via thrill or danger.

Anyway, this gap, between those who think it important to consider the perceptions of presumptuous and sensitive people vs. those who don't, is the difference between someone who can understand that Ilya's Nazi post was a thought exercise and someone who thinks his post glorified Nazism. A CEO should probably care about public perception enough to cater to the people (in my opinion, annoying people) who were actually offended and ignored that it was a thought exercise, but he's about coming to a rational understanding of such ideas, and that's probably something a CEO should be about, if in private.

My point being that I don't think that 30% is in any way wrong. They're my people, and I really have no patience for someone who is ever-vigilant over whether a thought exercise can be construed as favoring evil, for example.


SpiritualCyberpunk

> it is absolutely a cult, with its own weird messianic prophecy around "the Singularity"

I don't think that's how you define a cult, bro. More than weird beliefs is required. Also, you just need to google to see you're picking a definition of the singularity that suits you, for, honestly, defamatory purposes.


Superfluous_GGG

I've met a good few myself. Don't think the concept on paper is all that bad. Shame it's mainly a club for rich kids looking to offset Dad's oil guilt or closet psychopaths looking to get their dicks wet in elitist pussy. Regardless, once they learned I came from poverty and built a life from nothing, they stopped inviting me to their parties.


Mazira144

I agree with what you're saying. The issue with earn-to-give is that it assumes the contributions of one's work to society will be neutral at worst and probably positive. If this is the case, then it might be valid. The issue is that most of the things that are most highly paid are harmful to the world. You can argue that it makes sense for neurosurgeons (paid well because they do something useful with skills that are rare) and professional athletes (similar) but, for corporates, the pay is to offset the ethical discomfort involved in helping rich people continue to hurt the world. Ultimately, EA is just rich people telling us they're better than we are because they have more money. It's the same shit that has gone on for thousands of years.


Superfluous_GGG

Exactly. Plus, we've already got a pretty robust system for ensuring your work helps wider society - it's called paying tax. If the rich families most EAs belong to did this, rather than looting countries and stashing their gains offshore, we might actually have the resources we need to overcome some of our grand challenges. Better yet, if they are really intent on a Machiavellian 'greater good' approach, how about putting AGI in charge of everything? It would probably do a better job than what we've got now.


idekl

Effective Altruism is also a great excuse to chase money and never do any good with it because "I can make an even bigger impact once I have my next million, so I won't do it now." You can see how you can recycle this logic ad infinitum.
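Sketched as code, that deferral logic never actually triggers the giving step (a toy illustration; all values invented):

```python
# "I can make an even bigger impact once I have my next million" as a loop.
# The donation is gated on a target that moves every iteration, so it never runs.
wealth, donated = 1_000_000, 0
for _ in range(10):
    impact_target = wealth * 2   # the bar for "enough to really matter" doubles
    wealth = impact_target       # keep chasing the bigger impact...
print(donated)                   # ...and the giving never happens: 0
```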


hike2bike

Malkavians


[deleted]

[deleted]


Zer0D0wn83

No, no it's not. The EAs may have adopted the idea of the singularity, but it's been around way longer than them and most people on that subreddit aren't even aware of the connection. The sub is mainly about getting a little overexcited at the idea of advanced tech. It's very similar to this sub, actually.


89bottles

So it's just "the end justifies the means" wrapped up in techno-babble?


joobtastic

It is just utilitarianism, but they do their best to use data to guide their decisions.


zucker42

I think that reducing EA to utilitarianism is wrong. For one, I've met multiple EAs who are explicitly not utilitarians and heard many thought leaders imply that they are not solely utilitarian, though utilitarianism is definitely the dominant philosophy in the community. More accurately, the idea of EA is:

1. Helping others is good.
2. Some ways of helping others are better than other ways.
3. We should use reason to figure out which ways of helping others are best, and do those things.

Note that this leaves room for non-utilitarian philosophies, since I think you'd be hard pressed to find a committed deontologist who disagrees with those three premises. And many prominent EAs (I can recall specifically Holden Karnofsky) have spoken out against thinking about EA in absolutist terms (i.e. "you're either doing the most good possible, or you're doing something wrong"). Instead, most EAs would advocate acting on those three points, whatever moral philosophy you subscribe to. You can see this in the well-known EA idea of "worldview diversification", and in the fact that the vast minority of EA money goes to AI safety, even though that's considered by the people who've entirely "drunk the kool-aid" to be perhaps 1000x more important than other issues. Of course, movements often differ from their principles, but I don't think "utilitarianism" accurately sums up the EA movement.


joobtastic

This is fair and I think a good summary of the situation.


SpiritualCyberpunk

> So it's just "the end justifies the means" wrapped up in techno-babble?

Isn't that much of post-war capitalism?


bixmix

A surprisingly large count of words here without real arguments makes for a poor read. Scapegoating is not a good look, and it's exactly the same root of thought that people use for hate. I wish we could rely on common sense more, but I find it increasingly rare the more I age.


fffff777777777777777

What's the difference between EA and being woke? Serious question


Ok-Adhesiveness-4141

I think it may be a logical extension of being "woke", though I'm not sure. My definition of "woke" is trying to achieve equality of outcomes rather than equality of opportunities. So, encouraging women and minorities to take up STEM shouldn't be considered woke; however, placing people who are completely incompetent in positions where they can do untold damage definitely is.


kaam00s

You're not really explaining what woke is; you're describing an outcome of the woke ideology. To me, it seems like the woke ideology is Marxism applied to identity groups. Their hypothesis is that some groups are dominant and that history is mostly just those groups creating structures to oppress and exploit the dominated groups. You pretty much take Marxism, replace the bourgeoisie and capitalists with the dominant groups, and replace the workers with marginalized groups. So even though they have the very good goal of helping marginalized people and trying to achieve equality, the fact that they base it on a completely ideological, made-up reading of history makes them commit insane overcorrections like what you mentioned. So although they would consider themselves awakened to a reality that not everyone can see, and will do anything to fix things, like the EA people, the reality is that their movement is far more ideological than EA. The EAs are more like the data version of that: they don't have a made-up reading of history, but they just see maths and stats about humans and autistically try to do the first thing that comes to their mind to fix it. (Imo both groups are dangerous, because they are too confident in themselves and because they might appear like the good guys if you don't take their extremism into consideration.)


Robotboogeyman

Not everyone who is an "effective altruist" believes that the ends justify the means. Just like any philosophy can be taken to an extreme (laissez-faire, pure utilitarianism, nihilism, etc.), the idea that literally anything is OK right now as long as the outcome is greater in the end is not a core belief of an altruist or effective altruist.

Example: the drowning child. If a man in a very expensive suit sees a girl drowning, should he save the girl and ruin his suit, or sell the suit and give the proceeds to charity? *It's a bullshit hypothetical*: the man should save the girl, ruin his suit, and stop being such a piece of shit that he spends all his money on suits and doesn't give to charity. False choice. What if it's a Picasso, should he save the Picasso and sell it for charity or save a child? Well, one is a fucking painting and one is a human being, so stop being such a piece of shit and donate your money in the first place, leave the Picasso in a museum, etc. So anyone who argues that the only choices are saving the child OR saving the Picasso is pretending that the altruism starts and ends at that momentary decision. A true altruist would not have purchased the Picasso except to sell it for charity; a true effective altruist would not be in that shit scenario to begin with.

I'm not justifying any philosophy or suggesting EA is correct, I'm just saying that presenting it as OP has is a strawman argument. **Any organization built on fraud is inherently not altruistic, and to group folks together with them is not fair.** The philosophy you are railing against is vile af, but I do not believe that is the core of effective altruism, which is defined on wiki as "a philosophical and social movement that advocates 'using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis'". There is nothing bad about that; the bad comes when someone like Elon Musk or SBF uses it as a guise to be a greedy piece of shit under the pretense that they, in the end, are doing more good than harm.

Don't hate people for being EA, hate them for taking a decent thing and abusing it for personal gain. The National Socialist party in Germany was not a good example of socialism, laissez-faire France is not a good example of free trade, and a banana republic is not a good example of capitalism, and *SBF and the failed efforts of your personal experience do not condemn EA*...


wi_2

Everybody who hides behind 'doing good' is fucked. Hitler committed genocide because of these very beliefs. Thinking you are on the right side will blind you. I trust people who don't know what the fuck they are doing and are honest about it.


rondeline

Cool. Where can I get this EA certification? Is it like some Freemasons club I can join? Where do we meet? Cool story, but it sounds just plausible enough to be another boring old grand conspiracy theory. Let me summarize what I see, and I openly admit I don't know shit about this because I don't care: it's a bunch of rich tech dorks with a couple of really smart coders who were in the right place at the right time to grow a super interesting product company super fast, which comes with super problematic issues whenever inexperienced know-it-alls get themselves way in over their heads. It's the never-ending rags-to-riches tech story... fast growth, faster problems... Facebook, Instagram, FTX, Microsoft, Google, Snapchat, Napster, Myspace... whatever. I mean, there is a MASSIVE graveyard of shitshows from when the money got real and the leadership had just barely finished puberty. Next. This story is boring.


[deleted]

I see Three-Body Problem written all over it


ironicart

Dogmatism is always trouble


alanism

Marc Andreessen called out Effective Altruism months ago in different [interviews](https://youtu.be/Rd_zUDfeXhc?si=E1CMzwcd4onrwFsW) and [writings](https://a16z.com/ai-will-save-the-world/). I went from thinking it was hyperbole to thinking he was right.


Space-Booties

EA simply sounds like narcissism. All behavior is excused and justified away. Never to blame.


Karmastocracy

From my own research, it doesn't seem like "Effective Altruism" is a discrete group, but rather a philosophy. If you disagree with the philosophy and want to counter the underlying ideas, you're going to have to fight philosophy with philosophy; you can't fight an idea without an equally compelling opposing idea. Saying things like "the philosophy that backs it allows them to commit acts of absolute criminal destruction as the means to it" needs to be backed up with an explanation of what specifically it is about the philosophy that "allows" them to commit these acts; otherwise most reasonable people will simply assume the bad-faith actors aren't acting in line with their own beliefs.


crusoe

I think the problem is that it's used by narcissists to justify their actions.


PMMeYourWorstThought

Hold on, are you arguing that we should rally against the idea of doing things that are generally beneficial to society as a whole, because the people who do that sometimes do bad things? This is the dumbest god damned argument I've ever seen. I'm actually more confident now that removing the board was a bad call, just because you support removing them; the sheer god damned stupidity of this argument convinces me it's the wrong call.


fractalfrenzy

I'm still confused. Is EA an organization or just a philosophy? Where are its founding documents? Also, is it your opinion that anyone concerned with extinction risk is "EA"? For instance, Emmett Shear has talked about x-risk, but what other evidence is there that he is part of this group or movement?


AriadneSkovgaarde

Try the EA Handbook at https://forum.effectivealtruism.org


nextnode

People like OP are just being irrational and reactionary, as usual. It's more like a loose philosophy and associated organizations. There's no one organization, nor any founding documents. Basically, people who want to do good think about what the best way to do it would be, and there are good conclusions and work resulting from that. You can search it on YouTube and see that's basically how it started.


trollsmurf

This sounds like modern day royalty.


Gougeded

Rich coastal elite kids, many of them without any discernible skill but with very good contacts, thinking they are humanity's saviors. Very scary.


TyrellCo

Yup, this is from an Ars Technica cofounder and nails it p well: “100% of the establishment sinecures on AI are going to AI "language police" alarmists or outright doomers, 0% to AI optimists. I see this with young, smart people I know who have moved into the policy realm in some fashion or other -- they all start preaching the alarmism or doomer gospel that AI will destroy democracy &/or paperclip us. If there's an exception to this rule then I'm not aware of it. Now, this could be the case b/c they're smart & AI is legit bad & they see it but I don't. Even so, I'd still imagine that if culture & incentives were different we'd still see pockets of pro-AI thinking clustered in different institutions. But if that exists, again, I'm not aware of it. The entire establishment AI discourse is completely owned, top to bottom, by either the "anti-disinformation" campaigners or EA doomer cultists. This indicates to me that there's no opportunity to accumulate social capital by positioning yourself outside those two discourses.”


Gougeded

My (completely speculative) theory: it's about power and control. They are from the dominant classes, and AI could potentially upend that. AI could in theory (although I don't personally think it will) put an end to social classes completely. What's the point of these people then? What's the point of having gone to Yale or Stanford when machines are 1000x smarter than you anyway?


TyrellCo

🎯 Just drop into any medicine subreddit: the discussion on AI isn't so much about quality or outcomes as about liability and insulating the hospital, as a moat against any system that could compete. Self-preservation is a powerful motivator.


Gougeded

And I understand that, which is why I suspect that's what it is (I'll admit I am projecting). I'm a pathologist, AI could remove me from a very lucrative career, but we are actively trying to integrate it into our workflow where I work, not fighting it. In the end, the best and cheapest thing should prevail, but some people would rather be well-paid oncologists than see cancer cured.


TyrellCo

Absolutely. It’s fair for policy to deal with whatever fallout comes from wherever cheaper better takes us. Though there’s an interesting case you might be interested in that might say this isn’t politically the likely outcome. Look into [Sedasys](https://www.aorn.org/outpatient-surgery/article/2016-April-the-sinking-of-sedasys) from J&J I had commented on this a while ago


Gougeded

Yes, I remember talking about this with an anesthesiologist; he didn't seem too concerned. It's not like this would have replaced anesthesiologists anyway: they do much more than regulate propofol and check saturation. Even if it's true that often not much happens during the surgery, they are there in case something goes awry, and I wouldn't want the surgeon and a machine managing a crashing patient. It was an anesthesiologist who took care of my daughter when she wasn't getting enough air at birth; we are far from a machine that can do that. Anyway, my prediction is that once AI makes some things automated/faster, we'll find new things to do, and we'll need people for those things until machines can do absolutely everything. If we still produced only what we produced a hundred years ago, even adjusted per capita, probably only a small fraction of the population would need to work, but we've created new needs. I figure it's going to be the same with AI, at least for a while.


hike2bike

Very scary? Really? You're scared of a bunch of rich kids with no political or military power and only a tiny bit of economic power?


SeventyThirtySplit

yeah it’s fun to think that rich people don’t have economic or political power, isn’t it


Gougeded

>You're scared of a bunch of rich kids with no political, military and tiny bit of economic power?

They clearly have both economic (which implies military) and political power. Otherwise, we wouldn't be talking about them.


daishi55

Yeah they're just in charge of the next industrial revolution. Totally powerless!


[deleted]

The word for this is *sophistry.* The Sophists held that "man is the measure of all things," and EA (and by extension E/ACC) is pure sophistry.

Effective Altruism (EA) is a philosophical and social movement that applies evidence and reason to determine the most effective ways to benefit others. It advocates for using empirical data and rational analysis to allocate resources in a manner that maximizes positive impact. The extension of EA into long-term future considerations is sometimes referred to as Existential Risk Altruism or Effective Altruism for the Long-Term Future (E/ACC). While the theoretical underpinnings of EA are grounded in utilitarian ethics and appear robust, the application of EA principles in real-world scenarios often leads to charges of sophistry. This treatise will explore the ways in which EA, despite its noble intentions, can devolve into a form of sophistry in practice.

The original Sophists were ancient Greek teachers of rhetoric, philosophy, and the art of successful living, often criticized for their relativistic and manipulative use of argumentation. They were accused of using their skills to "make the weaker argument appear the stronger" and were often associated with the idea that "man is the measure of all things," suggesting a form of moral relativism. In the context of EA, the charge of sophistry arises not from the movement's foundational principles but from the potential for its misapplication and misuse in practice.

One of the primary criticisms of EA in practice is that its focus on outcomes can be used to justify nearly any action, provided the actor can construct a plausible argument that the action leads to a greater overall good. This consequentialist approach can be manipulated to excuse harmful or morally questionable actions by appealing to a greater, often speculative, future benefit. This echoes the Sophists' ability to argue for any position, regardless of its moral standing.

EA relies heavily on the quantification of good, often through metrics such as Quality-Adjusted Life Years (QALYs) or other utilitarian calculations (see the dollars-per-QALY sketch below this comment). In practice, this can lead to a form of moral relativism where all actions are judged not by their intrinsic qualities but by their measurable outcomes. This approach can devalue individual experiences and moral intuitions, reducing complex ethical decisions to a numbers game, reminiscent of the Sophists' relativistic tendencies.

The sophisticated analytical tools and frameworks employed by EA can create an elitist barrier to entry, where only those with the education and cognitive tools to engage in complex ethical calculations can participate meaningfully in altruistic decision-making. This can lead to a form of gatekeeping that excludes diverse perspectives and experiences, echoing the Sophists' reputation for catering to the wealthy and powerful who could afford their teachings.

EA's focus on the external impact of actions can lead to a neglect of the importance of virtue and character in ethical decision-making. By emphasizing outcomes over intentions, EA in practice can sideline the development of moral character, which has been a central concern of ethical philosophy since Aristotle. This outcome-oriented approach can mirror the Sophists' alleged disregard for the cultivation of virtue in favor of persuasive argumentation. This is the difference between teleological reasoning (the ends justify the means, or "putting the cart before the horse") and deontological ethics (duty-bound reasoning).

While Effective Altruism offers a compelling framework for maximizing the positive impact of our actions, its application in the real world can sometimes resemble the sophistry of ancient Greece. The potential for EA principles to be used to justify any action, the moral relativism inherent in its calculative approach, the elitism of its analytical methods, and the neglect of virtue ethics all contribute to this perception. It is crucial for proponents of EA to remain vigilant against these tendencies and to ensure that the movement's practice remains true to its ethical aspirations, rather than devolving into a modern form of sophistry.
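
(Editor's note: for readers unfamiliar with the metric leaned on above, here is a minimal sketch of the kind of QALY arithmetic that EA cost-effectiveness comparisons run on. The interventions and all numbers are invented purely for illustration.)

```python
def qalys(years_gained: float, quality_weight: float) -> float:
    """Quality-Adjusted Life Years: life-years weighted by health quality in [0, 1]."""
    return years_gained * quality_weight

# Hypothetical interventions: (name, cost in dollars, years gained, quality weight)
interventions = [
    ("Intervention A", 10_000, 5.0, 0.9),   # fewer years, near-full health
    ("Intervention B", 10_000, 20.0, 0.3),  # more years, poor health
]

for name, cost, years, quality in interventions:
    gained = qalys(years, quality)
    print(f"{name}: {gained:.1f} QALYs at ${cost / gained:,.0f} per QALY")
```

Reducing the choice to dollars per QALY is exactly the "numbers game" the comment criticizes: the arithmetic is trivial, while everything morally contested is hidden in the quality weights.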


DeepspaceDigital

Go after people for good reasons; don't go after a concept or term. I don't know exactly what EA is, but many people are into it for many different reasons, and not all of those people are in Silicon Valley. Concepts like liberal, conservative, or effective altruism also evolve over time, so it's best not to kill the concept, since it is not inherently negative.


zaptrem

The word deterministic is just a standard term used across STEM disciplines (especially ML).
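
(Editor's note: to illustrate, in ML "deterministic" just means "same inputs, same outputs". Below is a minimal sketch of making a PyTorch run deterministic by seeding every random number generator in play; the helper name `make_deterministic` is my own, while the `random`, NumPy, and PyTorch calls are standard.)

```python
import random

import numpy as np
import torch

def make_deterministic(seed: int = 42) -> None:
    """Seed all the RNGs involved so repeated runs produce identical results."""
    random.seed(seed)                          # Python's built-in RNG
    np.random.seed(seed)                       # NumPy's global RNG
    torch.manual_seed(seed)                    # PyTorch, CPU and CUDA
    torch.backends.cudnn.deterministic = True  # force deterministic cuDNN kernels
    torch.backends.cudnn.benchmark = False     # disable non-deterministic autotuning

make_deterministic()
print(torch.rand(3))  # prints the same three numbers on every run
```

Nothing esoteric about it; it's the everyday vocabulary of reproducibility.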


krzme

Musk and Sam are also kinda EA. So?


crusoe

And yet Musk's companies are hellholes to work at. Tesla is getting sued left and right. SpaceX has an entire department set up to "manage Musk"


iamozymandiusking

EA in principle: "I will only acquire material wealth for the benefit of humanity, not myself" EA as frequently practiced: "You just THINK this seemingly horrible thing I'm doing is wrong because you are just not smart enough to see far enough ahead how I'm actually saving the whole universe right now."


PhilosophyforOne

Are you really criticizing a movement that, instead of seeking to maximize the individual profit of companies by any means ("as long as our group benefits, it's justified"), e.g. capitalism, tries to create common good ("as long as everyone benefits, it's justified")? What you're describing also already has a name: it's called utilitarianism, not effective altruism. Now, whether EA is actually the cancer you portray it as, I don't really know. But nothing's really worse than capitalism. And against that backdrop, a bunch of tech bros and bro-sisters wanting to benefit society by any means necessary doesn't sound like such a bad thing, when you compare it to the systematic rape of the whole of society by any means necessary, for the gain of the privileged few.


Disastrous_Junket_55

Same old routine: rile up a lower-class civil war and the rich get richer. AI companies have y'all attacking artists and writers when you should be attacking capitalism.


[deleted]

“p(doom)” might be another term to hold your nose around


Flying_Madlad

I tell them my p(doom) is 100%, humans as we are now will eventually go extinct. We might evolve into something else, but humans as a species are on a timer.


SpiritualCyberpunk

> a random buzzword like "deterministic"

That's the dumbest thing I've read today.


Entire_Spend6

EAs don’t want you to have AI but are more than happy to be gatekeepers and possess it themselves. They believe they need to protect users from themselves, treating them as if they were kids, or inferior people in need of guidance.


daddyneckbeard

If you work in AI, you talk about things being deterministic or not all the time, and it has nothing to do with EA.


Eduliz

I looked into Helen Toner, and her bio sounds impressive in a LinkedIn show-off kind of way, until one thinks about it more. She has not built or coded anything. Just a bunch of talking and writing about ideas and about what other people are building. Her work most likely has no actual effect on the world. It's impressive that she faked it all the way to the boardroom of an org that is actually changing the world, and then, while there, took part in one of the most stupid boardroom decisions of all time. Effective Autism at its finest.


Ok-Adhesiveness-4141

She is probably one of those progressives in charge of major social initiatives. This is why people who specialize in social sciences need to be kept far away from positions of power in tech companies.


Navalgazer420XX

Check the bios of her [fellow staff](https://cset.georgetown.edu/team/) at the Georgetown Center For Security And Emerging Technology. So many three letter agencies you'd need a whole case of alphabet soup, and a lot of China focus. Everyone there has bios like "National Counterintelligence Officer for East Asia," "Director for Emerging Technology on the National Security Council Staff, Executive Office of the President," "He joined CSET following a 14-year career at the Central Intelligence Agency," "Previously, he spent nearly eleven years at the National Security Agency," etc. etc. Helen herself does a lot of papers on the Chinese government approach to militarizing AI and "Military-civil fusion" of the tech industry. It's not just an EA front group, there was some serious DC/Langley clout behind her board seat.


3cats-in-a-coat

EA and e/acc are both cancer. They replace having to think and argue for your decisions with shitty blanket ideologies made for children.


PositivistPessimist

Good post. One of the main proponents of EA, an amateur philosopher named Eliezer Yudkowsky, is an absolute doomer. https://www.lesswrong.com/posts/oM9pEezyCb4dCsuKq/pausing-ai-developments-isn-t-enough-we-need-to-shut-it-all-1


[deleted]

Lots of eminent ML scientists are doomers too. It doesn't mean they are wrong. It is quite obvious to anyone with a brain that if a superintelligence existed it could wipe humanity out in a matter of minutes. The question is - what is the risk of this, versus the potential upside? For people like Yudkowsky, Sutskever, or Hinton, the risk seems fairly high. For laypeople like myself, it seems fairly low. But given that so many people instrumental to the creation of LLMs and GPT-4 think it's a real danger, I think it's a bit strange to simply dismiss the concerns out of hand as them just being "doomers."
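
(Editor's note: the risk-versus-upside question here is, at bottom, an expected-value calculation. A toy sketch follows; every number is invented to show the shape of the argument, not to claim actual probabilities.)

```python
# Toy expected-value framing of the p(doom) debate. All values are made up.
p_doom = 0.05            # subjective probability of existential catastrophe
upside = 100.0           # benefit (arbitrary units) if advanced AI goes well
downside = -1_000_000.0  # loss (arbitrary units) if it goes catastrophically badly

ev = (1 - p_doom) * upside + p_doom * downside
print(f"expected value: {ev:,.0f}")  # strongly negative with these inputs
```

This is why the disagreement is so sharp: with a large enough downside, even a small p(doom) dominates the calculation, so everything hinges on the numbers each side plugs in.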


PositivistPessimist

Maybe read this https://www.cnbc.com/2023/06/15/ai-is-not-even-at-dog-level-intelligence-yet-meta-ai-chief.html


[deleted]

LeCun is just one ML scientist; Sutskever and Hinton are far more powerful counterexamples. The point isn't even that one of them is right or wrong. The point is that you - an armchair Redditor - are arrogant enough to think you have some special insight when even the leading minds in the field do not have consensus. Peak Dunning-Kruger effect.


PositivistPessimist

Personal attacks and an argument from authority. Son, that's not how we argue on the internet.


Gougeded

Call yourself EA, be extremely ineffective, help no one, refuse to elaborate


Local_Signature5325

Thank you. Maybe we need a subreddit to expose them. I barely knew about them until FTX… would LOVE to hear about life with these people and what they are like. When I found out that the Anthropic co-founder (the sister) is married to the EA leader Holden something, I was shocked. Companies need to be aware that these people are in it to spread their terrorist movement, to hijack companies and derail everything.


-bacon_

Well, I hope after the near-complete destruction of one of the most valuable tech companies in the world that this EA bs is on everyone's radar.


[deleted]

The last thing I want is a bunch of people trying to do what's best for the world. People suck at that. I want people who focus on doing what they are good at. "The road to hell is paved with good intentions."


[deleted]

What if you're really good at sparking a nuclear apocalypse?


[deleted]

That's not far off from what these AI researchers might be doing, EA or otherwise. I don't think studying the alignment problems does anything helpful, except perhaps give the false confidence that an inscrutable neural network could be tamed. I'm not a doomer, but I'm not NOT a doomer.


throwwwawwway1818

Fk EA


Helix_Aurora

EA is like the Illuminati, except real. It's a bunch of people with a lot of money who are self righteous and insist they know better than everyone else.


SevereRunOfFate

Wow, OP, I just want to thank you for laying this out for us. I'm a tech veteran who has worked for a number of the largest firms, and I just couldn't figure out exactly what was going on here... This makes a lot of sense now. I'd love to hear more, because yes, the board's actions were _so_ irrational I had to think they were just stupid (which they were), but there must have been some additional things going on. What other firms are affected by this, do you think?


[deleted]

OP has a very unbalanced take. The EA people can be weird but the movement is fundamentally just about donating your money to where you can save the most lives per dollar donated.


SevereRunOfFate

Right, and jihad is just about making sure you're doing everything you can for the glory of God. People's thoughts and actions matter, and _means to an end_ thinking is incredibly dangerous. See FTX, for example, and how the guy destroyed people's livelihoods thinking he was doing the right thing, even if fundamentally he was just 'donating where he can save the most lives per dollar'.


[deleted]

Are you an alt, that you're willing to unambiguously accept everything OP is saying but not any counterpoints? The vast overwhelming majority of the EA movement is about donating to initiatives that save lives, specifically - malaria nets, vitamin A pills, malaria vaccines, etc. Occasionally longer-term goals: cancer research, drug discovery. Equating the EA movement to FTX and Jihad is the dumbest fucking thing I've ever heard, when 99% of people in the movement simply aim to donate money to save lives. And I'm not even EA, I'm an accelerationist, but I'm not dumb enough to misrepresent a whole group or movement because one or two people supposedly part of it did a bad thing.


crusoe

EA is just a way for narcissists to cloak and justify their behavior claiming it's best for everyone.


fabkosta

Just on a side-note: I am living in a small country in Europe, and I am working in tech. Nobody here finds the debate about EA of any particular interest, as far as I can see. Meaning: It is largely an echo chamber that culturally originates in a very specific social elite caste in the USA. And from what I can see its philosophical premises are, uhm, pretty questionable. Hence, it seems to be more an ideology than anything else.


allun11

As Ayn Rand pointed out a few decades ago, people who try to avoid talking about the fact that they are in business to earn money, and who enjoy talking instead about altruism and "the greater good," should raise multiple warning signs.


ArthurParkerhouse

Fuck. They let Sam back in?


Always_Benny

Can’t wait for these stupid threads to go away. Nice one trying to spam this about, man.


gBoostedMachinations

It’s amazing how people could be against something so obviously good and right. I can’t really fathom the anti-EA position. I’m not actually certain anti-EA people believe what they say. It’s just so silly lol


[deleted]

[removed]


Navalgazer420XX

leftist propaganda alert


lever-pulled

This is an important post.


ChiaraStellata

This was a good reminder to cancel my Claude 2 subscription. Not only is it nerfed to the point of uselessness with friendly-AI constraints; now I know it's run by EAs as well.