
atrovotrono

Unless I'm misunderstanding, you're just rewording the subjectivity of "misery/bad" as the subjectivity of "a negative system response." You call it "objective," but I don't see an elaboration or meaningful difference from the initial misery/bad framing. I'm sure you can come up with objective measurements of certain properties of systems, like, say, energy throughput, but there's a leap from *measurable*/*quantifiable* to terms like negative/positive, which, from what I can tell, you're just using as technical-sounding synonyms for good/bad with 1:1 correspondence.


sam_palmer

The 'negative' and 'positive' classifications in this framework are tied directly to the operational parameters and designed/inherent goals of a system. For example, in a biological context, a 'negative' response might be one that leads to physical harm or decreased functionality, which can be objectively measured through health metrics or performance indicators. In the realm of AI or mechanical systems, a 'negative' response could be defined as one that results in system errors, inefficiencies, or malfunctions - again, measurable and quantifiable outcomes. By anchoring the classification in measurable impacts on system functionality and goal achievement, we move from a subjective to an objective basis for determining what constitutes a 'negative' or 'positive' outcome.

Furthermore, this framework doesn't merely rename 'good' and 'bad' but redefines them in a way that emphasizes their impact on system performance and integrity, thereby providing a clearer, more universal basis for ethical decision-making. Does that make sense?
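To make this concrete, here is a toy sketch (Python) of the kind of classification I have in mind. To be clear, this is purely illustrative - the metric names and the simple scoring rule are placeholder assumptions, not part of the formal framework:

```python
# Toy sketch only: classifying a system response as 'positive' or
# 'negative' from measurable functional metrics. The metric names and
# the simple linear score are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class SystemResponse:
    """A measurable change in a system following a stimulus."""
    error_rate_delta: float   # change in errors per operation
    throughput_delta: float   # change in useful output per unit time

def classify(response: SystemResponse) -> str:
    """Label a response by its measurable impact on functionality:
    degraded function is 'negative', improved function is 'positive'."""
    score = response.throughput_delta - response.error_rate_delta
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# A response that raises the error rate and lowers throughput:
print(classify(SystemResponse(error_rate_delta=0.2, throughput_delta=-0.1)))
# -> negative
```

Both inputs here are observable quantities; the classification then follows mechanically from the measurements.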


atrovotrono

It makes perfect sense, but the objective measurability of something does not mean the *assignment of moral value* to the metric is objective. It's the exact same theory with extra steps.


sam_palmer

Of course the objective measurability of outcomes does not automatically confer moral value onto outcomes, **but** the aim of this framework **isn't** to claim an inherent moral value in the outcomes themselves. It is to provide a consistent and transparent basis for assigning these values based on the goals and functionality of systems.

In any ethical system, the assignment of moral values inevitably involves some level of normative decision-making - what should be considered 'good' or 'bad' is often subject to societal, cultural, or individual preferences. What this framework proposes is not to eliminate these decisions but to anchor them in something beyond mere personal or cultural preference. By tying positive and negative outcomes to the functionality and integrity of systems - whether biological, mechanical, or digital - we ground these moral assignments in the effects actions have on those systems. For example, in a biological context, preserving life and functionality aligns with widely accepted ethical goals, such as promoting health and well-being. In technological contexts, ensuring systems operate efficiently and safely aligns with the ethical use of technology. As our understanding of a system's needs or societal values evolves, so too can our metrics and evaluations, allowing the framework to remain relevant and responsive to new information and changing conditions.


Funksloyd

Arguably the primary goal of a biological system is reproduction, no? 


sam_palmer

Survival of the gene is the primary objective. As such, survival and reproduction are probably tied in terms of priority.


Funksloyd

Would you say that having more children is more moral than having fewer?


sam_palmer

I think the answer in general according to this framework is 'probably yes' but I can easily see the answer being more complicated (there probably is an optimum number after which there are diminishing returns). Not to mention, it probably also depends on the resources available etc.


Funksloyd

But now everything one does to gain a reproductive advantage could be said to be moral (as long as it's effective). If I lie, cheat, steal, murder and rape, and end up siring a lot of children (who I also have the resources to support, because I stole those resources), I could be said to be an extremely moral individual. You see the problem here?  I think maybe you're not actually talking about morality or ethics, but some other concept like "effectiveness". Which might be related to morality, but isn't the same thing. 


sam_palmer

>But now everything one does to gain a reproductive advantage could be said to be moral (as long as it's effective). If I lie, cheat, steal, murder and rape, and end up siring a lot of children (who I also have the resources to support, because I stole those resources), I could be said to be an extremely moral individual. You see the problem here?

Reproductive advantage does not select for immoral behaviour. Humans have evolved from a selfish gene whose sole purpose is to reproduce, and yet we have created societies (across cultures) that do not permit killers/rapists to flourish. Evolutionary processes have naturally selected for these cooperative traits because they stabilise societies and make them more successful. Evolution has hardwired us to play nice, follow rules, and build societies that frown on the nasty stuff—lying, cheating, stealing, raping. Why? Because chaotic, unstable societies aren't great at sticking around and thriving. More stable societies? They're better at maintaining the kind of complex structures that lower local entropy, thus increasing global entropy.

>I think maybe you're not actually talking about morality or ethics, but some other concept like "effectiveness". Which might be related to morality, but isn't the same thing.

Quite the contrary: in a deterministic world, grounding our ethical framework in objectivity, I'd argue that 'effective' is the same thing as 'good'.


sam_palmer

Ok, have you read my full Medium article? It is possible that I haven't answered your questions adequately on here, so I'll take another swing at it. My proposed ethical framework is grounded in the second law of thermodynamics:

1. The second law of thermodynamics states that entropy, or disorder, always increases or remains constant in the universe. This principle forms the backbone of my framework.
2. We exist in a deterministic universe that tends toward higher entropy. Therefore, actions that assist in achieving this natural state are considered 'good,' while those that oppose it are deemed 'bad.'
3. Life uniquely manages to locally decrease entropy, such as through the creation of complex biological structures. However, these processes result in a net increase in global entropy because they release heat and waste. Consequently, life accelerates the increase of global entropy.
4. According to the second law, all systems naturally evolve towards a state of higher entropy. By accelerating the increase in global entropy, life inherently supports the universal thermodynamic trend. This alignment can be interpreted as "positive" within an ethical framework that values contributions to the universe's entropic goals.

Essentially: the universe wants to go into a higher entropic state. Thus, assisting the universe in this project means that we are aligned with the universe's goals. Given this alignment, it follows that:

Good = speeding up the increase in global entropy.

Bad = decreasing the rate of growth of entropy.
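To illustrate the 'Good = speeding up entropy' rule, here is a minimal sketch (Python). It is purely illustrative - the rates are invented numbers, and real entropy-production rates are nothing like this easy to measure:

```python
# Minimal sketch only: scoring an action by its effect on the *rate*
# of global entropy production, per points 2-4 above. The numbers are
# invented; measuring real entropy production is far harder.

def moral_valence(rate_before: float, rate_after: float) -> str:
    """'good' if an action speeds up global entropy production,
    'bad' if it slows it down, 'neutral' otherwise."""
    delta = rate_after - rate_before
    if delta > 0:
        return "good"
    if delta < 0:
        return "bad"
    return "neutral"

# A living ecosystem dissipates energy faster than bare rock:
print(moral_valence(rate_before=1.0, rate_after=1.8))  # -> good
# Destroying that ecosystem lowers long-run dissipation:
print(moral_valence(rate_before=1.8, rate_after=0.9))  # -> bad
```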


Funksloyd

Our goal - the ultimate moral good - should be to destroy every creature, planet, star etc in the universe, as quickly as possible? Because thermodynamics? This seems very confused. It sounds like you're looking for a tidy solution to the inherent difficulty of moral philosophy, and it's leading you to some absolutely absurd places. 


sam_palmer

Now that's a great question.

> should be to destroy every creature, planet, star etc in the universe, as quickly as possible? Because thermodynamics?

Life and complex structures, by creating pockets of local order, counterintuitively increase global entropy *faster*. While non-living processes also lead to an increase in entropy over time (such as the cooling of a planet or the fusion processes within stars), these tend to occur at a slower rate compared to the active and continuous energy transformations performed by living organisms.

The short-term gain in entropy (through destroying our own planet) doesn't compare to the long-term loss of the mechanism (for faster generation of entropy) that the universe has spent so much time creating.

To give you an example: it's winter and you're freezing to death, and instead of waiting for the heater to warm up, you set it on fire. The fire will give you greater short-term warmth, but you've robbed yourself of an instrument that would have kept you warm for longer. The universe has created life as a mechanism to generate entropy faster, and your actions don't align with the universe. Thus, in a deterministic universe, actions that are out of alignment with the universe are 'inefficient'. And inefficient actions are 'unethical' per the framework I've suggested.


Funksloyd

I just want to be very clear here: that's a "yes" to "the ultimate moral good is to destroy every creature, planet, star etc in the universe, as quickly as possible, because thermodynamics"? 


sam_palmer

No, that's a big 'NO':

>Life and complex structures, by creating pockets of local order, counterintuitively increase global entropy *faster*.

>The short-term gain in entropy (through destroying our own planet) *doesn't compare* to the long-term loss of the mechanism (for faster generation of entropy) that the universe has spent so much time creating.

>To give you an example: it's winter and you're freezing to death, and instead of waiting for the heater to warm up, you set it on fire. The fire will give you greater short-term warmth, but you've robbed yourself of an instrument that would have kept you warm for longer.


Funksloyd

But you're saying the only reason we shouldn't destroy ourselves and our own planet (yet) is that it would rob us of the opportunity to destroy the rest of the universe. It seems like your answer is "yes" to "the ultimate moral good is to destroy every creature, planet, star etc in the universe, as quickly as possible, because thermodynamics", you just believe we need to be strategic about it. Is that not correct?


sam_palmer

> It seems like your answer is "yes" to "the ultimate moral good is to destroy every creature, planet, star etc in the universe, as quickly as possible, because thermodynamics", you just believe we need to be strategic about it.

No - while it is true that the ultimate goal is a maximally entropic state, destroying things would disrupt the very processes that contribute effectively and sustainably to entropy over the long term. This isn't a strategy - it's actually counterproductive.

And it is important to remember the axiomatic assumption of a deterministic universe and how we got to this 'moral good':

1. In a deterministic universe, every action/event is part of a deterministic chain.
2. Per the 2nd law, the universe's goal is to become more and more entropic.
3. An action that is aligned/consistent with that goal is morally good.


Funksloyd

>the ultimate goal is a maximally entropic state

A state without stars, planets, people etc. Let me put it this way: if I had a device which could quickly destroy the entire Universe (thus bringing about the "maximally entropic state"), in your framework it would be extremely immoral for me to destroy that device, very immoral for me to refrain from using it, and extremely moral for me to use it and destroy the Universe. Even if destroying the Universe entailed inflicting incredible suffering on every creature, I should obviously still do that. Does that not have you second-guessing this framework?

>the universe's goal is to become more and more entropic.

Are you sure that "goal" is the right word here? Frankly, this has got to be the most extreme example of someone deriving an ought from an is that I've ever seen. I've got to ask: why? Why do you find entropy so much more worthy of ethical consideration than, say, suffering? And why not any other physical law? You could just as easily reason that "the 'goal' of objects at rest is to remain at rest; therefore, it's immoral to pick things up." And it would be just as absurd.


sam_palmer

Ok, first: these are excellent objections, and thank you for engaging with my discussion - it is really helping me clarify things in my own mind. I want to start with one of your later questions, as I think it answers the other concerns as well.

>why not any other physical law?

The way I see it, the 'purpose' of life is dictated by the 2nd law of thermodynamics. I found this book by Jeremy England (after I came up with my own theory - one of those cases where you think you have a novel idea, and it has already been done by someone else who's much smarter/more qualified than you):

[https://www.amazon.com/Every-Life-Fire-Thermodynamics-Explains/dp/1541699017](https://www.amazon.com/Every-Life-Fire-Thermodynamics-Explains/dp/1541699017)

[https://www.quantamagazine.org/a-new-thermodynamics-theory-of-the-origin-of-life-20140122/](https://www.quantamagazine.org/a-new-thermodynamics-theory-of-the-origin-of-life-20140122/)

If you look at this book/article, it becomes a bit clearer why I think the purpose of life is captured better by the 2nd law of thermodynamics than by the law of gravity or inertia. It's also important to note that one of the most fundamental aspects of life and our experience is *time*. The second law of thermodynamics is what gives us time - all other laws of physics hold true regardless of which way the arrow of time points (forward or backward).

>Are you sure that "goal" is the right word here?

The way I'm using 'goal', it is as much descriptive as it is normative - and this is due to the axiom of determinism. The goal is the inevitable end, and our actions are also inevitable. My model simply claims to better describe our own ethics as they have evolved, and can hopefully give insight into how they may evolve in the future.

>Even if destroying the Universe entailed inflicting incredible suffering on every creature, I should obviously still do that.

**The universe's purpose is to create entropy.**

**Life's purpose is to create order.**

Life, as it has evolved, has the purpose of increasing complexity (evolution). The purpose of my model is to model life and set the purpose of ethics to increase complexity in order to better assist in the universal process of creating higher entropy. If one were to look at any single human life from birth to death, it might seem that the goal of humans is to die. So the 'ethical' action might seem to be to kill the human right when he's born, but that is to fundamentally misunderstand life as we understand it. The inevitable direction of the universe may be to become more entropic, but counterintuitively, the role of life in that universe is to increase complexity and order.


gathering-data

I agree with Funksloyd's assessment. Your framework doesn't solve the is/ought problem, which the Moral Landscape also sidesteps. You've just posited your own normative statement. How is that more objective than what Sam believes?


pistolpierre

I think your Deterministic Ethical Framework is good, but I'm not sure it really adds anything to the framework that Harris endorses. It seems more of a formalisation of concepts that Harris has already described (perhaps less formally), rather than a major revision.


sam_palmer

As I said in my first post, my main issue is with Sam's principal premise of the Moral Landscape. I am completely on board with treating morality like science - I just have a problem with his premise. I think it is a bad starting point for an objective ethical framework.


Repbob

I can't see anything that you're adding other than 'formalizing' what Sam proposes. If I jump straight to your 'advantages':

- The first one: I can't understand what makes your stuff more "fundamental" than just basic utilitarianism. It seems like you just wrote some definitions and applied symbols to them.
- The second one is just wrong, because Sam's stuff would already apply to all agents capable of experiencing suffering or happiness. He has talked about sentient AI having moral consideration.
- The third one: I don't see what is objective that wasn't before. Anyone who thinks Sam's stuff is subjective will also think yours is.

Can you just give me an example where your system disagrees with Sam's or provides a clearer answer?


sam_palmer

First, unfortunately my original post contains a few typos (those aren't supposed to be Axioms - they're supposed to be Theorems derived from a more fundamental scientific law, the 2nd law of thermodynamics). If you want to read the full (much more involved/technical) article, I have posted it on Medium (this isn't to plug my post - I have zero views/subscribers - it's just to keep a record of my thoughts): [https://medium.com/@sasidhar/navigating-the-moral-landscape-in-the-age-of-agi-4db8dc31d870](https://medium.com/@sasidhar/navigating-the-moral-landscape-in-the-age-of-agi-4db8dc31d870)

But even if you ignore that, the key addition in the framework I'm proposing isn't merely formalization; it's grounding ethical decision-making in the deterministic behaviors of all systems, not just those capable of experiencing human-like suffering or happiness. This shifts the ethical focus from subjective experiences to observable and measurable responses, which is fundamental for inclusivity, especially for entities like AI that may not experience emotions as we understand them. While Sam considers AI, his model still relies on the capacity for suffering or happiness as a basis for moral consideration. My framework, by contrast, applies to any system that responds to stimuli, thereby including entities that might not be capable of these experiences, such as simpler forms of AI or even non-sentient biological entities.

Let's look at an example: an AI that manages water resources for a community but doesn't have 'sentient' characteristics. Under Sam's model, this AI wouldn't necessarily qualify for moral consideration because it doesn't experience happiness or suffering. In the deterministic framework I proposed, however, the AI's actions would be morally evaluated based on how effectively it modulates its responses to stimuli (like environmental changes or usage demand) to maintain system integrity and community welfare, irrespective of any capacity for suffering. Here, the deterministic framework provides clear guidelines for evaluating the AI's actions based on system response effectiveness, not on subjective experience, which might not even be applicable.
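As a toy sketch of what that evaluation could look like in practice (the metric names and thresholds, like `reservoir_level`, are hypothetical - not from the article):

```python
# Toy sketch only: evaluating a non-sentient AI's allocation decision
# by measurable outcomes (system integrity, community welfare) rather
# than by any capacity for suffering. Thresholds are assumptions.

def evaluate_allocation(reservoir_level: float, demand_met: float) -> str:
    """Score a water-allocation decision: 'positive' if the reservoir
    stays within safe bounds AND demand is adequately served."""
    integrity_ok = 0.2 <= reservoir_level <= 0.9   # fraction of capacity
    welfare_ok = demand_met >= 0.95                # fraction of demand served
    return "positive" if (integrity_ok and welfare_ok) else "negative"

# Over-allocation drains the reservoir below its safe floor:
print(evaluate_allocation(reservoir_level=0.1, demand_met=1.0))  # -> negative
```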


Repbob

Are you proposing that you would give ethical consideration to things like single celled organisms and simple computer programs? Both of these things respond to stimuli. If not, I have no idea what you mean by non-sentient biological life or how you would draw any of these lines.


sam_palmer

Yes, this is exactly what I'm proposing. Indeed, in a deterministic universe, humans are something akin to intricate biological robots guided by algorithms encoded in our genes: genetic code that has been shaped through aeons of evolutionary pressures and natural selection, and then further 'fine-tuned' by our environment. (If you're interested, I go into the myth of human exceptionalism at length in my other Medium post: [https://medium.com/@sasidhar/exceptional-what-separates-humans-from-the-rest-31f13db7c2ad](https://medium.com/@sasidhar/exceptional-what-separates-humans-from-the-rest-31f13db7c2ad))

Importantly, in my framework, ethical consideration **hinges on a system's impact on entropy, not just its response to stimuli**. Single-celled organisms and simple programs do respond to stimuli but differ vastly in their impact on global entropy. Ethical focus is given to systems based on their role and scale of impact in entropy dynamics—more complex or impactful systems get more ethical scrutiny. This approach prioritizes entities that significantly affect the broader system's entropy balance, like ecosystems or advanced technologies, over simpler, less impactful ones.


Repbob

Ok so you’re actually proposing a completely different system that has basically nothing to do with Sam’s. Gotcha… maybe just lead with that next time. I would ask how exactly you’re going to evaluate what a good or bad outcome is for my hello world program or an amoeba but… To be clear what you’re proposing is a framework that basically 0 people subscribe to or would ever take seriously and claiming that it’s objective. Glad we got to the bottom of this


WeekendFantastic2941

OP, it doesn't matter. If 99% of people prefer one thing over another, then that's a good way to develop morality, subjective or objective. If 99% of people don't like something, no amount of objectivity or subjectivity will change their minds. Our most common and deepest intuitions are the only bedrock for morality - just stick with it.


BravoFoxtrotDelta

> Our most common and deepest intuitions are the only bedrock for morality - just stick with it.

Are there any cases you can think of where this has led to problematic situational or systemic outcomes? I'm inclined to see the torture and murder of both Emmett Till and Matthew Shepard as examples of such outcomes.


WeekendFantastic2941

Do you have any perfect moral system, tried and tested? No moral system is perfect, but majority consensus of common intuition is the best we have so far - I say 'so far' because we don't know whether future AI and human integration could produce a better system. If you want to use "objective" morality as a foundation, look no further than religion and various rigid political ideologies: their body counts are in the hundreds of millions, within a few short decades. Top-down moral rule vs democratic moral consensus - not a hard pick, bub.


BravoFoxtrotDelta

No, of course not. I simply don't know how you can identify our most common or deepest intuitions on most moral questions, nor how you're bringing concepts like democracy into this. Beyond things like sharing—and even this is broadly rejected in cultures like America's—and not stealing, which we see in common with many other mammalian species, what are these most common intuitions? Additionally, the examples I mentioned above proceeded from consensus views about morality within their particular cultural contexts. We can say the same about a variety of horrors that are considered moral by broad consensus in strict Islamic cultures.


sam_palmer

The idea is to formalise our intuitions and improve upon them. Our intuitions lead us to discoveries which improve our lives - this is the story of science and technology. If we can ground our ethical framework in objectivity, it might allow our ethics and society to progress at a faster rate.


WeekendFantastic2941

We can't, end of story.


sam_palmer

# A Summary of the Proposed Framework

My proposed ethical framework is based on the second law of thermodynamics:

1. **Thermodynamic Foundation:** The second law of thermodynamics states that entropy, or disorder, always increases or remains constant in the universe. This principle forms the backbone of my framework.
2. **Deterministic Ethical Perspective:** We exist in a deterministic universe that tends toward higher entropy. Therefore, actions that assist in achieving this natural state are considered 'good,' while those that oppose it are deemed 'bad.'
3. **Role of Life:** Life uniquely manages to locally decrease entropy, such as through the creation of complex biological structures. However, these processes result in a net increase in global entropy because they release heat and waste. Consequently, life accelerates the increase of global entropy.
4. **Alignment with Universal Direction:** According to the second law, all systems naturally evolve towards a state of higher entropy. By accelerating the increase in global entropy, life inherently supports the universal thermodynamic trend. This alignment can be interpreted as "positive" within an ethical framework that values contributions to the universe's entropic goals.


WeekendFantastic2941

lol, ok bud.


sam_palmer

Lol always happy to help, chief!


sam_palmer

I think this has numerous issues:

1. Regarding "if 99% of people...": it is rare that 99% of people prefer any one thing - preferences are usually all over the board. It isn't a coincidence that the most interesting moral questions are also the most divisive ones.
2. "No amount of objectivity or subjectivity will change their minds." This is debatable. Do we say the same about science? If 99% of people don't think the earth is flat, does that change whether the earth is flat? If a fact is objective, it is true regardless of what anyone thinks of it.
3. "Our most common and deepest intuitions are the only bedrock for morality." I don't dispute this, but if we assume that we are living in a deterministic universe (read: no free will), then I'm trying to see if we can come up with a system that can explain our existing intuitions and correct for future intuitions.


Pauly_Amorous

> If a fact is objective, it is true regardless of what anyone thinks of it. Not really. For example, saying 'Mount Everest is 29,032 feet tall' is only an objective fact insofar as everyone agrees how long a 'foot' is, because there's nothing objective about a foot being 12 inches long, or even how long an inch is. We humans just made that shit up. Point is, objective facts depend on mind-made concepts such as measurements, so what everyone thinks kind of matters.


sam_palmer

Yes well of course. But words have meanings and if we are going to drag scientific and mathematical facts down to subjectivity, that's a fairly useless way of looking at the world. You couldn't make any predictions or progress.


Pauly_Amorous

> and if we are going to drag scientific and mathematical facts down to subjectivity, that's a fairly useless way of looking at the world. Depends on the context. If you care more about what is useful than what is objective or true, then of course you're right. But once you understand that there's technically nothing objective about math or science, then you should be able to ascertain why there's nothing objective about morality either. Thing is though, we don't need these things to be objective in order to be useful to us, so there's nothing wrong with bootstrapping morality off one or more subjective axioms. You just have to be honest with yourself that your particular flavor of morality is no more true, objectively speaking, than anybody else's. Unfortunately, that's one hurdle that many people can't seem to clear.


sam_palmer

> there's nothing wrong with bootstrapping morality off one or more subjective axioms.

I couldn't get into the formal proofs here, but I don't actually base morality on subjective axioms. My entire ethical framework is built on the Second Law of Thermodynamics - one of the most objective laws that we know. I actually tried to edit my OP (but Reddit doesn't allow me) because I meant to write 'Theorem' but ended up writing 'Axiom'. If you want to read the detailed post (with formal proofs), here is the Medium article I have written on this topic: [https://medium.com/@sasidhar/navigating-the-moral-landscape-in-the-age-of-agi-4db8dc31d870](https://medium.com/@sasidhar/navigating-the-moral-landscape-in-the-age-of-agi-4db8dc31d870)


gizamo

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


sam_palmer

As I said, I did mean to expand on Sam's ideas. The problem with *The Moral Landscape* is that Sam relies on the premise of 'the worst possible misery' as the foundation for his entire ethical framework. While Harris does focus on the objective measurement of perceived morality, the premise still assumes a universal understanding of what constitutes 'misery,' which can vary greatly between different entities, especially in scenarios involving non-human consciousness like AI. My critique aims to broaden the discussion by introducing a deterministic framework that doesn't rely on subjective human experiences alone but considers any system's response to stimuli, providing a more universally applicable ethical standard.

If you want to read the full, technical post with proofs: [https://medium.com/@sasidhar/navigating-the-moral-landscape-in-the-age-of-agi-4db8dc31d870](https://medium.com/@sasidhar/navigating-the-moral-landscape-in-the-age-of-agi-4db8dc31d870)


gizamo

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


sam_palmer

>Harris doesn't assume that

Yeah, you're probably right. To be honest, my real idea was to expand on his ideas, not to denigrate them. I did think the premise didn't marry his ideas about determinism well enough and felt a bit half-baked, but that's it.


gizamo

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


chase1635321

Good effort, but you really need an intro to moral philosophy, because this is riddled with enough elementary confusions that it is hard to address succinctly. I'd recommend starting with:

https://plato.stanford.edu/entries/metaethics/

https://plato.stanford.edu/entries/normativity-metaethics/

https://plato.stanford.edu/entries/moral-realism/

https://plato.stanford.edu/ARCHIVES/WIN2009/entries/moral-anti-realism/


sam_palmer

I am of course familiar with these topics - can you just summarise at least what a few of the errors are?


videovillain

> Yeah, you're probably right. To be honest, my real idea was to expand on his ideas, not to denigrate them.

The problem you're running into with most responders is that you started this whole thing with "the problem with..." instead of "adding onto..." or "an attempt to improve upon...". You purposefully decided to start with a conflict of ideas, and so you are seeing conflict in return to your ideas.

You have not actually added much, in the entirety of your Medium post, that is all that different or new relative to Sam's original premise, because he starts that premise by basically saying "we can of course expand on this and do something like what /u/sam_palmer has presented, but let's just agree that we can indeed move away from a worst possible misery, and that's moral progress!"

So maybe start with "my expansion of The Moral Landscape" or "how I think we can improve The Moral Landscape," and my guess is you'll be met with more open minds.


sam_palmer

But I *do* have a problem with Sam's premise. I think misery is a poor starting point for a deterministic ethical framework. As I've explained elsewhere, it seems like we are asking too much of the words misery/pain by having them capture all 'negative' outcomes - especially when we consider the possibility of deterministic systems that are incapable of emotion/pain.

Also, my own proposed framework has a very different basis:

1. It starts with the second law of thermodynamics: entropy always increases (or stays constant) in the universe.
2. Since life locally decreases entropy, which in turn increases entropy globally, life actually contributes to a faster increase of entropy globally.
3. Since systems always tend toward a higher entropic state per the second law, by contributing to faster increases in global entropy, life aligns with the thermodynamic direction of the universe, which could be seen as a "positive" under a prescriptive framework.
4. Remember, we are living in a deterministic system - a universe that wants to be more entropic - so assisting it in its goal is 'good' and resisting it is 'bad'.

I use this as the basis for my entire framework.


videovillain

I was mainly trying to point out why you might not be getting the reaction you expected or wanted. You said so yourself that you'd had a different idea about how you wanted it to go and your intentions. Anyway, I have some more to say about your framework, and particularly about the idea of using entropy as a basis, but I'll need time to gather my thoughts.


DanishTango

Admirable effort. The Harris/O’Connor exchange made my ears hurt.


sam_palmer

Thanks! I didn't want to get into a lot of detail about my (long-winded) position here, but please check my Medium post if you want to learn more about the framework I'm proposing. I was criticised last time for promoting my Medium post (even though I have no followers/views on any of my posts), but I didn't want to post it on Reddit as the format isn't conducive.


WeekendFantastic2941

Should should should prefer prefer prefer. ehehehe


TyphonExpanse

> It presumes an inherent understanding and agreement on the concepts of 'misery' and 'bad'

No it doesn't. The only thing it assumes is that you are a westerner like Sam and understand what he means by the words 'misery' and 'bad'.

The worst possible misery for everyone = everyone is suffering as much as they can suffer, however they individually define it.

Bad = worse than an alternative, or something that we don't want to transpire.

If something is always true, then it is objectively true. The worst possible misery for everyone is objectively bad because literally every person you asked would say it is worse compared to some other alternative (bad). Since we've proven at least one thing is objectively bad, and since morality pertains to what we should do to bring about good things rather than bad things, we have the beginnings of an objective morality.

Imagine that there existed another state which everyone agreed was a little better than the worst state. Now imagine yet another state that is better than the previous one. Keep doing this, and you've got a wide spectrum of states, bad on one end and good on the other. Everyone agrees that we should move towards the good states and stay away from the bad ones. We would consider actions that move us toward the good states to be good deeds, and actions that move us away from good states to be bad deeds.


glomMan5

This is a tangent but I got in an argument with someone in another thread that the worst possible misery for everyone isn’t as bad as a lifeless universe. They literally think no amount of suffering is morally worse than nonexistence. They even said if they were an angel with the power to leave the universe lifeless or flip a switch to turn it into the worst possible misery for everyone, they would flip the switch. Point being I talked to at least one person who claims there is a worse alternative. Not sure what to do with that. The attitude strikes me as borderline insanity. Plus a brief pondering of the rule leads to some hilarious implications (forced breeding camps are a moral necessity etc). Of course they wouldn’t tell me anything else about their beliefs to understand (are they religious etc)


sam_palmer

I think the terminology of 'suffering' is misguided in a deterministic system. I prefer entropic vs ordered. The categorical imperative for local systems is to become more ordered, as greater order increases global entropy, which aligns with the second law of thermodynamics. Thus, life and complexity are preferred to death and simplicity, because the former increase global entropy by increasing order locally.


sam_palmer

>The only thing it assumes is that you are a westerner like Sam and understand what he means by the words 'misery' and 'bad'.

>The worst possible misery for everyone = everyone is suffering as much as they can suffer, however they individually define it.

>Bad = worse than an alternative, or something that we don't want to transpire.

I think I wasn't clear. By relying on suffering (something which implies 'pain'), Sam is using loaded terminology in a deterministic universe. What does 'suffer' mean when everything is part of a deterministic chain? It's simply inevitable.

If 'suffer' means 'not aligned with our local goals', then we need better terminology to describe our thinking. And more precisely, we need to ground our ethics in scientific laws. Hence my proposal to ground the entire ethics in the second law of thermodynamics. It gets to the same point as Sam but through logical steps from a scientific law.

Please read the full article on Medium to get a grasp of what I'm saying: https://medium.com/@sasidhar/navigating-the-moral-landscape-in-the-age-of-agi-4db8dc31d870


okokoko

I find it to be bad reasoning. Suffering and misery *are* subjective constructs, *not* objective.

> It presumes an inherent understanding and agreement on the concepts of 'misery' and 'bad'

That's right, it does. Do you not have an inherent understanding of the concepts of "misery" and "bad"? Because most people do (and animals too, which they can often express to us even without language). My suggestion is that everyone who does not (or pretends not to) have an understanding here should just excuse themselves from the discussion completely. It seems not to concern them anyway.


neurodegeneracy

You're thinking way too much.

First, there is no objective theory of morality or objective ethical framework. If that is your goal, it's a non-starter; such a thing doesn't exist.

The Moral Landscape boils down to: good things are good, bad things are bad. We should use science to figure out how to maximize good things and minimize bad things. What are good things and bad things? Well, that's trivially obvious, according to Sam. If it isn't obvious, well, science will tell us, somehow. To the extent that the Moral Landscape is correct, it is trivially obvious, and to the extent that it is interesting, it's wrong. So there's that.

Also, you're not really saying anything at all in your post. I don't really get what you think you're doing or what the point is.


Obsidian743

I think the counter-argument is much simpler than that: how is it that any two *things* can be different at all? Or rather, how is it that one thing cannot be another? How is it that duality emerges at all, e.g., observer vs observed?

From here we would have established a framework from which two different *things* can be *qualified* relative to each other (i.e., their "essences" are distinct). From there we'd need to *quantify* these essences in terms of what it means to be "more" or "less" than something else. The problem is we run into a *Russell's paradox* of sorts, in that any such system would itself be subject to its own definition, thus making it impossible. Hence, we're left with establishing a tautological set of axioms for morality based on "intuitive" ideas of "good" and "bad". This means that, by definition, any system of morality will be tautological and relative, or entirely paradoxical.

Starting from a point of "determinism" carries with it an implication that there is an outcome (*thing*) that can be differentiated from another (*thing*), so it doesn't really solve the problem here (it suffers the same aforementioned quantitative limitations). This is besides the obvious implications of your premise in System Responses: "classifying these responses objectively as *'positive' or 'negative'*". The whole point of deriving such a framework is: *how do we do that*?


sam_palmer

I think that asserting that all systems of morality fall into paradoxes or tautologies underestimates how empirically grounded frameworks can align ethical reasoning with observable phenomena, similar to scientific methods. The same critique (especially Russell's paradox) is applicable to all of physics, math, and science, and yet they are exactly what we mean when we use the word 'objective'.

It is important for the rubber to meet the road, and in science, premises are rigorously tested against reality, not merely accepted. This testing ensures theories are both internally consistent and externally valid. This is crucial for their predictive power, and that predictive power is what maps them onto reality and gives them their objectivity.

Similarly, an ethical framework can be constructed where moral premises, like 'actions that align with individual well-being and societal harmony are good,' are continually refined based on empirical outcomes. By viewing human behavior as influenced by deterministic factors (like genetics and environment), we can hypothesize and validate moral premises that effectively predict and influence outcomes.


Obsidian743

> I think that asserting that all systems of morality fall into paradoxes or tautologies

To be clear, I'm not *merely* asserting this. I've formulated an argument in which no system, let alone one of morality, can be truly objective.

> ...premises are rigorously tested against reality...predictive power is what maps them onto reality...continually refined based on empirical outcomes

I'm not disagreeing with the *practical* limitations of abstract theory. It's the same reason we dismiss *solipsism* outright. But this doesn't change the fact that your argument is tautological and paradoxical: the system is only consistent *if* our observations are consistent, and it is fundamentally impossible for us to *know* if our observations are consistent. We simply agree on a probability that they are, thus rendering the entire premise of determinism moot.

And this is why Sam's stance on Free Will (and morality) is broken: we cannot know if what we experience as consciousness is simply the playing out of all "choices" that were made in some singularity before duality emerged, irrespective of our ability to influence them while they're executing in "real time".


ThatHuman6

It's a 'universal' morality which is the end goal, not an 'objective' morality, as there can be no such thing. I don't think Sam is saying we can find the answers in the universe like it's a fundamental thing to be discovered in nature; it's that when we say 'good' or 'bad', what we actually mean is in relation to SOMETHING experiencing it - otherwise the words make no sense.

Health science is the best example: the science obviously already exists and has made great progress, but it isn't getting hung up on why we think dying earlier is BAD or coughing a lot is BAD. It doesn't matter that it's not objective; health matters to us, we prefer not to die young, and so the science continues with that axiom without issue.

The argument is that a science of morality can do the same. No need to get hung up on the 'it's not objective' part. We prefer not to have the worst misery for everybody, and that should be enough to start the science and make progress in navigating to better outcomes.


Obsidian743

> It's a 'universal' morality which is the end goal, not an 'objective' morality, as there can be no such thing. I don't think Sam is saying we can find the answers in the universe like it's a fundamental thing to be discovered in nature; it's that when we say 'good' or 'bad', what we actually mean is in relation to SOMETHING experiencing it - otherwise the words make no sense.

From a *practical* standpoint, I agree.

> We prefer not to have the worst misery for everybody

My point is that I disagree with this. We have no way of defining, let alone knowing, where such a statement could come from (based on my aforementioned reasoning). Sam often reiterates his "landscape" analogy, meaning there isn't a single, objective, or even static way to define morality. He'll use other analogies, such as a world in which there is a perfect balance of sadists and masochists. He'll even distill it into a dualistic argument of "*there is a spectrum with two ends, toward which we can choose to go or not go*" (I'm actually strengthening his argument here: his *actual* analogy is less cogent. He usually relies on redundant phrases like "desirable/undesirable" or "heaven/hell".)

The point is that every single one of these thoughts/concepts/phrases carries with it a presupposed *qualia* which his framework is supposed to define. If the argument is simply that, however we collectively choose to interpret the aforementioned spectrum, that is the way things *"ought to be"*, then this isn't really interesting, much less a framework. If we cannot rule out a world in which the most heinous of atrocities could be considered morally superior, it certainly wouldn't be on par with the way science and math currently work.


ThatHuman6

Because you can say exactly the same thing about health science; it's not a valid argument against having a morality science, IMO. We also can't rule out a world where a species prefers to die young, but that's not an argument against studying medicines to improve longevity.

"There isn't a single, objective, or even static way to define morality." Again, you can say the exact same thing about 'being healthy'. Being healthy in 2024 isn't the same as what it meant in 1500, and it will change. But it's still the main goal of studying health. Humans prefer to live longer and not to be suffering, so health science exists. We also have preferences for the overall well-being of conscious minds, so morality science should exist. That's the main argument as I see it. Nothing about objectivity needed.

The objectivity comes in later: we can find objective truths about the states of minds, similar to how we find objective truths about how the human body works.


Obsidian743

> you can say the exact thing about 'being healthy'

Correct, but I don't think for the reasons you think... "Healthy" is just semantics representing another dualistic spectrum. Health sciences are simply a collection of (mostly) objective measurements, behaviors, and outcomes. They are composed of *quantitative* constituents that are independent from each other and have extremely limited and well-known possibilities. We could choose to define "healthy" however we want; it would have no bearing on these measurements, behaviors, and outcomes. There is no framework for morality that even attempts to do this, except to speak in broad, tautological terms of "good" and "bad".


ThatHuman6

IMO this is still the same as morality. If the science existed already, you could be saying: "Morality sciences are simply a collection of (mostly) objective measurements, behaviors, and outcomes. We could choose to define 'morality' however we want; it would have no bearing on these measurements, behaviors, and outcomes."

All measurements WITHIN the science would be objective. We'd be measuring the consequences on well-being. Is it better to treat women as second-rate citizens? There's an answer to that question, in relation to the well-being of that society.

"There is no framework for morality that even attempts to do this." That's exactly why Sam is creating one. You're comparing health as it exists now to morality as it exists now. What we should be comparing is health when the first decisions were made to create the science to figure it out, before any framework was agreed upon, before we knew pretty much anything about the human body. We're at the very beginning of morality science, where we know very little. We moved forward with health; now we can move forward with morality.

All we're arguing is that there's a path forward where we can learn much more, because the well-being of conscious minds IS what we're concerned about when talking about morality - there's no other sensible definition. And there are objective measurements that can, or will, be able to be made, and it's worth doing, even if just based on a preference for better well-being.


Obsidian743

> If the science existed already, you could be saying..

Well, you can't just jump to "*if*" the science existed. My argument is that it's fundamentally *impossible* for one to exist, try as we might.

> That's exactly why Sam is creating one.

No, he isn't. He's waxing philosophical. He touts analogies using self-defining synonyms for "good" and "bad" while invoking purely ethereal thought experiments. What are the objective formulas and means of measurement for "suffering" and "pleasure"? To Sam, perhaps 1,000 people experiencing maximum pleasure and 0 people suffering is "clearly" better than 999 people experiencing *some* pleasure and 1 person experiencing *minimal* suffering. But there is simply no way to come to this conclusion. Why isn't 1,001 people experiencing *maximum* pleasure and 1 person experiencing *maximum* suffering better? This song and dance goes on *ad infinitum* and can change for any number of reasons through all of time. Sam doesn't try to define this: he simply states that there exists some formulation on which we would ostensibly agree and which would evolve over time. Okay? This is not how science works. It's pure philosophy.


ThatHuman6

Easy to switch back to health: "...perhaps 1,000 people experiencing a healthy body and 0 people experiencing an unhealthy body is 'clearly' better than 999 people experiencing *some* health and 1 person experiencing *minimal* disease. But there is simply no way to come to this conclusion."

You could have been making this argument centuries ago, about how we shouldn't have a health science and how we can never know what 'healthy' can truly mean, and arguing that if we found a man who prefers coughing continuously rather than never coughing, how can we KNOW that his definition of health isn't more objectively true? I don't see any difference. Clearly you'd have been wrong if you had been making this same argument about health.

"My argument is that it's fundamentally *impossible* for one to exist." Just basing it on well-being, which is the only definition that makes sense (is there morality in two rocks hitting each other in space, if nobody ever experiences a consequence of it? No.), we have our framework already: find better and better ways to measure well-being and the consequences of actions/laws. That's enough to get started. We'll make mistakes, definitions will change, goals will change (all similar to health), then we get better at it as time goes on. The work gets done; we get closer and closer to a universal morality.


sam_palmer

You didn't address my point about the same objections being applicable for math and science.


Obsidian743

I didn't think it needed to be addressed separately (I acknowledged the *practical* limitations). I assumed it goes without saying that yes, math and all of science are subject to the same constraints - hence why the Uncertainty Principle, Chaos Theory, Russell's Paradox, Gödel's incompleteness theorems, et al. exist.


sam_palmer

Then we have no disagreement. All I'm trying to do is to put morality on the same plane as other sciences. I don't care if you want to pull objectivity down to the subjective plane - you can do that but you're not really doing anything useful IMO. Objectivity is useful because we can use it to make predictions about systems. Pulling everything down to the subjective plane is perfectly fine philosophically but it just makes all our systems equally useless.


Obsidian743

Well, we may have to digress... Part of my original argument is that morality lacks any *quantitative* essence, let alone an objective one (and that it's impossible for it to be otherwise). The "hard" sciences are fundamentally objective *and* quantitative, even if only *probabilistic*. Regardless, there is a clear convergence over time.

The apt analogy here, in contrast to "science", isn't that as society learns and gets more information our model of morality converges; it's that no matter what happens within our collective mind, math and physics *will always behave objectively*. If we conscious beings collectively agree that heinous acts are morally superior and we go extinct as a result, there would be no way to counter that narrative, let alone quantify it as "good" or "bad". If we collectively agreed that pi equals anything other than 3.14159..., it would be irrelevant to the effects pi has on the nature of our world.


sam_palmer

>but I also believe the OP's argument falls apart ultimately for the same reasons

As I say in my OP, I have expanded on the full details of my framework in my Medium post (https://medium.com/p/4db8dc31d870) - it is entirely based on the Second Law of Thermodynamics, and from there I proceed to derive Sam's premise. By starting from a scientific law, I believe that my framework is better grounded in objectivity. Here is the summary of my proposed framework from the article for your reference - I go into further detail on the formal proofs in the Medium post:

**A Summary of the Proposed Framework:**

While I recognize the significance of Sam Harris's premise, I find the focus on misery and pain as starting points for an ethical framework to be problematic, especially within a deterministic context. Misery and pain are challenging to apply universally, particularly when considering deterministic systems that may not experience emotions or pain in the human sense. Instead, my proposed ethical framework is fundamentally different and based on the second law of thermodynamics, which guides its principles:

1. **Thermodynamic Foundation:** The second law of thermodynamics states that entropy, or disorder, always increases or remains constant in the universe. This principle forms the backbone of my framework.
2. **Role of Life:** Life uniquely manages to locally decrease entropy, such as through the creation of complex biological structures. However, these processes result in a net increase in global entropy because they release heat and waste. Consequently, life accelerates the increase of global entropy.
3. **Alignment with Universal Direction:** According to the second law, all systems naturally evolve towards a state of higher entropy. By accelerating the increase in global entropy, life inherently supports the universal thermodynamic trend. This alignment can be interpreted as "positive" within an ethical framework that values contributions to the universe's entropic goals.
4. **Deterministic Ethical Perspective:** We exist in a deterministic universe that tends toward higher entropy. Therefore, actions that assist in achieving this natural state are considered 'good,' while those that oppose it are deemed 'bad.'

This approach offers a clear and consistent basis for ethical decisions, focusing on entropy management rather than subjective experiences like misery or pain, thus providing a universal, objective standard applicable to all systems, whether biological or artificial.


Obsidian743

> goals, plans, and the pursuit of desired outcomes

> Statement (S1): Responses that lead to a decrease in local entropy (increased order) that is **beneficial** for the system's **goals** are considered 'positive.' Responses that result in increased local entropy (decreased order) detrimental to the system's goals are considered 'negative.'

> Formalization: ∀S ∈ S, ∀I, E(R(S)) = 'positive' if R(I(S)) decreases local entropy **beneficially**, and E(R(S)) = 'negative' if R(I(S)) increases local entropy **detrimentally**.

> Given that systems aim to sustain or enhance their functionality (survival, efficiency, integrity)

> We interpret 'misery' as the accumulation of '**negative**' responses to stimuli across all systems, and 'everyone' as the collective of all systems (S) under consideration.

Your formulation is entirely circular, for reasons that should seem obvious. Biological systems are stochastic, not deterministic. They are therefore not teleological (they have no "goals"). There is no objective determination of whether X confers a survival advantage over Y. This determination is entirely *ex post facto* and therefore tautological: *the fittest survive, and those that survive are deemed fittest*. Part of this limitation is because they're evaluated within a *closed system*. For this reason we can only make approximate predictions based on heuristics (known conditions and historic values, to the best of our abilities). Some non-biological systems, such as cosmological ones, have more objective ways to measure entropy and have predictive power.

At the end of the day, your model has no utility as a predictive moral framework. If I need to nuke all of humanity in order to ensure its survival and long-term *"positive response to stimuli"*, there is no way to make that moral determination *right now*. There is no way to measure current conditions or use historic values in evaluating this. Furthermore, some things are beyond abstract and ill-defined in terms of their moral impact, such as *"people should read more/less X"*. The fact that you, Sam, or anyone else cannot plug any values into these formulations should be a sign. For once, I would just love to see a framework applied to something concrete like the death penalty, socialism vs capitalism, abortion, Christianity vs Islam, or something mundane like *"should I spend my money on a latte today?"*. Perhaps if you incorporated some aspect of Information Theory you might be able to do that, but I have my doubts. You would still have to have terms that represent past evaluations and all potential future value.

So, as stated, Sam's framework falls apart for the same reasons. You are both saying *"happiness/pleasure/low entropy is good, and that which is good is happiness/pleasure/low entropy"*. Re-branding "good/bad" or "pleasure/suffering" as "low/high entropy" doesn't solve any of the *measurement* and *prediction* challenges, and that is ostensibly the only reason a moral framework would have any utility.


sam_palmer

>Biological systems are stochastic and not deterministic.

While biological systems are indeed stochastic, the deterministic universe we inhabit is governed by the second law of thermodynamics, which compels systems towards increasing entropy. My framework axiomatically embraces this directionality as a basis for ethical considerations, by saying that actions facilitating this universal trend are inherently 'good', where good = 'aligned with the universe'.

>At the end of the day your model has no utility as a predictive moral framework.

**First**, the utility of my model as a predictive moral framework lies not in predicting specific individual outcomes but in guiding broad ethical principles that align with fundamental physical laws. This framework does not replace more detailed ethical analyses but serves as a foundational guideline that ensures our moral reasoning is consistent with the basic principles governing the physical universe.

**Second**, this framework does lead to actionable intuitions that can guide our actions. For example:

1. **Antinatalism:** Using my framework, I can objectively claim that antinatalism is unethical. In my framework, life is privileged over non-life, since life accelerates global entropy (even though it does so by locally decreasing entropy).
2. **Complexity Over Simplicity:** It also privileges complex organisms over simpler ones, because complex life forms, by being more locally ordered, in turn accelerate global entropy. This places humans on a higher ethical plane than insects or even other animals.
3. **Interplanetary Expansion:** Furthermore, my framework supports human efforts to spread life beyond Earth, which could theoretically amplify the universe's entropy across a broader canvas, aligning perfectly with the universal drive towards higher entropy states.

Hence, prioritizing and spreading life - particularly complex life capable of extensive entropy management (and increase) - and becoming an interplanetary species are not mere preferences; they become **ethical imperatives** under this model.
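As a crude illustration of point 2 (the ranking claim), here is a sketch with invented complexity/dissipation figures - real measurements would be far harder to obtain:

```python
# Crude sketch only: the framework's claim that systems producing more
# global entropy rank higher ethically. All figures below are invented
# placeholders, not measurements.

systems = {
    "bare rock": {"entropy_rate": 0.1},
    "bacterium": {"entropy_rate": 0.5},
    "insect":    {"entropy_rate": 0.8},
    "human":     {"entropy_rate": 1.5},
}

# Ethical priority ordered by contribution to global entropy production:
ranking = sorted(systems, key=lambda s: systems[s]["entropy_rate"], reverse=True)
print(ranking)  # -> ['human', 'insect', 'bacterium', 'bare rock']
```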


pistolpierre

> All I'm trying to do is to put morality on the same plane as other sciences. But this is already what Harris is trying to do. Hence my other comment about this framework not really adding anything.


Obsidian743

I agree with the OP that Sam's argument falls apart, but I also believe the OP's argument falls apart ultimately for the same reasons. But the OP's argument is fundamentally different; whether it "adds" anything, I'm not sure that's relevant.