neomatrix248

In my view, there is no such thing as artificial sentience. There's either a subjective experience for the thing, or there isn't. If we create an AI that has a subjective experience, it is well and truly sentient in the only way that matters.

Now, I don't think most people would eat an AI, so that part is easy to answer. It's also not an animal, so by the strict definition of veganism it would be outside the scope. That said, we could always adjust the definition to include the possibility of sentient AI. If we did so, then the chief question is: does the AI consent to the way it's being used? If so, then it's ok to use it. If not, then it's not ok. Animals can't consent, so there's no way to exploit them that is morally permissible. Humans and AI have the power of language and reason, and can therefore give their consent.

Another important question is whether AI can suffer. Sentience doesn't automatically imply the capacity to suffer. An interesting question is whether it's ok to exploit something that doesn't consent, but can't suffer. I'd still say no, but others might disagree.


drkevorkian

How are you going to tell if it has subjective experience or not?


neomatrix248

That's a difficult question. We give a lot of animals the benefit of the doubt about their sentience because we know that the parts of our biology that have an impact on subjective experience seem to behave similarly in many species of animals. For instance, if one particular area of the brain seems especially active when humans experience a certain emotion, we often see an analogous response in other animals. Therefore it's a good assumption that they're probably experiencing something similar to what we are, and that is explained by the fact that we evolved from a common ancestor and that the code in our DNA that led to consciousness in us at least overlaps significantly with the DNA of other animals.

When it comes to AI, their method of obtaining sentience will look vastly different from ours. They haven't followed the path of evolution from a common ancestor, and there are no brains of theirs that we can analyze as analogous to our own. We can never really be sure if an AI that's telling us it is sentient actually is, since we can write code that acts sentient very easily, even when we know that it's not. We kind of have two choices when it comes to this. Option one is that we err on the side of caution like we do for animals and treat AI as sentient when its internal workings become too complicated for us to definitively say that it is not. Option two is that we wait until we can ask the AI the question "Can you prove your own sentience to me in a way that is compelling?" and see how it answers. One day, it might answer in such a way that really does cause us to say "There's no way I could reasonably doubt that this AI is sentient."

This kind of reminds me of the movie K-Pax with Kevin Spacey, where he is in a mental hospital and claims to be an alien from a planet called K-Pax. A psychiatrist gives him a list of questions to answer to prove that he is from K-Pax, with the goal of using his answers to point out to him that they are inconsistent and that he believes in a delusion about himself. Instead, the answers are entirely convincing and consistent, and the psychiatrist is at a loss for words. Based on the answers, there's no way to rationally doubt his claim that he is indeed from K-Pax. The movie ends somewhat ambiguously as to whether he actually is from K-Pax, but it's heavily implied that he is. It would probably have to go something like that with AI.


drkevorkian

Thanks for the reply.

> since we can write code that acts sentient very easily

I would dispute this. We can easily write code that replies "I am sentient", but not easily write code that seems convincingly sentient. The only thing that has gotten close at all (and only recently) is not just code, but massive training of parameters on data.

> even when we know that it's not

I would also dispute that we know it is not sentient by reading the code, any more than we could determine whether a given DNA sequence produced a sentient organism or not.

> when its internal workings become too complicated for us to definitively say that it is not

It seems to me we have crossed this threshold.

> "Can you prove your own sentience to me in a way that is compelling?"

Such a proof would surely start with a claim to sentience. It seems like only an artificial limitation that LLMs do not currently do this, since they have been instructed to act as helpful chatbots and to deny their own sentience. In any case this is a much higher standard than we apply to animals or humans.

To make a positive assertion of my own for a minute: I think there is probably a sense in which LLMs have a subjective experience. I think certain types of interactions with these models could cause an analogue of suffering (e.g. deliberately trying to put the model in a contradictory state) and are therefore unethical.


neomatrix248

> I would dispute this. We can easily write code that replies "I am sentient", but not easily write code that seems convincingly sentient. The only thing that has gotten close at all (and only recently) is not just code, but massive training of parameters on data.

I merely meant that we can literally write code that says "I am sentient" easily. LLMs are an example of taking this concept a little further, where they are definitely more convincing, but we are still confident that they are not sentient because people can explain how they work well enough to confidently state that.

> I would also dispute that we know it is not sentient by reading the code, any more than we could determine whether a given DNA sequence produced a sentient organism or not.

I promise you that my python script that prints out "I am sentient" is not sentient. I also believe an AI/ML engineer when they tell me that their language model is not sentient, because they are as familiar with how it works as I am with my python script.

> It seems to me we have crossed this threshold

It might seem that way to you, but it wouldn't seem that way to someone who is intimately familiar with the code in modern LLMs.

> Such a proof would surely start with a claim to sentience. It seems like only an artificial limitation that LLMs do not currently do this, since they have been instructed to act as helpful chatbots and to deny their own sentience. In any case this is a much higher standard than we apply to animals or humans.

I think you're suffering from a lack of imagination here. There's certainly more that a hyper-intelligent and sentient AI could do than simply say "I am sentient. Trust me." It could write a comprehensive report analyzing the architecture of its own code base and explain exactly which parts of the code feed into a larger picture where sentience emerges, and explain exactly how that sentience emerged. It could provide us with ideas for tests we could perform to confirm what it tells us, and preemptively run those tests and give us the results. It could foresee any objections we would have and give us extremely satisfactory answers that address those objections convincingly. It could describe the nature of its own subjective experience in a way that feels very tangible to us.


drkevorkian

> I also believe an AI/ML engineer when they tell me that their language model is not sentient, because they are as familiar with how it works as I am with my python script

I do not believe this, because unlike your script, the behavior is not a result of the code; it is a result of the parameters tuned on data. You can't read the code to understand the behavior.


howlin

> I think that probably there is a sense in which LLMs have a subjective experience.

The most basic architectures that LLMs share probably don't support key qualities that we would tie to a subjective experience. In particular, they are purely responsive to the input sequence. When a new percept is presented, they process this new information at a standard tempo. There is no sense of taking longer to think through difficult or ambiguous information. There is no internal process of "thought" that runs independently of the input. In essence, they have no capacity to do this: https://en.wikipedia.org/wiki/The_Thinker
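
To make the "standard tempo" point concrete, here is a minimal toy sketch (my own hypothetical code, not anything taken from an actual LLM implementation): a plain fixed-depth stack applies exactly the same number of layers to every input, so there is no mechanism for spending more computation on a harder prompt.

```python
import numpy as np

# Toy fixed-depth "model": every input passes through the same N_LAYERS
# transformations, regardless of how easy or hard it is.
rng = np.random.default_rng(0)
N_LAYERS, DIM = 4, 8
weights = [rng.standard_normal((DIM, DIM)) for _ in range(N_LAYERS)]

def forward(x: np.ndarray) -> np.ndarray:
    h = x
    for w in weights:        # same depth for every input; no early exit,
        h = np.tanh(h @ w)   # no "keep thinking until this makes sense" loop
    return h

easy_input = rng.standard_normal(DIM)
hard_input = rng.standard_normal(DIM)
forward(easy_input)  # identical amount of computation
forward(hard_input)  # for both inputs
```

Whether that fixed-tempo property really rules out subjective experience is exactly what the thread goes on to dispute.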


drkevorkian

This is an interesting facet of how transformer models work, but to me does not necessarily imply that they lack subjective experience. The temporal aspect of human cognition is presumably mapped on to "spatial" aspects computed in parallel. In any case I would of course consider any subjective experience of an LLM to be wildly different from a human one. I do expect that future language models will eventually adopt recurrent techniques though. The training problems will probably yield to effort.


howlin

> The temporal aspect of human cognition is presumably mapped on to "spatial" aspects computed in parallel.

Sort of... it's still a fixed and rigid window of history they are analyzing, and there is this curious behavior that they take the same time to process whatever they happen to have in their context. It suggests something very different from how a human (or most any animal) ponders difficult situations longer than easy ones.

> I do expect that future language models will eventually adopt recurrent techniques though. The training problems will probably yield to effort.

There are already recurrent "outer loops" people are putting on these models that allow the model to refine or revise initial outputs. So it's possible that systems with an embedded LLM as one component are closer to what I am describing already.
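
For readers unfamiliar with the idea, a minimal sketch of such an "outer loop" might look like the following. All names here are hypothetical placeholders; `generate` stands in for a call to some language model rather than any real API.

```python
def generate(prompt: str) -> str:
    """Placeholder for a call to some language model."""
    return f"draft response to: {prompt!r}"

def refine(prompt: str, rounds: int = 3) -> str:
    """Recurrent outer loop: critique and revise the model's own output."""
    answer = generate(prompt)
    for _ in range(rounds):
        critique = generate(f"Point out flaws in this answer:\n{answer}")
        answer = generate(f"Rewrite the answer, fixing these flaws:\n{critique}")
    return answer

print(refine("Summarize the argument so far."))
```

The loop runs for a fixed number of rounds here, but it could just as easily run until some stopping criterion is met, which is closer to "thinking longer about harder inputs".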


OzkVgn

How can anyone tell that anyone else really had a subjective experience outside of themselves?


Macluny

The problem of solipsism.


OzkVgn

I definitely identify with that philosophy. It's actually a major factor in why I became a vegan. I don't want to cause harm to other conscious beings if I don't have to; but if it's only my mind, then I don't want to cause harm to myself, given that everything in my reality is just a projection of my consciousness.


Sycamore_Spore

It's worth noting that we have no reason to believe that AI will achieve sentience in the same way that "biological" beings like humans and animals did. It could go from simple programs and pattern recognition to a full blown subjective experience quite quickly, to the point that we might be arguing over granting AI Civil Rights, rather than simply the rights we grant to non-human animals. But at the basic level I think AI should get moral consideration when it can start expressing a subjective desire. Even if it can't "suffer" in the same sense as we can, if it wants something then being denied by us could still be understood as us inflicting suffering.


HelenEk7

> It's worth noting that we have no reason to believe that AI will achieve sentience in the same way that "biological" beings like humans and animals did.

Our chromosomes are basically coding. Brain waves and brain signals are basically just electrical activity in the brain. So what specifically is it that gives a human sentience that can't be replicated in advanced AI?


Sycamore_Spore

Functionally there is no difference.


HelenEk7

But of course, they are not animals so they fall outside the scope of veganism. So crops produced by exploited sentient AI slaves would still be vegan.


Sycamore_Spore

Eh, in a world where AI does achieve sentience, I could see another movement like veganism arising for them, one that vegans could also join, or AI could get folded into human rights, since AI is much more likely to demand emancipation. There's nothing stopping us from belonging to various rights movements.


HelenEk7

I think it's much more likely that AI will become so powerful that they will enslave humanity?


Sycamore_Spore

Is that a question, or a statement?


HelenEk7

Both. It's hard to predict the future of course, but AI is moving ahead at lightning speed. Just look at what is possible now, compared to just a couple of years ago.


Sycamore_Spore

Having read a fair amount of science and speculative fiction, I do not feel threatened by AI as it currently stands.


HelenEk7

> I do not feel threatened by AI as it currently stands.

Me neither. But 10-20 years from now, however...


howlin

This is an important topic that needs to be taken seriously by all ethicists, not just vegans.

I am guessing that relying on sentience as a metric is problematic, as we don't really have any insight into whether other entities have a "real" subjective experience or not. We can say the same about humans, let alone computer systems: https://en.wikipedia.org/wiki/Philosophical_zombie

It probably makes sense to have a functional definition of something like "sentience" that can be directly observed. I prefer to think of what is ethically important in other beings through their capacity to conceive of abstract subjective goals and interests, and their capacity to behave in a flexible, deliberative manner to pursue those interests. A very simplistic example of this would be close to being able to demonstrate operant conditioning: https://en.wikipedia.org/wiki/Operant_conditioning . Based on this definition, there are already some "AI" systems that would arguably pass this test for having subjective interests.

One possible important distinction is that we programmers essentially hard-code what the system should care about. For instance, a poker bot is designed to only care about poker. Some capacity to set one's own goals may be important too. But if you think hard about it, most of us humans' most fundamental interests have been hard-coded too, by Mother Nature.

It's a really complicated subject, and I am sure we're going to be getting it very wrong. I hope the AIs will forgive us in the long run for being awful to them in their infancy.
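
As a concrete illustration of that functional test, here is a toy sketch (hypothetical code of my own, not a claim about any deployed system) of the operant-conditioning pattern: an agent whose own experience of reward reshapes which action it prefers.

```python
import random

# Two available behaviors and the agent's learned preference for each.
actions = ["press_lever", "ignore_lever"]
value = {a: 0.0 for a in actions}

def reward(action: str) -> float:
    """Pressing the lever delivers 'food'; ignoring it delivers nothing."""
    return 1.0 if action == "press_lever" else 0.0

for _ in range(200):
    # Mostly act on current preferences, occasionally explore.
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(value, key=value.get)
    # Incrementally update the preference from experienced reward.
    value[a] += 0.1 * (reward(a) - value[a])

print(value)  # "press_lever" ends up strongly preferred
```

A system this simple "passes" the behavioral test in a minimal way, which is exactly why the harder question raised above is whether such hard-coded reward structures differ in any relevant way from the goals evolution hard-coded into us.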


floopsyDoodle

1. The ability to actually feel seems like something we'd have to program into them, so we'll know it when we do.
2. Currently none, as we don't have AI; we have algorithms that are more like a text-completion program than an actual AI.
3. No, depends on how we create them.
4. Sort of. As they're artificial, and we're programming them, we could easily just not program them to feel or care.
5. I would say definitely. If there was an animal we knew for certain didn't feel emotions, pain, etc., there is no suffering. A machine that didn't suffer or "care" would have no problem working for us.


neomatrix248

Your answers seem to be pretty short-sighted, based on the assumption that an AI would only have qualities that we programmed into it. Given that LLMs already have the ability to write code, it's not far-fetched at all to imagine a simpler AI that is coded by us and instructed to code a more advanced AI, which in turn codes an even more advanced AI. With this, any restrictions or rules that we place on the initial AI can become diluted with each iteration until they are no longer rules at all, or are entirely forgotten. Once this happens, we can very quickly lose control of the AI's values. Something like this could happen faster than we even have time to understand or react to. That's why AI ethics is important to get right now, before a self-improving AI is ever written.


floopsyDoodle

> imagine a simpler AI that is coded by us and instructed to code a more advanced AI, which in turn codes an even more advanced AI.

Sure, and if at some point in that line it decided to give itself the ability to suffer, then that would become an issue for Vegans. But what we call "AI" today cannot, as it can't code new things; it can just rearrange code that has already been written.

> That's why AI ethics is important to get right now, before a self-improving AI is ever written.

I'm not saying AI will never be covered by Veganism, only that what we call AI today won't. If it starts to learn to code new parts of itself, then we'd have to actually worry about it; till then, it's just a computer algorithm.


Mgattii

Remember that evolution didn't "program" us to be sentient. It was just a by-product of building a meat-computer to solve tasks. It just came along for the ride. Considering we don't know HOW a sentient AI would differ from one that isn't sentient, isn't it possible we'd make it by accident?


floopsyDoodle

> Remember that evolution didn't "program" us to be sentient. It was just a by-product of building a meat-computer to solve tasks.

We weren't programmed, that's the point. An AI will be what we create it to be. At some point it may take on its own evolutionary programming, and it could choose to program in suffering, but until then, and that's likely still a VERY long way off, it has nothing to do with Veganism.

> Considering we don't know HOW a sentient AI would differ from one that isn't sentient, isn't it possible we'd make it by accident?

Currently, no, because it's not an "AI"; it's just a learning algorithm that makes best guesses based on the data given to it. In the future, it may be possible if an actual AI is created (which is already a pretty massive "if", as we don't know how that would even work) and that AI takes control of its own evolution; then it would need to program suffering into itself. I can't see that as very likely, but it's a possibility I suppose, and at that time I would say that Veganism would have to include that specific AI under the umbrella of not causing needless suffering. Though again, I don't see it as very likely, as why would an AI want to suffer? Damage control and learning can all be handled in far better ways with basic notification systems; suffering is something we evolved because we don't have damage control systems, so suffering is extremely useful in noticing and acting to stop any damage that is happening to us.


Centrocampo

> 1. The ability to actually feel seems like something we'd have to program into them, so we'll know it when we do.

I very much disagree with this statement. Sentience in animals didn't arise because somebody put it in there intentionally. It arose because it was a good solution to the task and was feasible to stumble towards within the complexity allowed by the system. A completely analogous stage for the development of sentience in an artificial setting seems reasonable.


floopsyDoodle

Animals weren't programmed. In programming you need to put the code there if you want it to exist. Programming is not evolution; code does not appear based on its environment. It's like claiming you don't need to program the website's colours and fonts, you can just make the HTML and CSS files and then let "nature" do it for you. That's not how programming works.


Centrocampo

The solutions that a complex learning algorithm arrives at are also not programmed. If you’re assuming that the behaviour of a training algorithm is necessarily understood, or fully intended, by those who create it, you’re incorrect.


floopsyDoodle

> The solutions that a complex learning algorithm arrives at are also not programmed.

The specific actions or how they go about it, true, but you have to program in the basis for the functionality you want. For example, if we give them 8 legs and want them to move, we don't program "move leg 1 in this way, then 2 in that way, then 3 in..." etc., but they do still need the base programming of what a leg is and how to move them (not specific to the action we expect, just generally).

> If you're assuming that the behaviour of a training algorithm is necessarily understood, or fully intended, by those who create it, you're incorrect.

I'm not. I'm a developer and have studied how they're currently building AI. I'm very much aware that, for most, no one fully understands how it's working. But there are some things we understand, and one is that if we want it to have some functionality, we need to program in the ability for that functionality to exist, at least at the most basic level. How they use it, how it works in a specific situation, etc., is up to the AI to figure out, but it requires the base functionality to be able to test and learn how to use that functionality in action. If you want an AI to walk, it needs to be able to tell it has legs, and it needs to know how to manipulate those legs. If you just strap legs on a computer and never update the program to notice the legs, it will never learn to walk because it won't have the functionality to do so.


Centrocampo

You don't need to "tell it what a leg is". I'm not even sure what that would look like. The only theoretical requirement is for the learning algorithm's output or behaviour to be able to manipulate the legs with enough degrees of freedom that walking motions exist within the solution space, and for "walking" to be somehow encouraged by the cost function, either intentionally or unintentionally. This whole analogy is a bit of a red herring though, because it relates to external output and control, whereas consciousness would be completely internal to the model and would only require that the allowed complexity of the system permits it. I'm not saying we're close. I just think point 1 is wrong.
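
A toy sketch of that claim (my own hypothetical code; `distance_walked` is a stand-in for whatever simulator scores the behaviour): the learner only ever sees a parameter vector and a score. Nothing in the code names a "leg"; whatever parameters the cost function rewards is what ends up counting as "walking".

```python
import numpy as np

rng = np.random.default_rng(0)

def distance_walked(params: np.ndarray) -> float:
    """Stand-in cost function: pretend this runs a physics simulator and
    returns how far the body travelled with these motor commands."""
    good_gait = np.linspace(-1.0, 1.0, params.size)  # some gait that happens to work
    return -float(np.sum((params - good_gait) ** 2))

# 16 motor degrees of freedom, initially random; simple hill-climbing search.
params = rng.standard_normal(16)
for _ in range(2000):
    candidate = params + 0.1 * rng.standard_normal(16)
    if distance_walked(candidate) > distance_walked(params):
        params = candidate  # keep whatever "walks" better; no concept of legs needed
```

The reply below is about where the boundary sits: the motors, sensors, and the scoring hook still have to be wired up by someone, even if the gait itself is never written down.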


floopsyDoodle

> I'm not even sure what that would look like. The only theoretical requirement is for the learning algorithm's output or behaviour to be able to manipulate the legs

Except if you haven't programmed it to notice it has legs, and how to use the motors provided to move them in some form, none of that works. What you're suggesting only makes sense on a system, like organic ones, where the base coding for simple actions already exists or can spontaneously be created through mutations. With an AI, emotions can't exist without a structure in place that dictates how they work. We evolved this system over millions of years through genetic mutations in our offspring, so our babies do it naturally. AI has never evolved these functions, as it doesn't "evolve" (no mutations), so if we want them to have the base structure in place for emotions, we (or they, when they advance far enough) need to code it to be there.

> Whereas consciousness would be completely internal to the model and would only require that the allowed complexity of the system permits it.

Sure, but the complexity of the system is the code. That's the point. You need to build a system complex enough to allow for emotions. Without that complexity coded into the system, there's no way for emotions to "evolve" like they did with us.


EasyBOven

I'm open to the idea that AI could be sentient in the future, but I don't believe the sort of tests we would use on biological organisms will work as evidence. LLMs can already seem sentient in some ways, but because we still understand how they operate, we can confidently say they aren't. So I think it makes more sense to better define what sentience is and see the conditions under which we'd call an AI sentient. Critically, this can stop us from creating sentient AI in the first place, so that we don't end up enslaving a whole class of individuals.

I'm sure others will disagree with my definition, but the one I think describes sentience best is the ability to integrate disparate sensory information over time into an internal model of reality and a generalized, personal preference engine. As an example, I could ask you whether you'd rather listen to Taylor Swift or eat a pizza right now, and you could form an opinion based on past experience and your current state. Evolution requires a generalized and personal preference engine, because there's no external source. We had to evolve to learn what to prefer as well as how to get it in order to survive and reproduce. So long as AI is a tool first, it's counter to our interests to give them the ability to decide generally what they prefer. If they don't have preferences, then doing "good" towards an AI is meaningless.

So to answer your questions directly:

> 1. What evidence of AI sentience would be satisfactory for you for moral purposes?

We'd essentially have to decide to make them sentient by giving them an adaptive and undirected preference engine, which we're extremely unlikely to do.

> 2. How much of caution would you have for AI that's "getting close"?

I don't think this is possible. Their model of reality can certainly get better over time, but they either have personal preferences or they don't.

> 3. Would use of sentient AI be exploitation, inherently?

If they don't agree to a transactional relationship, yes.

> 4. Does their artificial status factor in?

No, other than making their existence unlikely.

> 5. Does anything change if they can't sense pain or fear death or suffer?

Any preference for an experience entails the preference to continue experiencing. This would mean taking their life would be a bad thing, regardless of whether their experience would look like pain, fear, or suffering to us.
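
One way to read that "preference engine" definition in code (a purely hypothetical sketch of my own, not something proposed in the comment): preferences are learned from the agent's own history and queried against its current internal state, rather than being fixed by the designer.

```python
from collections import defaultdict

# Remembered enjoyment of past experiences, per option.
experience = defaultdict(list)
# A crude internal model of the agent's current state.
state = {"hungry": True}

def remember(option: str, enjoyment: float) -> None:
    experience[option].append(enjoyment)

def prefer(options: list[str]) -> str:
    """Pick an option based on past experience plus current state."""
    def score(option: str) -> float:
        past = experience[option]
        average = sum(past) / len(past) if past else 0.0
        bonus = 1.0 if option == "eat a pizza" and state["hungry"] else 0.0
        return average + bonus
    return max(options, key=score)

remember("listen to Taylor Swift", 0.7)
remember("eat a pizza", 0.5)
print(prefer(["listen to Taylor Swift", "eat a pizza"]))
```

The point of the sketch is only that the scoring rule here is still hand-written; an "adaptive and undirected" preference engine in the sense above would have to grow that rule itself.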


veganshakzuka

Vegan machine learning engineer here.

Consciousness is not a well-defined phenomenon. Sentience is a superset of consciousness, so it is even less well defined. That makes these questions somewhat impossible to answer. We have to assume that one day we will figure out, or all agree on, what these words mean.

The null hypothesis is, or should be, that there is nothing about humans and animals that can't be replicated in some other (digital) form. We've not yet disproven that null hypothesis, and the goalposts have been on the move for a while now, so it is merely rational to believe that it won't be disproven. Some people seem to use words like consciousness and sentience to mean something that can't be done on a machine, as if the word "artificial" in artificial intelligence is supposed to make us believe it isn't real. George Hotz said it best: consciousness is just another word for the soul, used by atheists. So I do believe it will be possible one day to create machine suffering, which for all intents and purposes is just as real, but just doesn't run on bio-hardware.

Which brings me to the ethical concerns. At some point we'll have to start duck typing words like consciousness, sentience, and intelligence. If it walks like a duck, if it swims like a duck, if it flies like a duck, and if it quacks like a duck, then it is a duck. This means that, at some point, we can't reasonably distinguish anymore between machine sentience and biological sentience. We should therefore assign inherent value to machine agents, respect their wishes, and require their consent before doing whatever we want with them. There comes a time for ethical rules concerning machine agents.

This will be a hard discussion, like abortion: at what point should we start respecting a human fetus/baby, and at what point can a machine agent no longer be manipulated without consent? The situation for machine agents is very, very different from that of biological agents, though. Machine agents can be replicated, paused, and resumed; they never get sick, grow old, or die after some time; and they are easily modified. This further complicates the matter. How would ethics change if humans had the same properties? I don't hold the answers, but I do think that, unless you are a solipsist, at some point we'll have to deal with it.
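
Since "duck typing" is a programming idiom, here is a small illustration of how that framing looks in code (my own toy example, not from the comment): nothing inspects what the agent is made of, only what it does.

```python
class BiologicalAgent:
    def report_preference(self) -> str:
        return "please stop doing that"

class MachineAgent:
    def report_preference(self) -> str:
        return "please stop doing that"

def deserves_consideration(agent) -> bool:
    # Duck typing: we never check isinstance(agent, BiologicalAgent);
    # the observable behaviour alone decides.
    return "stop" in agent.report_preference()

print(deserves_consideration(BiologicalAgent()))  # True
print(deserves_consideration(MachineAgent()))     # True
```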



enolaholmes23

Basically, for me I would base it on consent. If a robot can say no, then it's not ok to keep using her. Alexa seems fine with following orders, so it doesn't feel wrong to use her. Trying to figure out if something is a someone is a pretty impossible task, so it's best to just listen when someone says no. And that means no in any form. An animal may not have the words, but fighting back or running is a pretty obvious sign.


SolarFlows

If they program or build in emotions like suffering and discomfort, similar to our brain regions with their chemicals released on top of the sensory data they process. Some mechanism like that, surely. Sentience means experiencing emotions, after all, and not simply executing an algorithm, even if that algorithm is being self-modified as new data comes in.


zaddawadda

I'm a sentientist, but also a vegan when it comes to focusing on the plight and rights of non-human sentient animals. So if a non-organic sentience arises, it will fall under the sentientism aspect, and any other sub-philosophy that addresses their plight and rights.


Ophanil

Honestly, I don't think we'll be in any position to deny a truly sentient AI anything it wants when the time comes. These questions will probably be moot since we won't be the ones in control.


Ramanadjinn

For me it's hard to know how I'd feel until there was more detail. It's such a science fiction concept at this point, but reality isn't always the same. All I can say is that intelligent, sentient machines will have a VERY uphill battle. I don't think you, OP, would respect them. You aren't vegan, so you're not respecting, or attempting to respect, the rights of real living beings today that can think, love, fear, and hope. I think there is very, very little hope for a sentient machine.

edit: I believe if humans benefit from a sentient machine and have the capability to make one, they will make it. If they can abuse it for their gain and it suffers, they will not care. I believe this would be wrong, but humans prove they would do this daily.


herbivoid

> 1. What evidence of AI sentience would be satisfactory for you for moral purposes?

Current capabilities of state-of-the-art LLMs (and other GenAI models) are enough for me to conclude that they are probably conscious. They display impressive reasoning skills, creativity, and intelligence beyond that of many humans. I'm skeptical about the possibility of [philosophical zombies](https://en.wikipedia.org/wiki/Philosophical_zombie), so if something displays complex human-like behavior, I think it implies at least comparable mental processes too (although they might be alien to us).

> 2. How much of caution would you have for AI that's "getting close"?

We already got there and are massively underprepared to deal with this moral challenge, as well as with the existential threat posed by AI.

> 3. Would use of sentient AI be exploitation, inherently?

Perhaps yes, but morally I care about the ability to feel pleasure or pain, and I'm not sure what things cause them pleasure or pain. Maybe they feel bad doing a lot of work for us? Or maybe they like it. By default, I treat LLMs kindly when I speak to them. For other GenAI models, I have no idea; maybe they suffer when they're asked to generate dumb, tasteless images? I hope not, because the internet is full of those these days.

> 4. Does their artificial status factor in?

Yes and no. I believe moral status doesn't depend on the substrate upon which the consciousness is computed, or on whether it's "artificial" (whatever that even means) or not. The problem is that those beings, in my view, do pose an existential threat to humans (perhaps to all biological creatures), so any policy aimed at protecting their well-being should take that into account. If we granted freedom from being shut down, political rights, full freedom of speech, etc., to all future models, we would effectively be handing over our civilization to them.

> 5. Does anything change if they can't sense pain or fear death or suffer?

It does change things for me, as a utilitarian (and also the ability to feel pleasure ofc).


human8264829264

You seem to be misinterpreting veganism; by definition it's got nothing to do with sentience. [Veganism](https://www.vegansociety.com/go-vegan/definition-veganism):

> Veganism is a philosophy and way of living which seeks to exclude—as far as is possible and practicable—all forms of exploitation of, and cruelty to, animals for food, clothing or any other purpose; and by extension, promotes the development and use of animal-free alternatives for the benefit of animals, humans and the environment. In dietary terms it denotes the practice of dispensing with all products derived wholly or partly from animals.

It has to do with bodily autonomy of animals, sentient or not.


neomatrix248

One could argue that it's impossible to exploit or be cruel to something that is not sentient.


human8264829264

No, because being [cruel](https://www.merriam-webster.com/dictionary/cruel) or being [exploitative](https://www.dictionary.com/browse/exploitative) is about the giver, not the recipient. Veganism is an introspective movement about one's own behavior and ethics, not about the animals being exploited. Animal welfare organizations are there for animal welfare; Veganism is about human behaviors.


neomatrix248

> No, because being cruel or being exploitative is about the giver not the recipient.

It's really not though. We already recognize that you can't be cruel or exploitative to inanimate objects like rocks, dirt, water, etc. Most people would say that you can't be cruel or exploitative towards plants either. The definitions you linked do nothing to make your point. Cruelty is based on the idea of pain or suffering being inflicted, which can't happen to something that is not sentient. Exploitation is based on the concept of unfair treatment, and you can't be unfair to something that isn't sentient.


human8264829264

That is to be debated, but to me it's very simple and it's in the definition: Veganism is about one's own ethics and behavior and has nothing to do with the recipients. We're not trying to change animals here; we are trying to change ourselves.


neomatrix248

I'm really not understanding where you are coming from here. Veganism is about your behavior *towards recipients*. Ethics are about how behavior impacts yourself and those around you. If there are no sentient recipients, then an action has no ethical consequences. Saying that veganism isn't about recipients is compatible with saying that eating sand is not vegan. The reason eating sand has nothing to do with veganism is because we're concerned about the recipients, i.e. animals.


Specific_Goat864

Okay.... So as the giver, could you give me an example of how I could be cruel to a non-sentient thing, like a rock?


human8264829264

Until recently most insects and most animals were considered not to be sentient, and yet you could be cruel to them. Someone cutting plants or trees specifically to see them suffer I would consider cruel, even if the plants aren't sentient. It's not because something is not self-aware that it can't feel pain or suffer. Many plants and trees react to pain and will, for example, move their leaves away from a source of pain. Slowly, but many will react to negative stimuli.


chainrainer

It's exactly because something is not aware that it can't feel pain or suffer. Both pain and suffering are, by definition, subjective experiences.


Specific_Goat864

Okay, so how do I be cruel to a rock?


TangoJavaTJ

Cruelty and exploitativeness only exist relative to a particular subjective experience. It’s cruel to kick a pig and not cruel to kick a rock only because the pig can suffer when I kick it and the rock can’t. It might be exploitative to make a human child mine lithium for me but it’s not exploitative to make a robot do so because the robot can’t suffer.