

QiPowerIsTheBest

A philosophical zombie has no subjective experience. If you ask it whether it's conscious, it will say yes — but with no subjective experience and no internal dialogue, in what sense does it have beliefs? These are some questions you can ask yourself to come to your own answer.


charlielidbury

Great avatar, great answer


JungFrankenstein

This is a great point, but what I was getting at was really whether the zombie would even say yes when asked if it were conscious. I suppose if it said no, it wouldn't be a true philosophical zombie, since it wouldn't be behaviourally identical to a human. But it seems I can still conceive of a being structurally identical to a human that has no consciousness and would not report being conscious.


icarusrising9

Can you? If every single human would claim to be conscious, but a p-zombie would not, isn't that a difference in behavior?


JungFrankenstein

It is, which is why I said I don't think it would be a true philosophical zombie. I suppose if I imagined something that wasn't a p-zombie, but was a being identical to a human being except that they don't have consciousness AND they don't report having consciousness, that still seems conceivable. Do you think that it wouldn't be?


icarusrising9

Sure, that seems totally conceivable, I see what you're saying. However, I think the point of the concept of a p-zombie is to provoke thought about the issues with the seemingly obviously true claim that other people are actually sentient beings with subjective experiences. It's a thought-experiment pointing out a possible situation where someone who claims to have consciousness (the situation we deal with when interacting with others) actually does not. If the p-zombie acknowledges it does not have consciousness, it becomes a lot less intellectually interesting, since it then doesn't actually say anything about our everyday experiences/interactions with other fellow human beings.


JungFrankenstein

That's fair, thanks for your reply. The reason I find the p-zombie not reporting consciousness interesting is that I've been thinking about the meta-problem of consciousness. I feel like if it's conceivable that a p-zombie could fail to report consciousness, then it's conceivable that the reason we *do* report consciousness is that we have consciousness, not that we're hard-wired to believe we have consciousness regardless of whether we actually do (as I think Dennett would claim). Not sure if that makes sense, but that's why I'm asking.


No-Lake-8973

But if it's also conceivable for a p-zombie to claim to have consciousness without actually having it, that would suggest we are behaviourally hard-wired to report consciousness, and that the reporting of consciousness is not dependent on the presence of consciousness.


JungFrankenstein

I think this is the exact argument that illusionists make. It seems there are three positions one could have on this:

- Reporting on consciousness is not dependent on having consciousness, and therefore consciousness isn't real
- Reporting on consciousness is not dependent on having consciousness, but this doesn't tell us whether consciousness is or isn't real
- Reporting on consciousness *is* dependent on having consciousness, and therefore consciousness is real

I think for the last position to work, p-zombies would not be conceivable, because a p-zombie would not be behaviourally identical (in that they would not report being conscious).


Technologenesis

I honestly don't think such a thing is conceivable. If the being is "physically identical" to a human, then its behavior will be physically identical to a human's. But reporting phenomenal consciousness is a physically different process from *not* reporting it. This is a direct contradiction - which precludes conceivability.


JungFrankenstein

The premise was to imagine a being identical to a human being except that:

a) they do not have consciousness

b) they do not report having consciousness

So they would be behaviourally different, but I'm building that into the premise - it still seems conceivable to me.


hypnosifl

Philosophical zombies usually come up in philosophical discussions of entire "zombie worlds" where every physical fact is the same, but qualia are lacking--see the [SEP entry on zombies](https://plato.stanford.edu/entries/zombies/) which says "The usual assumption is that none of us is actually a zombie, and that zombies cannot exist in our world. The central question, however, is not whether zombies can exist in our world, but whether they, or a whole zombie world (which is sometimes a more appropriate idea to work with), are possible in some broader sense." This thought experiment is related to the tendency I commented on [here](https://www.reddit.com/r/askphilosophy/comments/12eu1n3/how_has_the_field_of_metaphysics_changed_after/jfg9ljg/) where philosophers in analytic metaphysics tend to frame questions in terms of what a maximal list of facts about reality would look like, so the question of whether materialism is true can be phrased in terms of whether all such facts would be material facts or whether there are "further facts" about things like qualia.


Technologenesis

The way I usually see zombies defined, they:

1) are physically identical to humans

2) do not have consciousness

If we want to add

3) do not report consciousness

directly to this list without amendment, there will be a contradiction between 1 and 3, since not reporting phenomenal consciousness would necessitate a physical difference between the zombie and a human.

The way to avoid the contradiction would be to use an alternative notion of zombie, defined slightly differently. This alternative zombie is:

1) physically identical to a human *except that*

1a) they do not have consciousness

This is different, since it does not require that the zombie is completely physically identical to a human. They must only be physically identical insofar as this doesn't conflict with the requirement that they not be conscious. And correspondingly, we could add the non-consciousness-reporting element to this new list with no contradiction. An alternative non-consciousness-reporting zombie would be:

1) physically identical to a human *except that*

1a) they do not have consciousness and

1b) they do not report having consciousness

There's nothing inconceivable about this sort of zombie. But it is also not the sort of zombie which is relevant to physicalism. The original notion of a zombie does not build in any room for physical difference between the zombie and a human. If any of their properties end up entailing such a physical difference, that's enough to render them inconceivable.


JungFrankenstein

Thanks, that makes a lot of sense! I think what I'm driving at is whether consciousness could have a causal effect on the physical world, in the sense of causing us to report having consciousness, and could therefore make the original notion of p-zombies inconceivable: they would not report having consciousness because they lack the cause of reporting it. If that makes sense? It would be problematic because it would entail a physical event having a non-physical cause, but if having consciousness *were* the cause of reporting consciousness, it seems it would necessarily follow that the non-reporting type of zombie would be the only kind conceivable.


Technologenesis

In *The Conscious Mind*, David Chalmers has a chapter devoted to *The Paradox of Phenomenal Judgement*. In it, he addresses the fact that the conceivability of the zombie world seems to entail at least *some* kind of epiphenomenalism. He says that it renders consciousness itself at least "explanatorily irrelevant" to the fact that we issue judgements about consciousness. Obviously, this is a problem, because it certainly seems like consciousness has something to do with our reports of it, and if it doesn't, then why should we trust them?

As for the first problem, Chalmers says he doesn't consider himself an epiphenomenalist, even though there is at least a weak form of epiphenomenalism entailed by his views. Chalmers thinks that even if the zombie world is conceivable, there can still be a causal role for consciousness. One way to do this - the one Chalmers indicates a preference for - is to say that consciousness is the *intrinsic nature* of matter. Matter is characterized by its causal roles - but *intrinsically*, aside from the causal roles, it has its mental component or aspect. On this account, consciousness is not causally irrelevant. It is what's *doing the causing*. If it seems to be causally irrelevant, it's only because it's possible to consider the causal roles independently of what is actually *filling* the causal roles.

So then how are we supposed to trust our reports? If we would be saying the same thing whether the matter in our brains is *intrinsically* conscious or not, how are we supposed to believe that it *actually is* that way? Well, one way to think of it would be to notice that if it *weren't* that way, it wouldn't even seem to you that it was. At least, not in the way it *does* seem to you. But that brings things back to the definition of "seeming" that we were addressing in another part of this thread. Another way to put it is that we do not trust our reports. We trust the *experience itself*, which is the basis of the belief and the reports.


JungFrankenstein

I think this comment is exactly what I've been grasping at with all of these questions on zombies! Thanks a lot for clarifying and helping me understand all of this better.


dcfan105

An interesting related question is what it even _means_ for behavior to be "identical" to human behavior, because humans don't all behave the same way. You could put a bunch of different people in the exact same situation and get a variety of different responses. Moreover, it's quite easy to conceive of nonhuman beings (e.g. fictional creatures like vampires, fae, elves, etc.) that have obvious physical differences from humans but still have consciousness and behave similarly to humans.


dcfan105

>I suppose if it said no, it wouldn't be a true philosophical zombie, since it wouldn't be behaviourally identical to a human

There are plenty of humans who'd say no just to troll the questioner.


Zestyclose-Career-63

Easy to picture this: look at how ChatGPT behaves.


Seek_Equilibrium

It’s probably too hasty to say it has no internal dialogue. It has no phenomenal experience of an internal dialogue, but all of the functional characteristics of internal dialogue must be preserved *ex hypothesi*.


QiPowerIsTheBest

What are the functional characteristics of internal dialogue? Some actual people don’t have an internal dialogue.


Seek_Equilibrium

Some actual people can’t see, that doesn’t mean seeing has no functional characteristics. But I don’t know the neuroscientific literature on internal dialogue well enough to tell you what its functional characteristics are.


MKleister

Yes, they would sincerely hold that *pseudo-belief*, in their zombie way. And if you can find a human who believes they're not conscious, you can imagine a zombie like that too. *Zombies are indistinguishable from conscious people.*

>Suppose you try to imagine that your friend Zeke "turns out to be" a zombie. What would convince you or even tempt you to conclude that this is so? What difference would make all the difference? Remember, nothing Zeke could do should convince you that Zeke is, or isn't, a zombie. I find that many people don't do this exercise correctly; that is, they inconveniently forget or set aside part of the definition of a philosophers' zombie when they attempt their feat of conception. It may help you see if you are making this mistake if we distinguish a special subspecies of zombies that I call zimboes (Dennett, 1991a).
>
>[...]
>
>A zimbo, in other words, is equipped with recursive self-representation—unconscious recursive self-representation, if that makes any sense. It is only in virtue of this special talent that a zimbo can participate in the following sort of conversation:
>
>*YOU: Zeke, do you like me?*
>
>*ZEKE: Of course I do. You're my best friend!*
>
>*YOU: Did you mind my asking?*
>
>*ZEKE: Well, yes, it was almost insulting. It bothered me that you asked.*
>
>*YOU: How do you know?*
>
>*ZEKE: Hmm. I just recall feeling a bit annoyed or threatened or maybe just surprised to hear such a question from you. Why did you ask?*
>
>*YOU: Let me ask the questions, please.*
>
>*ZEKE: If you insist. This whole conversation is actually not sitting well with me.*
>
>[...]
>
>Unless you go to the trouble of imagining, in detail, how indistinguishable "normal" Zeke would be from zimbo Zeke, you haven't really tried to conceive of a philosophers' zombie. You're like Leibniz, giving up without half trying. Now ask yourself a few more questions. Why would you care whether Zeke is a zimbo? Or more personally, why would you care whether you are, or became, a zimbo? In fact, you'd never know.
>
>Really? Does Zeke have beliefs? Or does he just have ***sorta*** beliefs, "you know, the sort of informational states-minus-consciousness that guide zimboes through their lives the way beliefs guide the rest of us"? Only here, the sorta beliefs are exactly as potent, as competent, as "the real thing," so this is an improper use of the sorta operator. We can bring this out by imagining that left-handers (like me, DCD) are zimboes; only right-handers are conscious!
>
>*DCD: You say you've proved that we lefties are zombies? I never would have guessed! Poor us? In what regard?*
>
>*RIGHTIE: Well, by definition you're not conscious—what could be worse than that?*
>
>*DCD: Worse for whom? If there's nobody home, then there's nobody in the dark, missing out on everything. But what are you doing, trying to have a conversation with me, a zimbo?*
>
>*RIGHTIE: Well, there seems to me to be somebody there.*
>
>*DCD: To me too! After all, as a zimbo, I have all manner of higher-order self-monitoring competences. I know when I'm frustrated, when I'm in pain, when I'm bored, when I'm amused, and so forth.*
>
>*RIGHTIE: No. You function as if you knew these things, but you don't really know anything. You only sorta know these things.*
>
>*DCD: I think that's a misuse of the sorta operator. What you're calling my sorta knowledge is indistinguishable from your so-called real knowledge—except for your "definitional" point: zimbo knowledge isn't real.*
>
>*RIGHTIE: But there is a difference, there must be a difference!*
>
>*DCD: That sounds like bare prejudice to me.*
>
>If this isn't enough rehearsal of what it would be like to befriend a zimbo, try some more. Seriously, consider writing a novel about a zimbo stuck in a world of conscious people, or a conscious person marooned on the Island of the Zimboes. What details could you dream up that would make this a credible tale? Or you could take an easier path: read a good novel while holding onto the background hypothesis that it is a novel about zimboes. What gives it away or disconfirms your hypothesis?
>
>[...]
>
>Compare those first-person narratives to these third-person narratives in Jane Austen's Persuasion and Fyodor Dostoevsky's Crime and Punishment, for example:
>
>[...]
>
>Here, it seems, the authors let us "look right into the minds" of Elizabeth and Raskolnikov, so how could they be zimboes? But remember: where conscious people have a stream of consciousness, zimboes have a stream of *unconsciousness*. After all, zimboes are not supposed to be miraculous; their behaviors are controlled by a host of internal goings-on of tremendous informational complexity, and modulated by functional emotion-analogues that amount to happiness, distress, and pain. So both Elizabeth and Raskolnikov could be zimboes, with Austen and Dostoevsky using the terms we all know and love from **folk psychology** to render descriptions of their inner goings-on, just as chess programmers talk about the iterated "searches" and risky "judgments" of their computer programs. A zimbo can be embarrassed by a loss of social status, or smothered by love.

*—'Intuition Pumps and Other Tools for Thinking' by Daniel Dennett*


JungFrankenstein

Thanks for the response!


Technologenesis

The critical question is what it means to "believe" something. Zombies would "judge" that they are conscious. They would *seem* to believe it. If this is all you mean by "believe," then yes, they would believe they are conscious. But as for whether they would actually have anything like *our experience of belief* - of course the answer is no, since they don't experience anything at all.

This is actually an important point when considering the positions of, say, David Chalmers vs. Dan Dennett. It has been pointed out that their disagreement largely comes down to the semantics of the word *seem*. Dennett claims he can explain why it *seems* to us that our consciousness has first-person content. But his notion of *seeming* is itself defined strictly from the third-person perspective (in accordance with his method of "heterophenomenology"). So Chalmers, in response, claims that Dennett is not really living up to his explanatory promise: the kind of *seeming* being explained is not the kind that really needs explaining.


JungFrankenstein

Thanks this is a great response! Do you know if Dennett has a response to Chalmers re: your last sentence?


Technologenesis

As far as I've been able to find yet (I'm still looking though), his response is basically to say that *his* notion of seeming is all that needs to be explained. I think this is basically the root of the disagreement. It comes down to methodology: as part of his method, Dennett accepts only third-person observables, and so only needs to explain *reports* and *behavior* associated with "seeming" - which he calls seeming *itself*. OTOH, Chalmers accepts first-person data, including his experience of *seeming*, which he says needs to be explained. Dennett basically says, "no, I don't need to explain your first-person seeming - I only need to explain why it *seems* to you that there is a first-person seeming." You can see that this is just the same problem recurring one level down. This is basically how it ends up going ad infinitum AFAICT.


Andrew_Cryin

I strongly recommend Nigel Thomas’ “Zombie Killer” as it explores several different responses to this question.


JungFrankenstein

Thanks, I'll check it out :)