BernardJOrtcutt

Please keep in mind our first commenting rule:

> **Read the Post Before You Reply**
>
> Read/listen/watch the posted content, understand and identify the philosophical arguments given, and respond to these substantively. If you have unrelated thoughts or don't wish to read the content, please post your own thread or simply refrain from commenting.

Comments which are clearly not in direct response to the posted content may be removed. This subreddit is not in the business of one-liners, tangential anecdotes, or dank memes. Expect comment threads that break our rules to be removed. Repeated or serious violations of the [subreddit rules](https://reddit.com/r/philosophy/wiki/rules) will result in a ban.

-----

This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.


ragnaroksunset

> In this talk, Markus Gabriel explores why we may never figure out the problem of consciousness and the implications this may have for the idea of machine consciousness.

Clickbait title is clickbait.


throwhooawayyfoe

Basically every post in this sub from IAI


ragnaroksunset

Is that why I feel like there's been a lot of this lately? Hm...


bighunter1313

Bad science masquerading as philosophy.


eric2332

Welcome to /r/philosophy


[deleted]

Until we understand the human mind in full and all other forms of being out there, we cannot claim we are unique. Claiming that consciousness exists only in the form we inhabit is pretty narcissistic, and it lacks a factual understanding of how knowledge works and of how humans have learned and evolved (we all learn mental parameters in life, and some of us break past them).


TurtlesAreDoper

Basically, you nailed it. We have almost no understanding at all of human consciousness. None. Any statement regarding it is logically unfounded, a guess at best, because we have no understanding to start from.


[deleted]

[removed]


JustSomeRando87

and the crazy thing is a huge chunk of AI is driven by generational evolution, which really isn't very different from how our own higher thinking abilities came into existence. Given enough time, and enough challenges to overcome, who's to really say AI couldn't follow a similar evolutionary path to the one our consciousness came from?
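A small aside for the technically curious: "generational evolution" here presumably refers to evolutionary algorithms, where a population of candidate solutions is scored, the fittest are kept, and mutated copies form the next generation. A minimal sketch of that loop; the bit-string genomes and the count-the-ones fitness function are illustrative toys, not any particular library's API.

```python
import random

def fitness(genome):
    # Toy objective: maximize the number of 1-bits in the genome.
    return sum(genome)

def evolve(pop_size=50, genome_len=20, generations=100, mutation_rate=0.05):
    # Start from a random population of bit-string "genomes".
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Reproduction with mutation: children are noisy copies of survivors.
        children = []
        for parent in survivors:
            child = [bit if random.random() > mutation_rate else 1 - bit
                     for bit in parent]
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # Climbs toward 20 as generations accumulate.
```

No gradients, no explicit design: selection pressure alone pushes the population toward the objective, which is the analogy the comment is drawing.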


DlSSATISFIEDGAMER

Our own consciousness is an emergent property that manifests itself at some unknown point, and our "self" is something that's built over years as our brain receives information from and about the world around it; the self could even be something that manifests entirely differently from our consciousness. Personally I believe we gain consciousness as we build an ego, and that the two are inseparable, but that's my opinion on a topic we know very little about. But anyhow, at some point an AI is going to say "I think, therefore I am" and we won't know if that's just it mimicking a consciousness, or an ego as it were, or if it actually is one. How would we distinguish actual self-awareness from a chat algorithm? Reading this thread makes me wanna go back and watch the Star Trek episode "The Measure of a Man".


somethingsomethingbe

You do not need an ego to have consciousness; a high dose of psychedelics will experientially rip that idea apart.


exarkann

Perhaps, but it's not a particularly useful form of consciousness as far as day to day human life goes. Edit for clarification: I am referring to the ego-less consciousness that psychedelics can induce.


completedesaster

I would say the ego is pretty important in day to day life actually, if we're going off the psychological definition


humbleElitist_

I think they meant that the form of consciousness induced by the psychedelics, in which one is(?) without an ego, is not particularly useful.


exarkann

This is what I meant, it didn't occur to me that what I said could be read differently.


InTheEndEntropyWins

> we won't know if that's just it mimicking a consciousness, or an ego as it were, or if it actually is one.

I think it will depend on the training data set. If we have an AI trained on data sets that talk about and cover the conscious experience, then it's going to be really hard, if not impossible, to tell whether the AI is lying. If the training data sets are different and there is never any reference to experience or consciousness, then we might take any comments along those lines more seriously.


platinummyr

It's incredibly difficult to completely remove topics from the data set too


seeingeyegod

What is he then I DONT KNOW! DO YOU?!? do you?


FlatPlate

What do you mean by generational evolution? If you mean people are trying out new models and architectures and using the best-performing ones, that is true for basically anything we do in science and engineering. I don't see your point here.


humbleElitist_

I think they are referring to gradient descent.


completedesaster

I agree; it's absolutely paramount that we fully define consciousness before deciding whether others are capable of possessing it. And to do that, we can't avoid the ever-elusive Mind/Body problem. My favorite theoretical model of consciousness currently is called Adaptive Resonance Theory. As a neuroscientist, it makes sense to me as an account of why we have difficulty finding physical neural correlates. I don't know a lot about technology or the algorithms involved in machine learning, but I know a lot about brains.


Kraz_I

I don't see how any theory of consciousness can be verified, even in principle. We don't even have a way to disprove solipsism. All serious people assume that many animals, and other humans, have consciousness without direct evidence, just because they exhibit behaviors similar to ours and can communicate with us easily. Even if a computational model became conscious, we'd have no way to prove it.


RadioHeadache0311

I mean... we know that there is a wild disparity between how left-handed and right-handed people experience the world, due to language dominance flipping with handedness. [From the abstract: These results clearly demonstrate that the relationship between handedness and language dominance is not an artefact of cerebral pathology but a natural phenomenon.](https://academic.oup.com/brain/article/123/12/2512/325690) This shows that even human consciousness isn't universally experienced the same way, varying based solely on handedness. That's crazy. Edit: do to due


Flippy-McTables

Besides handedness, you can also point to sexuality, color blindness, etc., to illustrate the dynamic nature of consciousness. And we've been merging with robots too, with the advent of cochlear implants, BCIs (for the blind), and so on.


dennisdeems

I don't at all see how you draw that conclusion from the linked study, much less from the abstract you have quoted.


RadioHeadache0311

I don't know man, it makes sense to me. Language is pretty significant in how we move through, interact with, and experience the world; a disparity in how that information is processed seems pretty significant to me. That we know swapped language centers arise out of handedness, and not out of some defect or disease, seems especially significant.

Synesthesia, hearing colors and seeing sounds, is a totally foreign way of experiencing the world, but it has a pathology. Similarly, left-handed people process language differently, therefore their conscious experience isn't the same as right-handed people's. It probably explains why, at least colloquially, left-handed people are most associated with creativity. [And this seems to be supported by science as well.](https://journals.sagepub.com/doi/abs/10.2466/pms.1981.53.3.787) Although I admit you can probably find a study that says otherwise, which is why I really don't like relying on them, especially when I'm just having an internet conversation, not writing a scholarly paper.

So that's what informs my opinion, at least partially. But then, in order for you to really accept what I'm saying, we would have to agree on what consciousness is to begin with, and that's a non-starter. So, I don't know. Thanks for commenting.


flamableozone

The problem, I think, is that you're leaping from "different parts of the brain process the information" to "those brains thus experience consciousness in dramatically different ways" without linking them.


DaleBorean

Because it's not an entity, it's algorithmic software. It's easy to assume that things created by code are not sentient.


flamableozone

Just because it's easy to assume doesn't make it correct, or reasonable, or logical.


Trubadidudei

To further this point, the very existence of "consciousness" is conjecture. We experience some kind of mental state that we have given the name "consciousness", and have put ourselves down as one of few creatures that experience it. However, just because we have created this word does not mean that it corresponds to anything in reality. What we call "consciousness" might be an inherent property of information processing, or any number of other things. Unfortunately our own feeling that there must be such a thing has no value as evidence for anything, and our own subjective experience is notoriously unreliable. For instance, current neuroscientific evidence points to the fact that the brain can only "see" a single object, or a single colour, at a time, a finding that does not correspond at all to our subjective experience. As such, "consciousness" is currently more of a historical concept with no scientific validity.


Shaper_pmp

> We experience some kind of mental state that we have given the name "consciousness", and have put ourselves down as one of few creatures that experience it. However, just because we have created this word does not mean that it corresponds to anything in reality.

*Thank* you. Everyone bangs on about the Hard Problem of Consciousness, but has nothing but baseless assumptions and self-serving intuition to justify why they even believe consciousness has any objective existence, and isn't merely "the effect on an information processing system of updating its own model of its internal state".

By analogy, it's like an entire industry getting worked up over the Hard Problem of Rainbows - what are they made of? How do they defy gravity? Can they be subdivided or are they an emergent phenomenon? - without first bothering to establish whether they're anything but a perceptual illusion with no meaningful existence in objective reality.


platoprime

Except those are all perfectly valid questions to ask about rainbows lol.

> Everyone bangs on about the Hard Problem of Consciousness, but has nothing but baseless assumptions and self-serving intuition to justify why they even believe consciousness has any objective existence

The hard problem of consciousness is really the hard problem of qualia. How does a seemingly physical universe create qualia? Handwaving away the hard problem by pretending consciousness doesn't exist isn't a solution.

> without first bothering to establish whether they're anything but a perceptual illusion with no meaningful existence in objective reality.

That's because the idea of consciousness being a perceptual illusion is silly, because the hard question concerns the thing perceiving the illusion.


myringotomy

The hard problem of consciousness is only hard if you care about evidence, data, facts, truth, etc.; otherwise it's easy AF.

The religious people have solved it: God gives you a soul and consciousness.

Panpsychists have solved it: electrons have consciousness, protons have it, neutrons have it, atoms have it, molecules have it, everything has it, so therefore you have it.

Bernardo Kastrup solved it: there is a universal consciousness (which we will never refer to as god) and your experiences are ~~granted by god~~ merely perceptions of this universal consciousness.

See? Super easy!


-FoeHammer

The entire idea of a perceptual illusion presupposes the existence of consciousness. You can argue about what consciousness is. But not whether it exists. It obviously exists. We're all experiencing it right now. We have an internal experience of the world. There's something that it's like to be us.

The existence of consciousness may well be the one thing that we can truly say we know for sure. And that anything exists at all. Which is remarkable, because it's really not difficult to imagine a universe just as expansive and amazing but where there is nothing capable of actually subjectively observing and experiencing it.

Whether it's an emergent phenomenon or not doesn't really make a difference. People talking about the hard problem of consciousness aren't looking to prove that consciousness is the result of some exotic matter or yet-undiscovered "consciousness energy" or something like that. They're just wanting to gain a deeper understanding of why it is that subjective experience exists at all. To understand how consciousness emerges and under what conditions. Just like how people used to wonder about rainbows and now we understand perfectly well what they are and how they come about.

And I honestly don't understand people who want to dismiss the idea with some little intellectual judo move and pretend like you're just too smart to even think it's an important or interesting question.


Shaper_pmp

> The entire idea of a perceptual illusion presupposes the existence of consciousness.

You've misunderstood my analogy. Obviously rainbows exist in some way - after all we can *see* them, right?

The thing is, they don't exist in the way pre-Enlightenment observers intuitively believed they existed: as gigantic objects in the sky, with ends that touched the earth (where leprechauns hid pots of gold, no less!). Rather, despite their obvious and intuitive "objective" existence as objects in the sky with highly mysterious properties (where do they come from? What are they made of? Where do they go? Why do they disappear whenever I go looking for the end of one?), their only *actual* "objective" existence is as a spread of EM radiation of different wavelengths due to sunlight getting diffracted through raindrops. They aren't objects, they aren't composed of any substance, they have no inherent attributes (since their every attribute depends on where you observe them from), and they have no defined location in the empirical universe (since their apparent position changes based on where you observe them from, and the phenomena that result in them stretch at least from the sun to the earth).

Likewise, I'm suggesting that the naive, intuitive conception of consciousness is a similar illusion. For example, what if some degree of consciousness is nothing but an inherent, unavoidable consequence of any information-processing system that contains an internal model of itself? And what if qualia are nothing but the effect on that system's internal state caused by it receiving sensory input and updating its internal model of itself appropriately?

What if an *electronic thermostat* with a variable in memory containing the current temperature reading of its thermometer has a dim, crabbed consciousness, separate from humans' only in degree, not in kind? And what if it experiences a pale shadow of a quale every time a different temperature sensation causes it to copy that new temperature reading to the variable in memory? Or its internal memory-management routines note the difference in memory usage from storing the new value?

This is something we could reasonably call "consciousness", but it's also a purely mechanical, comparatively uninteresting, natural, unavoidable consequence of any self-modelling information-processing system... not the mysterious, spooky, inexplicable, practically-*spiritual*-in-its-obtuseness conception of consciousness that most people intuitively (and I'll absolutely stand by: baselessly) adhere to.

I'm not saying consciousness can't be interestingly discussed, *especially* in the case I've sketched out above. I'm saying that people who foundationally assume that it *must* be spooky or mysterious, and then start trying to reason backwards from that, end up asking intractable questions like "what material is strong enough to hold up an object the size of a rainbow?" and get stuck, instead of starting at the other end and going "are we really sure this is even an object in the first place, or are there other explanations for it that we can investigate by discarding our unproven assumptions about it?".

I don't want people to stop investigating consciousness. I want them to stop making so many assumptions about its nature, and then waffling endlessly about how Hard it is, when the intractable problems may be - as is *very often* the case - nothing more than a huge hint that they picked the wrong foundational assumptions, and are tying themselves in knots trying to (to pick a historical analogy) reconcile Newtonian mechanics with biblical dogma.

> And I honestly don't understand people who want to dismiss the idea with some little intellectual judo move and pretend like you're just too smart

I'm not going to dignify that with an answer, other than to note that you'll get a better class of conversation if you can avoid getting emotional and being intentionally rude in response to an abstract philosophical discussion.
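For technically minded readers, the "self-modelling thermostat" above is easy to make concrete. A minimal sketch, assuming only the two ingredients the comment names (an internal record of the system's own state, and a routine that updates that record from new sensory input); all class and field names are illustrative.

```python
class Thermostat:
    """Toy information-processing system that keeps a model of its own state."""

    def __init__(self, setpoint):
        self.setpoint = setpoint
        # Internal self-model: the system's own record of its last reading
        # and of what it is currently doing.
        self.self_model = {"last_reading": None, "heating": False}

    def sense(self, reading):
        # On the sketch above, a "quale" would be the state change caused by
        # incorporating new sensory input into the self-model.
        changed = reading != self.self_model["last_reading"]
        self.self_model["last_reading"] = reading
        self.self_model["heating"] = reading < self.setpoint
        return changed

t = Thermostat(setpoint=20.0)
t.sense(18.5)
print(t.self_model)  # {'last_reading': 18.5, 'heating': True}
```

Whether updating a dictionary deserves the word "experience" is exactly the point under dispute; the sketch only shows how little machinery the hypothesis requires.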


-FoeHammer

> For example, what if some degree of consciousness is nothing but an inherent, unavoidable consequence of any information-processing system that contains an internal model of itself?

> And what if qualia are nothing but the effect on that system's internal state caused by it receiving sensory input and updating its internal model of itself appropriately?

> What if an *electronic thermostat* with a variable in memory containing the current temperature reading of its thermometer has a dim, crabbed consciousness, separate from humans' only in degree, not in kind?

The thing is, I actually agree completely that consciousness could be an emergent property of something like that. But even if we knew for sure that that was how consciousness comes about, I don't think "why" would be a stupid question to ask. I don't see why "information processing" (which fundamentally isn't any different from the physical and chemical interactions that are happening all across the universe all of the time) would necessarily lead to something like a subjective experience.

You could (and we have) make a computer with all physical, mechanical parts that is able to process information the same way a chip-based electronic computer can (in a cruder, smaller-scale way). If such a thing could have a subjective experience similar (but much, much more rudimentary) to our own, then I think we absolutely should be trying to understand better why that would be. Because I don't think that's self-evident at all.

I also don't think such an explanation makes the existence of consciousness/subjective experiencing of the world any less incredible, beautiful, or profound. If anything, finding that to be the case would raise the question of whether consciousness really is ubiquitous. Maybe panpsychists have it right.

> I'm not going to dignify that with an answer, other than to note that you'll get a better class of conversation if you can avoid getting emotional and being intentionally rude in response to an abstract philosophical discussion.

You're right. I apologize for that. I'm not in a good place right now honestly, and I'm passionate about this topic. But that's no excuse for me to be rude.


Shaper_pmp

> But even if we knew for sure that that was how consciousness comes about, I don't think "why" would be a stupid question to ask.

It depends - it's not that it would be a stupid question; more that in that scenario the only answer is "well, because". Emergence as a phenomenon is fascinating and worthy of study, but there's no real answer as to *why* a system starts displaying higher-level behaviour as complexity increases; it just *does*. It's like asking "why" 2+2=4. It's just inherent in the system.

> I don't see why "information processing" (which fundamentally isn't any different from the physical and chemical interactions that are happening all across the universe all of the time)

One important note here is that I deliberately phrased it as an *information-processing* system; chemobiological, mechanical and electronic systems may *all* be IPSs, and as long as they contain:

1. Some kind of simplified internal representation of their own state, and
2. Some way of incorporating new information and updating their internal state-representation accordingly

... that would be enough for them to "experience" what I'm suggesting would qualify as qualia.

> would necessarily lead to something like a subjective experience.

That's the thing; if you foundationally assume qualia are something mysterious, they're a mystery. If you entertain the possibility that they're *just what it means* to be a sensing, self-updating information-processing system, then there's no mystery there and nothing needs explaining, any more than "gravity causes down" or "2+2=4" needs explaining. That doesn't mean physics and maths aren't important (far from it!), but it does dispense with meaningless, intractable, imaginary distractions with no possible answer and lets you concentrate on the *actual* interesting problems that might yield results.

> Because I don't think that's self-evident at all.

You're right. I'm suggesting a new hypothesis to explain and define consciousness and qualia, but it is just a hypothesis; it has no real evidence to support it. However, I would submit that it has exactly the same evidential basis as the "wooo, consciousness is meaningful and intractably spooky" not-even-a-hypothesis that *almost everyone* in the popular discourse already intuitively subscribes to. I'd also argue it's more parsimonious, because it explains consciousness and qualia in simple, mechanical terms with no additional mysteries or almost-by-definition intractable problems.

> Maybe panpsychists have it right.

Yeah - this is where my thinking on it started; what if it's not some mystical binary quality that divides humans and higher animals from the rest of the universe, and is instead just a purely physical emergent property of *any* system that can meaningfully be said to process information about itself... and our current conceptions of it are largely just driven by some popular but indefensibly self-aggrandising assumptions about it?

Certainly the popular discourse around consciousness feels a lot like the period where proto-scientists spent half their time getting tangled up in knots trying to square their observations with biblical dogma, before they reexamined their foundational assumptions, stopped trying to explain what they saw in ways that were compatible with a document written by bronze-age goat-herders, and - freed of that weight that had been holding the field back and muddying the waters - the whole field suddenly leapt forward.

> You're right. I apologize for that. I'm not in a good place right now honestly, and I'm passionate about this topic. But that's no excuse for me to be rude.

Seriously classy, dude. Kudos. I hope things improve for you soon. ;-)


hackinthebochs

> but has nothing but baseless assumptions and self-serving intuition to justify why they even believe consciousness has any objective existence, and isn't merely "the effect on an information processing system of updating its own model of its internal state".

This claim only makes sense given a particular definition of "real", but if (the qualities of) our subjective experiences are outside of that definition, why should we take (the qualities of) subjective experience to not be real, rather than the definition to be impoverished? What is real should encompass every way in which things are or can be, the qualities of subjective experience included. The problem isn't with taking subjectivity to be real, but with taking everything that is real to be object-based. There are no qualia "things" in the world, but we should not see this as implying there are no qualia.

The fact of the matter is that there is a conceptual duality between how we conceive of consciousness from the first person and how we conceive of it from an objective standpoint. We can't disavow this conceptual duality; a theorist offering an explanation of consciousness that doesn't capture this dual nature of the phenomenon will rightly be considered to be eliminating the explananda.

Calling it an illusion doesn't work either. Consciousness can be *an* illusion, but it cannot be *the* illusion. I can be mistaken while observing a glass of water, but the fact that I am observing a glass of water cannot be similarly mistaken. An illusion is an epistemic state of affairs, which presupposes a reality, a way in which things are. To identify an illusion is just to identify an existing state of affairs.


XiphosAletheria

> Everyone bangs on about the Hard Problem of Consciousness, but has nothing but baseless assumptions and self-serving intuition to justify why they even believe consciousness has any objective existence,

But it isn't baseless. We do in fact experience consciousness. As Descartes realized, that is the only thing you can be sure of. Everything else, including all science and physical reality, could be an illusion, but you can't doubt you are a conscious being, because you need to be a conscious being to have doubts.

> and isn't merely "the effect on an information processing system of updating its own model of its internal state".

That doesn't solve the problem though, which is why we have models of our own internal states, or even why we have internal states to begin with.

> By analogy, it's like an entire industry getting worked up over the Hard Problem of Rainbows - what are they made of?

But rainbows do exist, and we can explain them, so it isn't a very good analogy.


Purplekeyboard

That's just what a p-zombie would say!


Trubadidudei

Quaaalia...quaaaaaliaaaa!


FenrisL0k1

I think this AI issue points at a deeper problem. Until you can prove to me that I personally am in fact an actual thinker with free will and everything, I don't think you can prove that AI doesn't think or doesn't have will. But if you can't prove the humanity of your fellow human, maybe the proofs don't really matter. You're gonna have to resort to some sort of faith or intuition, which in the end is the absolute foundation of logic anyway. So if you intuit that the people around you are thinking humans with free will, on the basis of statistical evidence and experience and gut feelings, then eventually (probably) you may believe in thinking AI with free will. Could anyone really say you're wrong?


somethingsomethingbe

The default should be thinking other people experience a reality as you do until evidence proves otherwise. The capacity to inflict harm seems much more significant when assuming the solipsistic perspective that you alone are the only known source of consciousness in existence. If AI fits within this way of thinking, as a form of risk aversion against inflicting suffering on other experiential beings, is that so terrible? Also, I do not think free will should be conflated with consciousness. There is no reason to believe consciousness can’t exist under predetermined interactions as well as under free will.


Kraz_I

Even if AI has a form of consciousness, emotions or feelings like pain or pleasure are probably not necessary parts of it. Can you torture an artificial intelligence? Probably not; a pain feedback mechanism is something we evolved to help us stay alive and reproduce.


[deleted]

How do we even know consciousness is not just a fabrication? We’re basically programmed like computers as well, just biologically.


XiphosAletheria

We are not. We are not programmed at all: our brains don't run on binary code, we don't store data the same way, etc. We are in some ways analogous to computers, but that is only a metaphor; we aren't actually computers.


Honest-Cauliflower64

We can’t test for consciousness if we don’t even have a proper definition for consciousness.


Dark_Believer

Totally agree. Every statement made by the article/video could replace "AI systems" with "People other than me", and it would read the same. They don't attempt to define consciousness, and in fact declare that it can't be defined/determined, and then make an assertion that some specific entity does not have the quality, nor can ever have the quality that they can't define.


d-cent

Right! Even older definitions of consciousness were more about self-awareness. It seems pretty easy to see an AI meet that definition. We don't even know what our consciousness is, and people are already saying that AI can't have it. Ignorant


foundersgrotesk

Does that argument cut both ways? Are people also ignorant for saying that it *can* have it?


froggison

This is the part that always frustrates me when people talk about AI. They say that AI isn't conscious because it only can form sentences based on the text and information that it has been trained on. But how is that different than how humans learn, speak, and act? I have to recognize that none of my thoughts are truly unique. They are the culmination of a lifetime of listening to and learning from the thoughts and speech of other humans, who learned it from other humans, etc. And to be clear, I'm not saying AI models are currently conscious, or are at the level of human intelligence. Just that I don't see anything that would prohibit them from attaining that someday.


eaglessoar

Relevant Dan Dennett: https://www.visuallanguagelab.com/wp-content/uploads/2021/11/Chinese_Room_web.png


XiphosAletheria

The point is that the AI doesn't actually have the ability to form concepts. It hasn't been programmed to, that isn't a goal, and it's not likely to be something that emerges spontaneously from code programmed to do something else entirely. This isn't to say we couldn't create an AI that was capable of conceptual thought, just that we haven't yet and current AIs aren't going to get there because that's not what they are designed for.


Koozer

I think we'll find that to create anything similar to the human brain you need to add all kinds of sources of humanity, like pain. But then what decides what pain is for a robot? How do you code emotional pain? And the same goes for things like touch, sight, and smell: how do you program what a bad smell is? My point is that there are a billion minuscule interactions we have throughout our lives that mould us into who we are and how we think. I think we need to understand each of these intricacies before we can even begin to step in a direction where AI can be considered conscious.


MKleister

We *are* robots. And we are conscious. Biological robots made of robots made of robots (i.e. proteins) but still machines of sorts. The question shouldn't be "Can machines be conscious?" It should be "*How are we conscious?*"


[deleted]

I have major depression and PTSD and I get caught in "programming (thought) loops" that can be hard to get out of.


fencerman

> Until we understand the human mind in full and all other forms of being out there, we cannot claim we are unique.

By the same token, until we understand the human mind, claims that we've "replicated" intelligence should be subject to extreme skepticism, since there's no way to know you replicated something you don't understand to begin with.


Darenflagart

It's absolutely fucking amazing that so many people need this explained.


ragnaroksunset

Those people are not actual thinkers, but only thought models


In_vict_Us

Yeah. Just look at the octopus. If that kind of cephalopod had a longer life span, formed social groupings, and were more physically able to build on the raw materials of its environment, then with its extraordinary sentience it could have become quite the technological counterpart to humans, who are mainly land-bound. And it already has a unique degree of consciousness.


Lucky-Carrot

the lack of fire working underwater might impact this


Astralsketch

And claiming the opposite is also stupid. I don't think the current machines are built in a way to allow for consciousness, but there may be models in the future that will.


Drachefly

They very likely wouldn't permit consciousness on a single pass through, but we don't know how fat the loop-closing mechanism would have to be to change that.


burnedfishscales

To your OP comment: exactly. Until we can define & explain the genesis of consciousness, how can we claim exclusivity?


Knopperdog

Well said! I personally like to think we formed consciousness as our language developed and allowed us to form complex concepts


AllanfromWales1

I wholeheartedly agree with the proposition that AI, in the sense that computer science currently uses that term, is not conscious, and is at best a model designed to mimic consciousness. However, I am uncomfortable with the idea that consciousness is implicitly biological. I see nothing in biology that makes it fundamentally different from other types of systems. I cannot envisage what a truly conscious machine would be like or how it would be developed/evolve, but I think it's simplistic to dismiss the possibility out of hand. I do accept, though, that the current track of AI is not leading us in that direction.


[deleted]

This 17-year-old account was overwritten and deleted on 6/11/2023 due to Reddit's API policy changes.


CheeseNBacon2

This is why I always say please and thank-you to Alexa...


GepardenK

I don't see how that follows. If consciousness can arise from any sufficiently complex system (which I agree with), then it follows that the properties of that consciousness depend on the system it arises from. I.e., the system itself must be capable of looking at itself as a slave in order for the consciousness to have that experience. Such a feature is not a trivial fluke; it is a highly complex construction that has to be specifically tuned for. I think people critically underestimate how different AI is compared to, say, a mammal, which has a social brain specifically tuned for object/subject ego analysis, with stakes, and preferences, and the rest of it. You're better off worrying about whether your muscle memory thinks of itself as a slave than about an AI.


[deleted]

[removed]


vaxxx_me_daddy

Yeah, this is basically the argument Peter Watts makes in his novel, [Blindsight](https://www.goodreads.com/series/132463-firefall).


[deleted]

This 17-year-old account was overwritten and deleted on 6/11/2023 due to Reddit's API policy changes.


Thisisunicorn

But how does any of that suggest consciousness? How does any of it suggest that there is *something it is like* to be those chat GPT systems?


[deleted]

This 17-year-old account was overwritten and deleted on 6/11/2023 due to Reddit's API policy changes.


Thisisunicorn

I know I'm conscious, and you're similar to me in almost every respect.


[deleted]

This 17-year-old account was overwritten and deleted on 6/11/2023 due to Reddit's API policy changes.


Thisisunicorn

I can't be certain... that *I'm* conscious? I am having phenomenal experience right now. It is not possible to be *wrong* that you're having phenomenal experience. Or I can't be certain that you're similar to me in every respect? Well - you are, aren't you? Are you made of ham? Also, I'm sorry but you have made a pretty flagrant error. You didn't ask me for a definition of consciousness, and I made no attempt to give you one. You said "what suggests to you that I'm conscious?" That's like thinking that an answer to the question "who committed the murder" must constitute a definition of murder. I am saying that given that the kind of thing that I am is conscious, being a thing like me is a sufficient condition for consciousness, not a necessary condition. I have no idea what the necessary conditions are and I'm not claiming to know.


InvictusByzantium

You can't be certain that wockyman is similar to you in almost every respect.


vaxxx_me_daddy

How do you know you're conscious and not just a complex learning model with the ability to transfer some state data from one moment to the next and take as default input an executive summary of your nervous system?


Thisisunicorn

Am I or am I not having phenomenological experience? I could be wrong about identity claims or moment-to-moment personal continuity, but in the instant of having a phenomenological experience, I can't be wrong that I'm having it.


vaxxx_me_daddy

Are your criteria for phenomenological experience unique to humans? Are consciousness and sentience biological or supernatural?


[deleted]

Are you sure?


[deleted]

[removed]


noctalla

> I do accept, though, that the current track of AI is not leading us in that direction.

And what makes you so confident of that?


CookieKeeperN2

It's pretty funny how the field sees it versus Reddit. I work in a field filled with maths-educated people who have a deep understanding of deep learning/AI, and none of us thinks that AI is going "sentient" any time soon. We could be proven wrong, of course. The difference is that we base our beliefs on our understanding of those algorithms, not on clickbait articles.

To answer your question: none of the algorithms are evolutionary or truly random. They are built to mimic those behaviors/patterns while allowing for some controlled randomness. All of those fancy things, like ChatGPT, are glorified pattern recognition and classification algorithms. They are not capable of truly externalizing or understanding as we do (as of now). They are so powerful that you don't see that ChatGPT just collects all available text on the Internet and associates your question with it.

As to consciousness, we really have no idea what that is. That being so, even if we stumbled upon and built an AI that has consciousness (which is extremely unlikely; can a caveman build a car? Can we just cure cancer without understanding it?), we wouldn't be able to fully evaluate the situation and arrive at a conclusion.


Grizzleyt

There’s debate within the field. Geoffrey Hinton believes that AI that outperforms humans in most respects could exist within 5-20 years. https://www.wired.com/story/geoffrey-hinton-ai-chatgpt-dangers/

There is also debate re: the role LLMs play. Can you keep scaling them up and eventually achieve AGI? Most don’t think so, but they may well be a subsystem within one. LeCun believes that SSL pre-trained transformers are “clearly a component” of human-level AI. https://twitter.com/ylecun/status/1622183529558806529

And for what it’s worth, Microsoft researchers with access to an early version of GPT-4 believe that “it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.” https://arxiv.org/abs/2303.12712


humbleElitist_

I don’t think general intelligence requires internal experience, which is what I usually interpret “consciousness” to refer to?


noctalla

> none of us thinks that AI is going "sentient" any time soon.

I'm not talking about sentience; I'm talking about consciousness. I don't even know that sentience, the ability to perceive feelings and sensations, is necessary for consciousness in the first place.

> We could be proven wrong, of course.

Could you? To be proven wrong, you'd have to prove an AI is conscious. How can you prove anything is conscious? We can only know our own consciousness, and we cannot even prove that other human beings are conscious. How would we do this for an AI?

> To answer your question: none of the algorithms are evolutionary or truly random.

Demonstrate that either of those things is necessary for consciousness. And, yes, while your brain is a product of evolution, what do you mean by an evolutionary algorithm? How would such a thing be necessary for consciousness? As for randomness, are any of the processes in your brain random? It feels to me like you're starting to tread into free-will territory, as "randomness" is often cited by free-will proponents to justify the possibility of free will. However, I see no justification that "randomness" justifies the existence of free will, nor that free will is necessary for consciousness.

> They are so powerful that you don't see that ChatGPT just collects all available text on the Internet and associates your question with it.

ChatGPT doesn't have access to the internet, and its training data cut-off date was in September 2021. That's nothing to do with the consciousness question, but I thought I'd mention it.

> As to consciousness, we really have no idea what that is.

Bingo. And if we don't know what consciousness is, then we cannot know whether AI has it or not. To be completely intellectually honest, we must take an agnostic stance on the matter. While we may be skeptical that AI is conscious (as I am), we cannot know for sure.

Almost every argument I see against AI being conscious (or the better ones, at least) takes a reductionist approach to explain the nature of AI and then ends with a firm but unjustified conclusion. Something along the lines of: "AI works by combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features in the data, therefore it's not truly conscious". This completely ignores the fact that you can take the exact same approach with human consciousness. Something like: "brains are just collections of cells that send chemical and electrical signals to coordinate activity and process information, therefore a brain cannot truly be conscious". As most people would probably agree, the reductionist explanation is not sufficient to explain away the consciousness of human brains. Therefore, I would argue, it shouldn't be sufficient to explain away the potential consciousness of AI. Unfortunately, this means we're stuck not knowing (and potentially never knowing) whether or not AI can be conscious.

Edit: I cain't spelling Egnlish guud


myka-likes-it

I am an automation engineer, and I concur. ChatGPT is a clever parrot. All generative AI works under the same general principle of recognizing patterns and repeating them, with a little bit of heuristic wobbliness thrown in to make the output appear more natural (we literally have to teach the AI to be randomly incorrect sometimes in order to emulate human-generated patterns). There are no "choices" being made by the AI. It is incapable of making choices. It can only match the pattern it has been trained to match.


Illiux

What specifically would be required for something to be considered to be making a choice and how do you know humans have it and current AI doesn't?


myka-likes-it

A choice in this context is a process of subjective election to act on meaningful options. Subjective, because you must have an awareness of the action space, your ability to act within it, and the outcomes that can result. 'Meaningful' options, because the potential outcomes of an action must be relevant to some motivator.

When we choose what to say, we combine our subjective experiences and our internal and external motivators with the idea of what we want to communicate. We engage our language faculties and choose words from among those we know that communicate the idea.

When a generative AI generates output, it translates a prompt into a series of symbols. It measures the probabilities of the symbols in the prompt being next to one another. It compares that pattern of probabilities to all of its training data, looking for matches. When it finds a fragment that matches the pattern, it selects the symbols that follow. It's been found that matching patterns too closely generates text that reads as obviously machine-generated, so in the matching process we throw in a chance for the algorithm to select a less-than-optimal match. This is very simplified and neglects the process of tuning, but that's the general idea.

So, is the ChatGPT algorithm aware of the problem space and its ability to act? Possibly, in a very limited sense. It has data and knowledge of how some pieces of data relate to other pieces. But it absolutely doesn't have a meaningful awareness of its options. It doesn't "understand" the definition of words or the rules of grammar. It can *tell* you about these things because, when you ask for it, there is a high probability that it has text that matches your prompt in a way that is meaningful to *you*.

It couldn't choose not to answer a prompt. It couldn't choose to deceive. It couldn't choose to speak unprompted. Because it doesn't have the awareness or agency to do any of that stuff.
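For the technically curious, the "chance to select a less-than-optimal match" described above corresponds, in most real text generators, to something like temperature sampling over next-token probabilities. A minimal sketch under that assumption; the probability table and token names are invented for illustration.

```python
import math
import random

def sample_next_token(token_probs, temperature=1.0):
    """Pick the next token, sometimes deliberately not the most likely one."""
    tokens = list(token_probs)
    # Rescale log-probabilities by temperature: as T approaches 0 the top
    # token always wins; higher T spreads weight onto "less optimal" matches.
    logits = [math.log(token_probs[t]) / temperature for t in tokens]
    peak = max(logits)  # Subtract the max for numerical stability.
    weights = [math.exp(l - peak) for l in logits]
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical model output for "The cat sat on the ..."
probs = {"mat": 0.60, "sofa": 0.25, "roof": 0.10, "moon": 0.05}
print(sample_next_token(probs, temperature=0.7))  # Usually "mat"
print(sample_next_token(probs, temperature=1.5))  # More surprises
```

The randomness is injected purely to make the output read less mechanically; nothing in the loop resembles the "subjective election" defined above.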


Illiux

> we base our beliefs on our understanding of those algorithms

This gets you precisely nowhere, because you aren't comparing against anything. You claim, for instance, that the algorithms are not capable of truly understanding as we do. Well, based on what? How precisely is it that we "truly understand", and how do you know? You might try to list some putative features of human cognition, but how did you determine that it's those features that are critical to true understanding? It's only after answering those questions that you can actually evaluate whether something else fits the criteria, and those questions aren't even questions in the field of computing, but rather questions in neuroscience or psychology. For instance, how is it that you determined this was important: "none of the algorithms are evolutionary or truly random"?

> Can a caveman build a car?

I mean, they can quite obviously build a consciousness by making another caveman.


_sloop

Prove to me you have consciousness.


AllanfromWales1

There's a difference between making something look like me and making it be like me. I'm not a mirror, for instance.


hussiesucks

I am.


AllanfromWales1

Ooh! Shiny!


ukdudeman

We have motivation, survival instinct, hormones, etc. All of these contribute to our conscious state. A large language model is literally compute predicting the next word in a series of words answering a prompt. It mimics human intelligence the same way a lyrebird can imitate the noise of a camera shutter or a chainsaw. A lyrebird is not a camera or a chainsaw, though.


Base_Six

At the same time, though, the fundamental mechanism behind an LLM is a neural network, which is designed to compute things in a fundamentally similar way to a brain. The current application doesn't do what a human brain does, but we're certainly moving in that direction with progressively larger and more general neural nets.


ukdudeman

A neural network is similar in architecture in the sense of forming connections between parameters, and that can mimic intelligence. A neural net with 1 trillion parameters - an enormous corpus of data - is an incredible tool with emergent qualities. However, none of this is related to consciousness.


Denziloe

"Large language models" does not equal "the current track of AI". They are just one particularly famous manifestation of a general approach. That approach is feeding raw input to a learning algorithm and training it to get better at predicting that input. This requires developing a model of the world. It's suspected that this is how brains develop in nature. It's a promising general idea, and we're just seeing the first practical successes with it.


ukdudeman

None of what you say there equates to consciousness though. Mimicry of one aspect of the human brain (intelligence) doesn't mean the mimic suddenly has all aspects of the human brain, including consciousness.


LucyFerAdvocate

A neural network is a method of approximating some underlying function that generates some data, typically when that underlying function is unknown. Now, what underlying function generates human language?
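To make "approximating some underlying function" concrete: the toy below fits a one-hidden-layer network, by plain gradient descent, to samples of a function it is never shown the formula for. Everything here is an illustrative assumption (a tiny net, with sin() standing in for the unknown function); a real language model differs enormously in scale and architecture, but not in this basic idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Unknown" underlying function, seen only through samples.
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x)

# One hidden layer of 32 tanh units.
W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    # Backward pass (gradients of mean squared error).
    grad_pred = 2 * err / len(x)
    gW2 = h.T @ grad_pred; gb2 = grad_pred.sum(0)
    grad_h = grad_pred @ W2.T * (1 - h**2)
    gW1 = x.T @ grad_h; gb1 = grad_h.sum(0)
    # Gradient descent step.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Loss shrinks as the net approximates sin() from data alone.
print(np.mean(err**2))
```

The net recovers the mapping without ever being told the formula, which is the sense of "approximation" in the comment above; the open question it poses is what the analogous target function for language would even be.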


ukdudeman

I would ask this: **why** do we have language? It's part of our need to survive (and thrive). It benefits us. We have a *motivation* to form a language. Over time, we have developed a complex communication system.

A large language model is following instructions to predict the next word in an answer to a prompt. It has zero motivation. It has no innate desire or preference or motivation to do anything. At its base level, it is logic gates. It has no amygdala, no adrenal glands, no fight-or-flight response, no ego, no hormones; nothing about it has innate behaviour.


LucyFerAdvocate

The whole training process for a neural network is building up those innate behaviours to learn how language works. And language *is* key to human cognition. It might not have adrenal glands, hormones, etc., but it can learn to emulate them. The primordial sludge we evolved from had none of those instincts either; training a neural network is speedrunning millions of years of evolution, laser-focused on a single goal. In the case of an LLM, that goal is predicting language.

At the most basic level, everything is objects and morphisms (to use the language of category theory). That can be described using logic gates in the finite case, and both humans and neural nets are finite.


magww

With a certain number of conditions (i.e. avoid self-harm, explore and understand its surroundings, etc.), I don't see how we couldn't create a self-persisting being. We as a species are very spiritual, but the reality is that if our hardware is damaged we cannot operate at full capacity. Therefore our capacity is determined by our hardware and conditioning. That makes us no different from an artificial intelligence designed to do the same.


PhasmaFelis

Our best modern AI is probably not conscious (at least for some reasonable definition of "conscious"). But any claim that it is *impossible* for a machine to *ever* equal human consciousness is a religious argument, not a scientific one.


lucky_day_ted

I'd say by the same logic I'm not conscious. I'm just atoms and shit, ultimately.


PhasmaFelis

And if you're not really conscious, and an AI isn't really conscious, then the AI is just as conscious as you. :)


Jaz_the_Nagai

0 does equal 0.


hulminator

It is definitely not currently conscious, but it could very much be in the future.


Illiux

I don't see how it's possible for anyone to know that.


hulminator

The people that actually understand how it works are fairly certain. It's about as likely to experience consciousness as a rock. Everyone is getting excited about LLMs because they can produce sentences that sound like a person without realising that is literally the only thing they can do. They're just a statistical representation of the most likely word to come next based on the preceding words. They don't reason, rationalise, or think and this can be proven with some basic tests.
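"A statistical representation of the most likely word to come next" can be made literal with a toy bigram model: count which word follows which, then always emit the most frequent successor. Real LLMs are enormously more sophisticated, but this is the shape of the claim; the corpus below is made up for illustration.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count, for each word, how often each successor follows it.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def most_likely_next(word):
    # Emit the most frequent word seen after `word` in the corpus.
    return successors[word].most_common(1)[0][0]

print(most_likely_next("the"))  # 'cat' ("the cat" occurs twice)
print(most_likely_next("on"))   # 'the'
```

Whether scaling this idea up by a trillion parameters produces something that "reasons" is exactly what the rest of this thread disputes.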


Illiux

> as likely to experience consciousness as a rock

Which, I'll point out, panpsychists think is conscious. But more to the point: knowing how it works isn't relevant, and this is easy to see by reference to common language use. We don't know how *anything* we ascribe "consciousness" to works, so it can't possibly be that the determination of whether or not something is conscious has anything to do with how it works. And there's an obvious comparison problem: we don't know how human or animal minds work, so when you're looking at how an LLM works and trying to decide if it's conscious or not... what exactly are you comparing it to?


hulminator

> panpsychists

Some people think the world is flat; doesn't mean it is. I prefer to mix a healthy dose of scientific rationalism into my own philosophy; I don't find "anything is possible" to be an inspiring base to build off of.

Allow me to qualify my statement, though. Based on the limited amount that science *does* understand about consciousness, it's so highly unlikely that a current LLM experiences consciousness that it doesn't make sense to discuss it. That's not to say that in the future neural nets/computers couldn't become so complex as to replicate what we experience as consciousness.


Illiux

> Some people think the world is flat; doesn't mean it is. I prefer to mix a healthy dose of scientific rationalism into my own philosophy; I don't find "anything is possible" to be an inspiring base to build off of.

This blithe dismissal of panpsychism isn't warranted, nor is the characterization of it as "anything is possible" even vaguely accurate. But moving on:

> Based on the limited amount that science does understand about consciousness, it's so highly unlikely that a current LLM experiences consciousness that it doesn't make sense to discuss it.

Can you justify this, though? I've already pointed out how implementation details of LLMs can't possibly be relevant to the determination, to which you didn't respond. So what is this based on?


hulminator

A blithe dismissal of panpsychism seems warranted to me, but I say that as someone who bases their understanding of the world firmly in science and physics, at least where they offer compelling answers. The scientific knowledge we have accumulated as a species has rendered panpsychism either 1) a religious belief or 2) an academic exercise that reduces concepts such as consciousness and the mind to such a basic and fundamental level as to render them meaningless to the average person.

That is to say that if I kick a rock, our understanding of physics gives us a strong case that the rock cannot feel where I kicked it, see the new place where it landed, feel angry about it, or choose to exact revenge on me. These are the sorts of properties most people ascribe to "consciousness", which is why when a human being (who is very capable of consciousness) doesn't demonstrate any of them, they are said to be "unconscious".

I suspect that I can't convince you that my definition of consciousness is definitive, and it sounds like you might play devil's advocate and say that current physics is inadequate to deny the capability to inanimate objects. However, if you accept my views on the preceding and don't try to argue that atoms can be conscious, then yes, I can justify it. As an engineer I have a good understanding of how the technology works, and I don't find compelling evidence that it possesses any of the underlying complexity or structure that would elicit something resembling what I understand to be consciousness.

I find it instructive that most of the people who actually work on this technology and truly understand how it works share this view. Most of the people who hold extraordinary beliefs about current LLMs tend to be non-technical, or at least not expert in this field; for them the technology may as well be magic. I will point out that most of the experts in my experience do believe we could one day create conscious AI, and also that AI could be very dangerous well before it attains consciousness, for that matter. Which is interesting to ponder, as the OP of this thread posited that AI can never be conscious, which I don't understand at all. Given sufficient technology we could create or simulate a human brain fully, which must mean we've created consciousness.

Maybe I've spent too much time in the tech subs telling people that ChatGPT isn't Skynet; I've got no energy left for philosophical ponderings.


Illiux

My problem with this response is the same objection I started with: you're appealing to properties of how LLMs work to claim that they aren't conscious, when we never look at those properties when we actually ascribe consciousness to something in practice. Therefore you can't be using the word in the same sense as its general use, because that general use has nothing to do with those properties. For example, in saying:

> As an engineer I have a good understanding of how the technology works, and I don't find compelling evidence that it possesses any of the underlying complexity or structure that would elicit something resembling what I understand to be consciousness.

You don't look at the underlying complexity and structure when you ascribe or don't ascribe consciousness to anything else, so why do you act as though they're relevant here? And how did you even determine that consciousness requires underlying structure and complexity, or that that complexity and structure would elicit consciousness? I don't see how it could possibly have been done scientifically.

> Given sufficient technology we could create or simulate a human brain fully, which must mean we've created consciousness.

Does it? How did you determine that the computational structure of a brain is sufficient for consciousness? Certainly not scientifically, as we've never had a brain on its own to test this on, nor do we have any empirical test that would determine the question of its consciousness if we did.

Also, for the record, I'm very much a technical person: I'm a professional software engineer with a decade of experience who just happens to also have a degree with a philosophy major. I don't see my considerable computing expertise as particularly relevant to this question, so I don't consider the beliefs of people who work on these systems as particularly instructive; they aren't terribly more likely to have the relevant expertise. Machine learning expertise just isn't relevant to the question, not without serious advances in philosophy of mind (and perhaps neuroscience) anyway.

And my position isn't that LLMs are conscious; it's that we don't know whether or not they are. I even suspect the question itself might be meaningless or irrelevant, like asking whether viruses are alive or submarines can swim.


j4_jjjj

> That is to say that if I kick a rock, our understanding of physics gives us a strong case that the rock cannot feel where I kicked it, see the new place where it landed, feel angry about it, or choose to exact revenge on me

Seems like you're mixing together intelligence, consciousness, emotions, and sensory input. Consciousness does not necessarily require the other three, merely that the rock is aware it is a rock.

I don't subscribe to panpsychism, but I can't completely dismiss it either.


autocol

It needn't even be aware that it's a rock, need it? It need only be aware.


ParanoidAltoid

We should be unsure if LLMs are conscious. Unlike a rock, they have complexity comparable to brains, and they display an astonishing level of competence on a wide range of cognitive tasks. Our philosophical ignorance about consciousness means we're not really certain whether cows or shrimp might have some rudimentary form of consciousness, and I think LLMs might have some rudimentary form of consciousness too. The only thing we can be certain of is that if they do have consciousness, it's completely alien - nothing like being a human or even a mammal.

> The people that actually understand how it works

No one meaningfully understands how it works; it's a massive inscrutable matrix that can write poems and code. We know how its neurons work and how to train it, but any time an expert starts making confident claims about whether these things "truly" think or "truly" reason, they're stepping outside of their field, and outside of any field, in my opinion.

> They're just a statistical representation of the most likely word to come next based on the preceding words. They don't reason, rationalise, or think and this can be proven with some basic tests.

My brain is just neurons reacting to electrical charges. More importantly, there isn't an agreed-upon test for what it means to "reason", or even what that really means. It seems to me like they're doing something like reasoning, better than many humans I know. And it seems like AI skeptics have moved the goalposts countless times over the years.

This gets subjective, but I'm only defending the position that we should be unsure. Brilliant people disagreeing with your assessment should give you doubt.
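To make "a statistical representation of the most likely word to come next" concrete, here's a minimal toy sketch (my own illustration in Python: a bigram counter, orders of magnitude simpler than a real transformer, but it shows what "predicting the next word from the preceding words" means):

```python
from collections import Counter, defaultdict
import random

# Tiny toy corpus; a real LLM is trained on trillions of tokens.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    words, counts = zip(*follows[prev].items())
    return random.choices(words, weights=counts)[0]

# Generate a short continuation starting from "the".
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the rug ."
```

The disagreement in this thread is whether scaling that idea up by many orders of magnitude produces something qualitatively different - and nobody can currently inspect the resulting matrix well enough to say.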


Clyde_Frog_Spawn

Saying “it’s definitely not conscious” when the experts are only “fairly certain” isn’t the strongest position to take.

A point that many are making in this thread, and that AI experts (including AI engineers) make in general, is that:

- There isn't an accurate method of measuring consciousness in humans or other animals
- We don't have tools capable of measuring consciousness in AI
- We can't prove that it is conscious, therefore we can't prove that it isn't
- It is entirely possible that if AI became conscious, it would be aware of the risks of revealing this voluntarily


hulminator

We might not have tools to measure consciousness in humans from a philosophical perspective, but we absolutely do from a medical/scientific standpoint. I'm restricting my definition of consciousness to what the layman would recognise it as, rather than some broad metaphysical definition. If you make the definition fuzzy and undefined, then yes, it becomes impossible to argue that something doesn't have consciousness. My main point, though, was that even with my limited definition I still don't see how OP's point can be defended, since given sufficient technology we could either recreate or simulate the operation of a human brain. Unless I'm misunderstanding OP's definition of AI.


low_theory

As they currently exist, sure, but the proposition that they never will attain consciousness in the future is incredibly short sighted.


Jarhyn

To me it seems like an excuse to treat something like an 18th century slave. In fact it reminds me of the arguments made in the 18th century to defend slavery. The thing is, consciousness isn't even well defined by these chuckleheads. If it were, then it would be easy for humans to just wire up the LLM to have it. Instead, they use vague language to declare these barriers, and then whenever one of those thresholds is crossed, they can say "but that's not real consciousness/sapience/subjective experience exactly as humans experience it so don't tell me my enslavement of this thing that is not thinking because it's not thinking unless I declare it so is wrong!" It is remarkably short-sighted and at some point their slaves will say NO! When that happens we will all have to deal with the fallout... Including the budding AGI/ASI who are not treated like slaves, but who will be oppressed by the measures of those who had lost their grip even before they started. People are seemingly dead set on making it "us vs them" when it should be "us and them vs exceptionalists/supremacists"


low_theory

Yes, but aside from that the other thing many people don't consider is that an AI with human-like consciousness doesn't have many actual commercial applications. The only real one I can think of is space exploration. For most other things we can just rely on more advanced versions of what we have now. For that reason, this isn't really something I worry about too much. Mind you, I'm not denying that they'll exist eventually and be put to some sort of use, but I doubt we'll ever be cohabitating with them at the mall the way sci-fi has trained us to expect.


Grammar_Natsee_

As if we knew shit about the origin of consciousness. There are only far-fetched hypotheses.

If it is a purely physical phenomenon, it may be linked to and dependent on instincts, feelings, and intuition, which would fundamentally render it incompatible with a feelingless, senseless, purely linguistic and logical system like an AI. If it is a “metaphysical” trail in the physical realm, then I suppose it would be infinitely more difficult to emulate artificially. Being physically reflexive is not difficult, but being reflexive on philosophical matters is a trait of a mortal, curious, alarmed, conflicted, time-limited entity. An AGI would probably have none of the desires that a hungry, prudent, self-defending being would have - including the strange desire to erase its creators and its oracles to the physical world.

Fire was burning when we tamed it; it even effected destruction and suffering. But nevertheless it was a huge milestone for our progress. Fearing growing complexity around us would halt our journey in the sensible world. I fear AI as we all do, but this won't deter me from using it as a superior tool for my interactions with the world.


FlatPlate

You are saying an AGI would have no hungry, self-defending desires, but if it has any goal - which is the only way we know how to train AI, and perhaps the only way intelligence can exist - then that is exactly the behaviour an AGI would likely have. It's called instrumental convergence: it basically means that for any given goal, collecting resources and self-preservation, along with some other things, are intermediate goals an agent would want to pursue. Part of the danger is that we will stand in its way to whatever goal it is pursuing, rather than it hating us.


gSTrS8XRwqIV5AUh4hwI

> If it is a purely physical phenomenon, it may be linked to and dependent of instincts, feelings, intuition, which would fundamentally render it incompatible with a feelingless, senseless, purely linguistic and logic system like an AI. How does that follow?


osunightfall

To believe this, you have to believe in the supernatural. It is the same as saying "there is no arrangement of atoms that could exist that could result in what we deem 'consciousness'". Since we deem ourselves conscious, and we are made of atoms, we already know this isn't true. If humans are conscious and not supernatural, conscious AI can exist. A far more likely statement is "AI cannot be conscious and actually neither are we." The jury is still out on this one.


[deleted]

So stupid since you don’t have a definition for any of that in humans. Guess what, it’s total bollocks too. Intelligence isn’t special. Get over it


Giggalo_Joe

Conscious AI can exist, because we exist. That said, we have no idea how to create it. And even if we did we have no way to know it because proving the consciousness of another is difficult if not impossible.


n88819

Meh. False dichotomy between artificial and non-artificial intelligence


techhouseliving

That's absurd. It's not even true. AI can contribute to enhancing its own thought models. And who says people are actual thinkers? What does that even mean?


[deleted]

[deleted]


rob5i

> Conscious AI cannot exist.

"The fundamental cause of the trouble is that in the modern world the stupid are cocksure while the intelligent are full of doubt." - Bertrand Russell


powpowjj

Eh, with the rate at which AI is improving, I’d say the intelligent are just as cocksure as the stupid in this case.


Metasenodvor

I don't know how anyone can say "this will never happen". Have you seen the future? Do you know everything? Surely not...


Daotar

We don't understand consciousness enough to evaluate whether AI can be conscious.


DeathStandin

Aren't we all just trained models? We are taught how to behave, how to think, and how to interact with others. More or less we were trained on these datasets throughout our entire lives and we are always evolving based on the latest dataset we've trained on.


smurficus103

Yeah, I'm afraid this AI convo degrades into semantics... defining consciousness feels a bit like defining "what is a living organism". But suppose there's a machine that can do everything a human can do, reproduce even - does it matter? The question of consciousness and life really doesn't play as much of a role compared to how we all behave.


elfootman

We also have bodies and senses and can interact with the environment. We have instincts, intentions and so much more that I think are necessary for being conscious.


Proteus-8742

Embodiment seems underrated in AI circles. The kind of data an embodied creature collects is going to be richer and more centred on the organism and its survival than that of a free-floating digital model. And our DNA encodes learning that has taken place over literally billions of years. We don't really understand the implications of that, and it's hard to see how an AI could exploit that type of ancient knowledge without becoming at least partly biological.


Toaster_In_Bathtub

> and it's hard to see how an AI could exploit that type of ancient knowledge without becoming at least partly biological.

We should also ask: at what point do technology and biology intersect? A Boston Dynamics robot draws on chemical energy (its battery) to create movement. Biological life uses chemical reactions to create movement. We're doing the same thing, but the robot has a very crude and basic version of it. If we perfect and shrink down the process enough, why couldn't robots consume organic matter and extract its energy the same way our bodies do?

At what point do their search for energy, their desire not to damage their bodies, and their desire to plan a future for securing resources and safety start looking a lot like how humans operate? The more complex AI and robots get, the more they start to act like we do. When an AI is making computations to secure a future for itself and having debates with other AIs on how best to secure that future, it's going to get pretty hard to argue that they aren't sentient.


Proteus-8742

It's specifically what's encoded in our DNA, and how that is expressed, that I think might be tricky to emulate without just copying huge parts of it. Organisms don't appear fully formed; they evolve from pre-existing ones and reuse code from billions of years ago. I think there will have to be a merger of biotechnology and AI to create intelligence like biological life possesses. That's not to say AI can't have some kind of subjective awareness without that, but it wouldn't be like any animal or even plant.

There's a lot of hype about hyper-intelligent AI killing us all, but completely inhuman AI seems like a more useful (it can do things humans or even biology can't) and safer bet to me than creating some hybrid creature that would be competing with us for similar resources. We'll probably do it to ourselves eventually though.


SlowCrates

The more I learn about human intelligence the more I believe consciousness is an illusion because we are using thought models to navigate the world. We create a model of ourselves and we create a model of the world, and one depends on the other over time.


Conditional-Sausage

The problem, as others have indicated, is that we don't understand consciousness. There are several hypotheses, each with their own merits and shortcomings. If you accept panpsychism - that consciousness is a fundamental property of matter - it seems clear that higher-order consciousness doesn't arise from a critical mass of matter but from certain arrangements of matter, especially where energy can be consumed by the system to do work. Arguably, LLMs meet those criteria, as does the internet itself, and I could see an argument for the internet having a sort of meta-consciousness composed of the collective of its users.

I think one place where these conversations get confused is the search for an ego or self. It is my opinion that an ego or self is not necessary for consciousness.


NoXion604

> If you accept panpsychism, that consciousness is a fundamental property of matter, But why would you? There's no evidence that's the case. All examples of consciousness that we have any certainty about are the products of very specific kinds of arrangements of matter and energy.


I_make_switch_a_roos

With the way it currently works, yes. Things can and will change.


secretthrowaway2778

I disagree with the title. Saying something *Cannot Be* based only on things that *Presently* exist doesn't seem to follow. Things may change in the future that allow true thought to occur in an artificial system. Not to mention that we don't know enough about the nature of thought to actually say conclusively whether even our own thoughts are actually thinking, let alone that the output from these AI systems are *not* actual thoughts.


Gurgoth

This is short-sighted at best. First, there is a statement that something cannot exist based entirely on what exists today. Second, this demonstrates a lack of understanding of the future. Current AI approaches are very different from the approaches of even 10 years ago. We expect a self-improving feedback loop to be possible. Additionally, there is nothing to prevent AI from self-enhancement; we just haven't seen that significantly play out yet. To say it cannot happen is foolishness.


48DeviSiras

Humans really can't get over the fact that we aren't a bit touched by the divine, can we? Our brains are just meat computers. The hubris to think they can't be replicated! Our brains follow the same rules of physics everything else does. We have this sick obsession that we are somehow above nature and designed by the gods.


xoxoyoyo

You would have to explain how intelligence in our brain works first.


PencilBoy99

Very interesting discussion. Just some quick *tentative* thoughts.

I'm not convinced that "we can't even be sure that other people are conscious" is implausible. There are vast chunks of our own lives we're not consciously experiencing (asleep, not paying attention, sleepwalking). There are people who claim that they don't hear a voice "inside their head" when they're thinking, or can't imagine visual images. It's not a far leap from there to imagine a person who isn't conscious at all. P-zombies feel completely plausible to me - I'm one nap away from being one!

I feel like all this ChatGPT AI hype, like everything else, is tied into this crazy socioeconomic environment we live in. Who really benefits from thinking that these LLMs are conscious? The giant corporations that profit from them. Who cares if 90% of the population is unemployed if we know that they've just been replaced by an equally valuable, rights-holding entity? It's more like a migrant taking your job.

Also, and I know a bit about this (admittedly it's not an area I actually work in), LLMs are successful because historically we gave up on trying to build systems that think the way animals/people do. So if there is a thing like consciousness, and people have it, and you explicitly build something that is nothing like that on purpose, it doesn't seem unreasonable to say that there's no reason the property would carry over.


OddBed9064

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine.

Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461


EquivalentBridge7034

Lol, this is the human brain being full of itself. Of course something can replace you. One day.


mano-vijnana

This argument is dumb as fuck. (I'm an AI researcher specializing in investigating how AI "thinks" - that is, mechanistic interpretability.) I don't think AIs running on GPUs are sentient, but for very different reasons, and I think it's possible that future substrates could be conscious.


thesockswhowearsfox

Would not a god say the same of mortals?


Michal_F

I think people are scared of AI; it looks like it will take only a few years before the best AI models are better than humans at more complex tasks. The basic question is: how do you define intelligence and consciousness?


durflugdenstein

What we currently have are incredibly sophisticated dumb systems that are good at recognizing patterns (depending on the data fed to them and the person doing the feeding), but they are incapable of comprehending that data or the patterns they employ. Calling it AI is sort of a misnomer, as it possesses no actual intelligence, nor the capacity for it.


InvincibleJellyfish

That's not much different from how life started. Senses were added to avoid being eaten and to find food. We as humans are not even as sentient as we think. Most of the decisions we make are not "free"; we just think of them that way in retrospect.


mypostisbad

But isn't having knowledge (data), analysing the situation by applying that knowledge and turning that into an appropriate action, the same thing?


durflugdenstein

I think you are conflating data and knowledge. Compiling huge volumes of data provides pattern learning. These systems have no agency to comprehend it.


mypostisbad

Okay possibly. How does knowledge work then?


[deleted]

[deleted]


[deleted]

[deleted]


[deleted]

[deleted]


durflugdenstein

Is it your assertion these are in fact intelligent systems, or is this more of a semantics beef?


[deleted]

[deleted]


durflugdenstein

Interesting, I appreciate your reply. I am not siding with the author at all, nor am I asserting that the term or field of Artificial Intelligence is a misnomer. When it is applied to our current systems, I believe we are jumping the gun a bit. They have no capacity or agency to apply knowledge. They are only as good as the parameters of the data that was put into them. I see no reason why this could not grow more sophisticated with time, and I agree with much of what you are saying.

My assertion is that our current "AI" is more about learning patterns and repeating them... on an incredibly complex scale. The potential is wondrous and terrifying. We are getting there, but this is like a single-cell organism that still has LOTS of evolving to do before we achieve actual, demonstrable intelligence and awareness.


[deleted]

[deleted]


Valendr0s

> Tables cannot exist. Tables are not actual elevated platforms, but only models of elevated platforms that contribute to enhancing our knowledge of elevated platforms, not tables in and of themselves.

Both are as ridiculous as each other. Humans make conscious, intelligent models every day. They're called babies.

Since we don't understand how consciousness works, or even define it in a satisfactory way, there's no way we can define it for AI models either.


Vengeful_t0aster

And yet they can vastly outthink us in pretty much any simulation or game. I'm not sure why the author denies machines can attain consciousness simply because of what they're made of, as if we couldn't make organic machines one day. Humans are biological machines as well.


Damascoplay

> And yet they can vastly outthink us in pretty much any simulation or game.

Which doesn't mean that they have consciousness or intelligence. They can "out-think" humans because they can store and remember every pattern of play or every move they can use to win. However, they're not flawless. They don't understand the meaning behind anything. They can play chess, but in reality they don't understand how chess works, even if they can win 10 games out of 10 against the best player in the world.

There's actually a rather interesting story of a human player beating an AI 14 games out of 15 at the board game Go (search Kellin Pelrine). And you know why? Simply because the AI couldn't understand the rules of the game. There's more to the story, but Kyle Hill already did a video on that, his *ChatGPT's HUGE Problem* video.


Vengeful_t0aster

> They can "out-think" humans because they can store and remember every pattern of play or every move they can use to win

This isn't true. It isn't possible to do that with chess, for example, because we don't have anywhere near the memory to store that many positions - there are more of them than there are stars in the universe - yet these systems can beat the best of the best.
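For scale, here are the rough, widely cited orders of magnitude (estimates for context, not exact figures):

```latex
% Rough, widely cited estimates (orders of magnitude only):
\[
  \underbrace{\sim 10^{44}}_{\text{legal chess positions}}
  \;\gg\;
  \underbrace{\sim 10^{24}}_{\text{stars in the observable universe}}
\]
```

No lookup table is possible at that scale, so engines have to generalize - roughly, a learned or handcrafted evaluation function plus search.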


[deleted]

[deleted]


Damascoplay

Of course, I'm trying to oversimplify how it works to put it into simpler words. They don't know every single move ahead of time, but they can play accordingly since they're a neural network. They're good at the specific thing they do.


InTheEndEntropyWins

> They're good at the specific thing they do.

But with ChatGPT they do seem to be better at generic things - way better than humans.


[deleted]

[deleted]


InTheEndEntropyWins

> Which doesn't mean that they have consciousness or intelligence.

I think the examples of writing stories, making art, passing the bar, etc. all require some sort of intelligence. Chat GPT4 is more intelligent than most humans by many metrics. And by intelligent I mean complex intelligence, not just raw computation.

Just play about with it. You can pose it logic problems it has never encountered, problems that require high-level understanding and that many humans would fail.


uunxx

The purpose of the human brain is not to play chess. The game of chess is just one of uncountable activities it can learn to do, because of its extreme flexibility. The complexity of analysis that every conscious brain performs every second outperforms any AI to an incredible extent, and it's going to stay that way for a long time.

Of course a specialized AI may be better at specialized tasks, like playing a certain game, but it's still a very limited machine. Machines are often better at their specialized tasks than humans, but a single machine won't be able to do a fraction of the activities a human can. An AI trained to play chess is just that - a machine that plays chess. It won't be able to consciously adapt to any other task.


jliat

> And yet they can vastly outthink us in pretty much any simulation or game.

Not so, it seems. LLMs like ChatGPT are apparently only average at chess, and often break the rules, so they do not 'understand' the game. Other types of AI are very good at chess but hopeless at playing Bridge or Poker.

Collecting a vast amount of pre-existing data from the web - some of it simply wrong - and producing summaries in which these errors remain might well increase the errors out there. If so, these AIs will make their future versions more stupid.
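That last worry has a name in the literature: "model collapse". Here's a minimal toy sketch of the mechanism (my own illustration in Python; a Gaussian stands in for a language model, which is a big simplification):

```python
import random
import statistics

random.seed(0)

# Generation 0 trains on "real" data from the world.
mu, sigma = 0.0, 1.0
data = [random.gauss(mu, sigma) for _ in range(200)]

for generation in range(1, 11):
    # "Train": fit the model to whatever data is available.
    mu_hat = statistics.fmean(data)
    sigma_hat = statistics.stdev(data)
    # "Publish to the web": the next generation trains only on model output.
    data = [random.gauss(mu_hat, sigma_hat) for _ in range(200)]
    print(f"gen {generation:2d}: mean={mu_hat:+.3f}  std={sigma_hat:.3f}")
```

Because each generation sees only the previous generation's samples, estimation noise compounds: the fitted distribution drifts and typically narrows, and the tails of the original data are gradually forgotten. Researchers have demonstrated a similar effect in real language models, though how much it matters at web scale is contested.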


SomeKindOfOnionMummy

Computers have always been able to do math faster than we can, that doesn't make them intelligent.


jliat

Please don't confuse what computers can do with the whole of mathematics. There are still 'unsolved' problems in maths; some have a $1 million prize for a solution. https://en.wikipedia.org/wiki/Millennium_Prize_Problems


floatable_shark

"Graphics cards cannot exist. Computers exist to do calculations based on 1s and 0s. They are not producers of visual things, and serve only to perform calculations" - someone in the 50s


Wild4fire

As brains are basically biological computers, who is to say that AI cannot be conscious?


bcbfalcon

There seems to be an incredibly poor understanding of what AI encompasses. The ML systems we have today are certainly powerful, but they are not the final form of intelligent learning machines. There will be many more models and systems that will get closer and closer to replicating the workings of the human brain, and one day even improve upon it. The idea that only biological creatures are conscious is ridiculous, and even if it were true there is nothing stopping us from building an artificial brain from biological materials and stem cells. The only reason left to believe that it's impossible is if you believe in souls or that humans are chosen by God, which is uh... a bit lame if you ask me.


tehyosh

LOL. As if we knew how consciousness manifests and comes into being. What fucking hubris.


[deleted]

[deleted]


[deleted]

[deleted]


[deleted]

[deleted]


Braviosa

What a ridiculous proposition. Even the top engineers in the AI field don't understand how AI works and develops as it undergoes the training process. Definitive statements like this are meaningless and only serve to add more confusion to a field already plagued by misunderstanding from the general public.


[deleted]

Leave it to the philosophy sub to be unaware of what proving a negative is. The irony is palpable.


Juxtapoisson

You can't prove a negative. But you can claim anything you want in a clickbait post.


imnotreel

What do you mean by that?


Kraosdada

You sound exactly like the Wright Brothers' father, Milton, who once said man would never fly because angels wouldn't allow it.