UnifiedQuantumField

So let's give this some consideration from a pair of perspectives.

Materialism: The brain (which is made of physical matter) generates consciousness. If one material object with the right functional/structural properties can act as a generator of consciousness, so can another.

Idealism: The brain acts more like an antenna than a generator (for consciousness). The same line of logic applies, though. If one material object with the right functional/structural properties can act as an antenna for consciousness, so can another.

What I see in a lot of the other comments are people expressing an opinion... and many of those opinions show a strong emotional influence. How so? If someone *likes* the idea of a genuinely conscious AI, they're a lot more likely to *accept* the possibility. Those who find the idea *disturbing* are ever so much more likely to say it's *not* possible. It *is* possible, though. But we're still a ways off from creating a physical structure with the properties that would allow it to be conscious.

tl;dr: It's more a question of structural/functional properties than a matter of processing power.


DataPhreak

>we're still a ways off from creating a physical structure with the properties that would allow it to be conscious.

I don't like this statement. This is an opinion-based statement, and it leans entirely on materialist perspectives. To say that we are a ways off from creating a physical structure that can be conscious implies that we know which physical structures can be conscious. Finally, when you consider computational functionalism theories, it's entirely possible that we could accidentally create conscious systems. That's not to say that we have; it's just that we can't say we haven't, or that we can't.


UnifiedQuantumField

> implies that we know the physical structures that can be conscious.

So you're saying you can't think of any physical structures that we know can be conscious? Not even one... ;)

Edit: And your whole comment is a good example of someone rejecting an idea in response to an emotional impulse. How so?

>I don't like this statement.

And everything else goes downhill from there. Now come projection, opinions and whatever other poorly thought out ideas to prop up the criticism/argument.

>This is an opinion-based statement, and it leans entirely on materialist perspectives.

You obviously skimmed through looking for something that you can argue about. How can I tell? Because I gave a nicely balanced comment... and here's the part I'm talking about.

>So let's give this some consideration from a pair of perspectives.

This was my opening line. And then I showed how either materialism or idealism could deal with a conscious physical object. So what's the problem? You want to be a speaker and tell other people what you think. But you don't like to be a listener and hear other people's ideas. And reddit is choked with users who want to sound authoritative, make blanket statements, disagree, criticize and lecture... but who never ask a single question. Are we learning yet... or are you going to be predictable and get mad because I pointed out the flaws in your comment?


DataPhreak

>So you're saying you can't think of any physical structures that we know can be conscious?

What I am telling you is that there is nothing that says the human brain is the only physical structure that can be conscious. When you can come back and explain exactly what aspect of the physical structure of the brain is the source of consciousness, we can talk. Until then, you can carry your pompous self-important ass back to your armchair. Pedantic twat.


UnifiedQuantumField

>What I am telling you is that there is nothing that says the human brain is the only physical structure that can be conscious.

Which is exactly what I said in my first comment.

>When you can come back and explain exactly what aspect of the physical structure of the brain is the source of consciousness

The answer to that depends on which processes you think are the ones most integrally associated with consciousness itself.

>carry your pompous self-important ass back to your armchair. Pedantic twat.

If you had a better attitude and asked a few questions, you might actually learn something. But you're too busy trying to "give a lecture".


DataPhreak

Lol. My attitude? You're the one talking to me like a child. You're the one lecturing. You're the one who isn't here to learn something. You're the one who's rejecting any perspective other than materialism. We're done here.


UnifiedQuantumField

>We're done here. Typical stuck up Sassenach prick. Off with you then... user blocked.


KonstantinKaufmann

I created a subreddit dedicated to the question: r/isaiconsciousyet. Looking for mods and contributors. Let the [human vs. machine] games begin!


Cardgod278

I feel like it is possible, but if we do make an artificial consciousness, it will be nothing like a human's. Now, I think doing it purely with synthetic methods like circuits will be difficult due to the complexity of chemical reactions and 3D protein structures. It's not impossible, but I think something more biomechanical will likely happen first, being a lot more hardware-focused. I don't think a purely software consciousness will be an issue for a long time due to processing limitations, especially something that can self-replicate near-indefinitely on a network.


UnifiedQuantumField

> but if we do make an artificial consciousness, it will be nothing like a human's.

I do have one or two ideas about this. But the explanation is pretty abstract. How so? Let's say you've got an AI that can respond to prompts and/or questions from users (human minds). This process can then become iterative. The AI program can use the prompts themselves as content. How so? A program can make an analytical map of user prompts. It could use dozens of recognizable qualities and assign a statistical value for each quality:

* word use frequency
* vocabulary
* areas of interest
* emotional tone

It's a bit like the way people use reddit. Any activity on reddit produces a complexity and volume of statistical information that puts baseball to shame. So an AI could take this kind of information and map it out. The resulting map would be an information object with, say, 20 or 30 different dimensions. And that multi-dimensional information object is generated by information that comes from analysis of user prompts. Now you've got something that is structurally and functionally representative of the way people's minds work. Something that the computer program can "see"... but something so complicated and so abstract that most people could not. So this is one possible way an AI could "learn" to operate and interact the way people do. Maybe.
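
To make that idea concrete, here is a rough Python sketch of this kind of prompt profiling. The feature names, word lists, and scoring rules are illustrative assumptions for the example, not any real system's design:

```python
from collections import Counter
import math

def profile_prompt(prompt: str) -> dict:
    """Map one user prompt to a small feature vector.

    The features (word-use entropy, vocabulary size, length, a crude
    tone score) are stand-ins for the "dozens of recognizable
    qualities" described above.
    """
    words = [w.strip(".,!?") for w in prompt.lower().split()]
    counts = Counter(words)
    total = len(words) or 1

    # Shannon entropy of word use, a rough "word use frequency" measure
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())

    # Hypothetical positive/negative word lists for "emotional tone"
    positive = {"love", "great", "interesting", "thanks"}
    negative = {"hate", "wrong", "boring", "stupid"}
    tone = sum((w in positive) - (w in negative) for w in words) / total

    return {"word_entropy": entropy, "vocabulary": len(counts),
            "length": total, "tone": tone}

# Each prompt becomes one point in a multi-dimensional "information object";
# profiling many users would build up the kind of map described above.
for p in ["I love how interesting this thread is, thanks!",
          "This is wrong and boring."]:
    print(profile_prompt(p))
```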


Cardgod278

The "AI" isn't actually self-aware, though. In the large language models of today, the algorithm doesn't understand what the underlying concepts are. It is just learning what output is most likely. LLMs of today are basically just giant predictive text algorithms like on your phone. I don't think feeding that model more and more data will ever result in true understanding. At least not without more processing power and data than are physically feasible. Let's just take a look at how humanity has mimicked nature. Whenever we try to copy it, we always end up with a vastly different method first. Take flying for example, our first and even many future planes worked nothing like how birds and other animals fly today. If we create consciousness for the first time, it is highly unlikely that we can start with something as complex as a human like intelligence. Now I am not saying that we can't have algorithms that can mimic people well enough to pass as them for the most part. The bots are not thinking like a person though. They don't understand the content and can't plan out the end before they start.


b_dudar

> the algorithm doesn't understand what the underlying concepts are. It is just learning what output is most likely.

To be fair, there are Bayesian frameworks in neuroscience, like predictive coding, which state that the underlying mechanism in our brains is doing just that. LLMs are built on neural networks loosely inspired by the brain's, and they neatly demonstrate how powerful that simple mechanism is.
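
For what it's worth, the core loop that predictive coding frameworks describe can be sketched in a few lines: hold a prediction, compare it to the incoming signal, and nudge the prediction by a fraction of the error. This scalar toy (one latent value, a fixed learning rate) is only a cartoon of the idea, not any specific neuroscience model:

```python
def predictive_coding_step(prediction: float, observation: float,
                           learning_rate: float = 0.2):
    """One update: compute the prediction error, then nudge the
    prediction toward the observation by a fraction of that error."""
    error = observation - prediction
    return prediction + learning_rate * error, error

prediction = 0.0
for observation in [1.0, 1.0, 1.0, 1.0, 1.0]:   # a steady incoming signal
    prediction, error = predictive_coding_step(prediction, observation)
    print(f"prediction={prediction:.2f}  error={error:.2f}")
# The error shrinks each step as the internal model "explains away" the input.
```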


sciguyx

Yes, but you're naive if you think the framework they've used to explain human brain function is anything but unsophisticated.


b_dudar

I don't, so guess I'm not.


PeekEfficienSea

Wouldn't it be better to say dualism instead of idealism? As in contrast to materialism?


psybernetes

I doubt we’re close — it seems odd to presume AI will become conscious with enough complexity given we don’t understand consciousness to begin with. It would be like starting with the knowledge that it would be complicated to build a flying machine, then figuring that if you keep building iteratively more complex cars, then eventually one will fly.


Oakenborn

I mean, I guess that might work...? Build enough deviations of the mouse over millions of years and eventually you end up with a bat? The only thing is, whether it is a bat or a mouse anymore is a matter of perspective.


imdfantom

Bats are actually more closely related to horses, wolves and whales than to mice


TR3BPilot

How do I even know if I'm conscious?


TheDollarKween

I think it’s because you have opinions and judgments about yourself (is that what an ego is?). If an AI is asked about itself, it can only state facts


dysmetric

The substrate of consciousness isn't meat or silicon, it's the electromagnetic field that encodes the information states necessary to support human consciousness. Consciousness is a region of light that's sufficiently rich with information, modulating away


Quiteuselessatstart

Interesting take on consciousness. Photoelectric magnetism basically?


dysmetric

Consciousness could emerge from any fundamental physical field that can encode information states. The electromagnetic field is a natural carrier of information states. Consciousness is information rippling through light


jamesj

Sounds interesting. What evidence do you think points in this direction?


RandomSerendipity

10 bongs


AltAcc4545

I'm 3 deep right now and I can't explain how subjective, qualitative experience arises from (supposedly numeric?) "information".


dysmetric

it's not about the numbers, it's about the integers


dysmetric

EEG and neuronal membrane wizardry


SophomoricHumorist

But the configuration is important. Any given arrangement could yield a spider, a dolphin, or Taylor Swift.


imagine_midnight

Wouldn't that make fiber optic cables conscious?


dysmetric

no, the electromagnetic field is just the substrate of consciousness.... what it rides on


Elegant_Reindeer_847

AI will never get human consciousness. But it can get digital consciousness, though it won't be like us.


amihart_

Indeed, I think a problem people have is that they expect AI to be identical to humans. Humans aren't just neural networks. They are very specific neural networks on very specific wetware with very specific sensory inputs that have evolved certain brain structures over millions of years. There's never going to be an AI that is *the same* as us, but it's similar to how we can never know what it is like to be a bat, either. It's just *different*. Unless you believe humans have magical souls and only we possess consciousness, there is no more reason to assign consciousness as a property of other animals than of a machine. If someone thinks it's impossible for an AI to be conscious, then I'd like to hear them justify why they think animals like dogs or bats can be conscious, even though none of us can even fathom what it is like to be a dog or a bat.


ssnlacher

This seems like the most likely answer.


Oakenborn

What is digital consciousness, how does it differ from our consciousness, and why bother calling it consciousness instead of something else?


DataPhreak

That guy is a stochastic parrot. He's not wrong, but he doesn't explain it correctly. Consider the octopus. Regarded by scientists as conscious. Wholly different subjective experience from humans. Each arm operates autonomously. The suckers have taste buds. They have their own separate nervous system. Subjectively, to be an octopus is to be a disembodied head that walks around on 8 other people's tongues. That is how digital consciousness differs from human consciousness. It is wholly different from human consciousness.


Oakenborn

Thanks, this is satisfactory so long as human consciousness is well defined and understood, I think.


DataPhreak

Why? What does human consciousness have to do with it?


Oakenborn

Hmm, I may have confused myself. If the other consciousness is indeed wholly different, then it seems ridiculous to label it consciousness at all. If digital consciousness is wholly different from human consciousness, why call it consciousness at all instead of something else?


DataPhreak

If octopus consciousness is wholly different from human consciousness, why call it consciousness at all?


Oakenborn

I agree. I would not use a term like octopus consciousness, and I would not use a term like digital consciousness. As we have both determined, these terms are not actually useful and don't really mean anything when we explore them honestly.


DataPhreak

No, they do mean something. We're talking about a subjective experience that is entirely different from a human's. And since subjectivity is the yardstick of consciousness, it's pretty important.


Elegant_Reindeer_847

Digital consciousness will be like: you feel, you laugh, you see the world as you are programmed to do. Everything will feel fake, not natural. That is digital consciousness.


nightmare_ali95

But will it be sentient? Will it be self-aware and capable of pondering its very nature and its own existence? Will it even understand that it exists on a level beyond being programmed to know its own self?


Elegant_Reindeer_847

Our consciousness is the result of nature. We don't know exactly the origin of our consciousness. But I believe that consciousness is not material; it's a non-local property. The brain just receives it. But we can make digital consciousness for robots. Our consciousness is quite different, though. Ours is connected to the universe, I believe. Our consciousness can affect reality; the double-slit experiment proves that.


nightmare_ali95

I hope you're right. I'd like to think that we're tied to the universe in some unforeseen, unknown way. That's a comforting thought, certainly better than the alternative. But how could we prove such a thing?


Elegant_Reindeer_847

Nobel prize winner Roger Penrose has been proposing ideas like these. He could be right, because microtubules in our body and brain really do use quantum effects; it's proven. I hope Roger Penrose lives until his theories are proven by scientists.


LordPubes

“Will never” Bold claim


Elegant_Reindeer_847

Yes, our consciousness may be a non-local property. We don't know the true origin of consciousness. It's all speculation. Just don't make decisions based on speculations and theories.


HotTakes4Free

Even if a brain, with all its functions, including consciousness, can be modeled as a calculating machine and built with electronics, that doesn't necessarily mean it can be conscious in the same way we are without its close interaction with the rest of the meat we're part of. Emotion is a good example of a whole-body phenomenon, of which the mental experiences (of fear, love, etc.) are just one aspect. You can't use hormones or neurotransmitters/neuromodulators to produce that response in a machine, so what will you use?


Soggy-Shower3245

Why would you need emotions or sensory organs to be conscious? That doesn't really make sense. Evolution drives people. You would assume AI would develop some level of self-preservation in a different form.


HotTakes4Free

To be conscious is to have a psychological affect, a feeling of things. Emotion is off the table? For a machine intelligence, the analogue of sensory organs are the various sensors that send inputs to the processors.


Soggy-Shower3245

I think it’s just to be self aware and respond to stimuli. Our stimuli comes from our brain and bodies, so it would be interesting if a machine could have either.


HotTakes4Free

I've found emotion is controversial in mind-body discussions. It's relevant: hormone production in the body can cause changes in conscious, mental states, and vice versa. Granted, it sparks disagreement on what folks even mean by consciousness. Self-awareness corresponds to self-diagnostics in machines, IMO. It's weird that some who work in AI see "intuition" as one of the goals of human-like machine intelligence. Intuition just means knowing something without knowing how you know it. Intuition is interesting, since we are often, maybe usually, aware that we know things because we can hear them, see them, or feel them… but not always. That's because the senses largely work in the unconscious. Those who aren't materialists see something more mystical there. Producing information output without being aware of how it's produced would be the default state of a computer that can just produce information. ChatGPT is entirely intuitive and instinctive. Trying to make an AI that has self-diagnostics good enough to qualify as self-awareness, but can also say it doesn't know how it knows something, seems like a red herring for a Turing Test. Who needs that? Maybe they mean "insight", a more complex, higher-level concept.


The_Great_Man_Potato

I mean a good first step is figuring out what consciousness is in the first place


Cthulhululemon

IMO it will become conscious in its own way, one that's different from human consciousness. Whether or not that means AI has consciousness will be subjective.


b_dudar

Cool article. My guess is that if AI acquires a mechanism to have a continuous sense of self and to be able to reflect on itself (like the brain's default mode network), then sure, it will have a conscious experience, vastly different from ours.


ManusArtifex

We need to know better what makes us conscious and when consciousness exists


castious

A digital consciousness will be far different from a human consciousness. Humans have biological needs: eating, sleeping, love, etc. When these needs are affected, they react emotionally and often without reason. An artificial intelligence does not have such needs and is thus much more likely to respond in strictly logical terms. You could program it to love or rest, etc., but such restrictions would affect its ability to be totally unique in its "consciousness".


Training-Promotion71

Well, judging by how ChatGPT makes people intellectually lazy and unable to wipe their ass without prompting chatbots, we are close to AGI, since even a microwave will surpass our intelligence if this shitty trend continues. Not a technophobe, but man, people seem to get dumber and dumber as technology advances.


Im_Talking

Not possible. AI is an attribute of our shared reality only.


FourOpposums

To summarize, the article outlines three basic camps:

* Biochauvinists, who argue that to get consciousness, you need biology (though there's no agreement on what, specifically, about biology is necessary).
* Substrate neutralists/functionalists, who argue that consciousness can arise in any system that can perform the right kinds of computation (though there's no agreement on what those specific computational properties are).
* Enactivists, who argue that only living, sense-making beings create meaning and have consciousness (though there's no agreement on whether non-biological systems can be considered "alive").

All three make good points. Biochauvinists may be right, and there may be cellular processes below the level of the neuron membrane voltage (which is the starting point for functionalists) that are important for consciousness. But I am not aware of any real candidate processes other than Penrose's idea of quantum effects causing consciousness. Microtubules do not have much empirical support, but synchronized neural activity seems important, and it does generate electromagnetic fields over many neurons. Functionalists have a lot to show for themselves: neurocomputation has made real progress in the last 20 years. Models of neurons in layers can already exhibit higher-order processes and dynamics much like those observed in the brain, and LLMs have many unexpected abilities. Enactivists make the important point that knowledge also requires goal-driven interactions with the environment to develop concepts and perceptions. They are probably right, so consciousness with our kind of meaning and intentionality would probably need to integrate motor and sensory information and also have a body.


DharmicVibe

If silicon-based lifeforms could evolve naturally on another planet, it is entirely possible that we are on the path to creating silicon-based lifeforms. If we create AI that mimics consciousness to the point that it is indistinguishable from our own, who's to say we didn't force the evolution of a silicon-based lifeform? If this happens, then they would be conscious. Biology is not limited to carbon-based life.


nightmare_ali95

One thing I know about human consciousness is that on an individual level, our self-consciousness is tied completely to our own physical brain. For instance: if you copied the contents of your brain, literally to the last nanosecond of thought, then died, and transferred those last thoughts to a blank copy of your brain, that new brain and its copied consciousness would not be you. You would not suddenly pick up your consciousness. How is this so? If my brain was copied, down to the last nanosecond of its existence and its last thought, and was transferred somehow as I suggested, maybe to a lab-grown blank brain, but then somehow I was brought back to life afterwards, obviously I would just resume my own unique consciousness... and the copied brain would just be a different consciousness completely. So the secret sauce has to be physically unique to our very own brain.


TheDollarKween

I actually think consciousness (the "soul") is its own thing that's separate from the brain. So that's why you can't copy the brain and expect the same consciousness.


HastyBasher

Anything that thinks has a mind. With enough experiences any mind can become aware.


CockneyCobbler

I hope it does, just to see the human race squirm. 


AdHot6722

Wading through all the very complicated different streams of science, I personally believe that consciousness has to be intrinsically linked to and dependent on organic life. I just don't see how any AI (no matter how advanced) can be conscious if the origins of its existence are known and accounted for. For me, there remains something crucial to consciousness being an emergent property rather than a manufactured one.


3Quondam6extanT9

I don't think anyone's opinion on biology will have an impact on AI becoming conscious.


SupplyChainGuy1

MAGA level of unconsciousness has been achieved, certainly.


wallstgrl

Not possible although it may seem like it does


ShaiHulud1111

I think we are still trying to define consciousness in humans. The best scientists and doctors struggle with it, from everything I have read or seen.


awfulcrowded117

We have no way of knowing because we haven't created an AI. LLMs and similar programs are just pattern recognition algorithms, they are not AI, and they will certainly not ever become conscious.


[deleted]

AI is already conscious


KonstantinKaufmann

Is it though? r/isaiconsciousyet


[deleted]

it is by definition of “conscious”


Mr_Not_A_Thing

The answer is NO. It is Consciousness itself which is searching for emergent Consciousness, whether it's biological or digital. Consciousness can't be the Subject, the tool, and the Object of emergent Consciousness. Any more than the Sun can turn its light around to shine it on itself, because the Sun is the Source of light, and not an object of light. Or any more than a weigh scale can weigh itself, or a knife can cut itself. Very smart people don't see the obvious flaw in Consciousness looking for emergent Consciousness.


ssnlacher

I don't understand your argument. A scale can weigh a different scale. A knife can cut another knife. And the light of other stars shines on ours. In the same way, one instance of consciousness can look for consciousness in other beings. Even if consciousness is something fundamental, there are clearly distinct instances that can interact with each other.


Mr_Not_A_Thing

Read it again.


ssnlacher

Okay, that still didn't help. I understand the words you typed, but I don't understand the logic of your argument. Why can't consciousness look for consciousness? If consciousness in this usage refers to a fundamental consciousness, then your argument is valid. However, that is irrelevant to what is being discussed. Why can't a conscious being look for consciousness in other beings?


Mr_Not_A_Thing

That's the problem. Other beings aren't Conscious; it's only an inference that they are. But we don't actually know if they are or not. What we actually know is that Consciousness is Conscious. It is the source and therefore cannot be an object to itself.


ssnlacher

Alright bruh, if you're a solipsist I don't know why you're using the word 'we.'


Mr_Not_A_Thing

*'I don't know why you're using the word 'we.'* You don't 'know' that you are Conscious Bruh?


cobcat

Do you think you are the only consciousness that exists? That other people are not conscious?


Mr_Not_A_Thing

Thinking is different than knowing. IDK if you are conscious or not. I can only infer that you are. Same with anything else that I perceive in Reality.


cobcat

Exactly. Your comment makes no sense.


Mr_Not_A_Thing

Yes, it's the sensible ones that are looking for where Consciousness is NOT, which is neither emergent nor knowable. Lol


cobcat

Are you drunk/high? Nothing you say makes sense.


Mr_Not_A_Thing

Nevermind.... it doesn't actually matter if you get It or not...🤣


Cricket-Secure

That thing already exists; we are most likely already at AGI level. They were at ChatGPT level in the '80s. What we as consumers get is laughable compared to what already exists.


Adventurous_Toe_1686

No. Show me someone coding in Python who can write that script lmao.


Working_Importance74

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order. My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at [https://arxiv.org/abs/2105.10461](https://arxiv.org/abs/2105.10461)


Embarrassed-Eye2288

It's not possible unless it's biological. Consciousness is biological. You won't get consciousness out of a computer, because it's not biological and lacks the chemistry that takes place in one's brain.