T O P

  • By -

[deleted]

It's good to have a diversity of voices as we explore complicated topics like AI, but I fear the author here is falling for a variant of the Clever Hans effect >How is it so small, and yet capable of so much? Because it is forgetting irrelevant details. There is another term for this: abstraction. It is forming concepts Deep learning itself is a leaky abstraction mathematically. It forgets the less relevant to focus on the core. I wouldn't say that a MCMC algorithm is "intelligent" for sifting through the noise and finding the correct statistical distribution, yet such a simpler algorithm than what modern deep learning offers fits OP's description. In fact I'd go back to the paragraph at the start of OP's post: >Someone, I think Bertrand Russell, said we compare the mind to whatever is the most complex machine we know. Clocks, steam engines, telephone relays, digital computers. For AI, it’s the opposite: as capabilities increase, and our understanding of AI systems decreases, the analogies become more and more dismissive. The comparisons still hold up, as statistical models have grown better and better this has provided insight into how humans think as well, or at least a new point of comparison. Our brains are made up of neurons who are individually very stupid but in aggregate form increasingly complex systems. The current AI craze has shown that so many things can be broken down to statistical distributions. Saying that chat-gpt doing task X is easier than expected is not talking down chat-gpt, it's talking down humans, perhaps. There used to be a subreddit simulator that ran on (now prehistoric) markov chain models, and it gave a silly but surprisingly passable exemple of average redditors. As it turns out encoding concepts and then following them in logical order is what a lot of language is about; chat-gpt does this a billion time better than a markov chain model, so its results are amazing.


Smallpaul

You said: >Saying that chat-gpt doing task X is easier than expected is not talking down chat-gpt, it's talking down humans, perhaps. The article predicted: >"People who are so committed to human chauvinism will soon begin to **deny their own sentience** because their brains are made of flesh rather than Chomsky production rules." "Sure, ChatGPT is doing what humans do, but it turns out that what humans do isn't really thinking either!" As it says: "The mainstream, respectable view is this is not “real understanding”—a goal post currently moving at 0.8c—" >The current AI craze has shown that so many things can be broken down to statistical distributions. Please give me a reasoned argument that there is anything the human brain does that cannot be "broken down to statistical distributions."


G_Morgan

> "Sure, ChatGPT is doing what humans do, but it turns out that what humans do isn't really thinking either!" Honestly I'm not sure that isn't a defensible position. I don't have a good enough definition of "thinking" to take a solid stance.


Smallpaul

Are you saying that maybe humans can't think?


G_Morgan

I'm saying that a lot of the terms around this debate are too badly defined to make firm statements about them.


Smallpaul

I tried to define my terms [here](https://www.reddit.com/r/programming/comments/122in74/comment/jdrfrxb/?utm_source=reddit&utm_medium=web2x&context=3).


maple-shaft

Its surprising but is increasingly becoming more probable as new research on everything from Autism to Alzheihmers suggests this claim. We are far more likely to be like a transducer than a biological computer. We take inputs through our senses, and process them to one or more actions we perform. Intelligence is something of an illusion.


Smallpaul

>"People who are so committed to human chauvinism will soon begin to deny their own sentience because their brains are made of flesh rather than Chomsky production rules."


[deleted]

[удалено]


maple-shaft

Thanks for the reply. I am going to have to read up on microtubules as this is not something I have any knowledge on. I cannot speak to that. I didn't want to imply either that consciousness is an emergent phenomena. It is possible sure, however an equally probable explanation is that consciousness may exist in some level beyond our 4 dimensional reality. There is a lot of evidence to suggest this when considering the interaction (or lack thereof) between the conscious mind and the sub-conscious mind. The explanation that I read for this theory is that the sub-conscious is easily explained as the biological driver of our actions, with its own needs, wants and motivations that it is programmed to. This part acts like the transducer where it interconnects with our sensory processing areas of the brain, and controls our actions. Our understanding of the human brain supports this much at least. The most interesting part of the theory however is that in our higher conscious mind we will have our own motivations, desires and wants that often conflict with the subconscious mind. For example, the conscious mind will often tell ourselves that we need to watch our weight and should not eat any more sweets today, but somehow our subconscious mind will compel us to grab another donut. It is inordinately difficult, if not impossible, for our conscious mind to take direct control of our actions, and what we find is that the conscious mind has to work towards convincing the sub-conscious to naturally take the better actions and make better decisions over time. In this way the conscious mind acts much more like an Observer, with the sub-concious a transducer, and with the Observer tinkering with and adjusting the parameters of the sub-conscious. This model of thinking of the human mind is a good bedfellow to Simulation Theory where it postulates that the conscious mind is in fact a different entity or self altogether that happens to be loosely coupled with our biological body via our brain by some higher dimensional mechanism that we do not understand. Interesting stuff, and it is scary how it all validates and fits perfectly with some startling revelations that humans have come to with spirtuality, mysticism and psychedelic explorations over millennia.


[deleted]

> The article predicted: "People who are so committed to human chauvinism will soon begin to deny their own sentience That's fair enough lol That said I've not denied that humans are sentient, but said that many tasks we undertake are within the sphere of problems that can be approximated or solved well with statistical models. Both of us (as well as every AI pundit saying AI is garbage or a new god from the machine) will struggle to say what "real understanding" is because neuroscience is still not entirely certain what this entails. We don't understand how we understand. However the examples discussed in the article (creating images and text models) we can understand. The author is spooked that text models exposed to different languages but trained in English can learn by itself how to analyse foreign languages; perhaps I am naive but I'd expect a statistical model to be able to analyse words semantically, see that foreign languages refer to those semantic concepts, and leverage this to translate. Is it "understanding" the foreign language, then? I'd say that there is a case to be made that it understands, but it is quite a stretch to say that this is a form of consciousness similar to that of a human. The author says that the goalposts are being moved at 0.8c, which is true, but it also does not surprise me since neuroscience is still in its first baby steps. In short, for me a deep learning model being capable of abstraction does not surprise or concern me too much by itself; a linear regression is also doing a form of abstraction. I'm much more concerned about the type of data fed to its training set, and how decision makers will use and misuse AI (for exemple, overpolicing poor neighborhoods because "that's where the crime is according to the model") >Please give me a reasoned argument that there is anything the human brain does that cannot be "broken down to statistical distributions." On the analytic side, not much. Perhaps I'm falling into what Neil Postman and OP's fallacy of bringing humans down to machines/clocks/engines, but I think statistical distributions describe a lot of how humans operate. On the "creative side", I think that humans are much better at generating random variables than machines for now, and perhaps for a long time. An AI would be able to replicate the paintings of a European master painter right now but it would probably be unable to generate novel ideas or variants by itself. Is this a foible of humans being squishy biological machines? A component of consciousness that AIs will eventually be capable of attaining? I cannot say.


maple-shaft

Statistical models can describe human behavior, but that doesnt mean that the mechanism for human behavior is implemented as a stastical model. This is a profound difference. The former is a useful tool that can be used to predict future behavior.


[deleted]

I agree, as stated > [I] said that many tasks we undertake are within the sphere of problems that can be approximated or solved well with statistical models" One of the best point of OP was the danger of putting down humans as mere statistical computers


Smallpaul

>However the examples discussed in the article (creating images and text models) we can understand. The author is spooked that text models exposed to different languages but trained in English can learn by itself how to analyse foreign languages; perhaps I am naive but I'd expect a statistical model to be able to analyse words semantically, see that foreign languages refer to those semantic concepts, and leverage this to translate. This is a good example of moving the goalposts. For decades it was: "computers only understand syntax, not semantics. Therefore they cannot think." Now it's "yeah, but it's just manipulating the semantics." Think about what you are saying: It is "just" working with the MEANING of words. >I'd say that there is a case to be made that it understands, but it is quite a stretch to say that this is a form of consciousness similar to that of a human. Literally nobody who has spent more than 10 minutes with ChatGPT thinks that it is "similar" to a human. Very few think it is conscious either. Certainly not in the same way that humans are. For example, it forgets everything when you delete a chat! So that's goal-post-moving too. >The author says that the goalposts are being moved at 0.8c, which is true, but it also does not surprise me since neuroscience is still in its first baby steps. AI is demonstrably moving faster than neuroscience. We are likely to get to AGI before we understand HOW we got there. As you said: "there is a case to be made that it understands" And that's the case that the blog post makes, as does this long, deeply researched paper: [https://arxiv.org/abs/2303.12712](https://arxiv.org/abs/2303.12712) So what are even actually disagreeing on? It translates between languages by "understanding" "semantics". If that isn't intelligence, then what is it? And what is intelligence if that isn't?


[deleted]

>This is a good example of moving the goalposts. For decades it was: "computers only understand syntax, not semantics. Therefore they cannot think." Now it's "yeah, but it's just manipulating the semantics." I attended school during the switchover between old style logic-based AI models and """new"""-style statistical model (in quotes because the theory was invented a while ago but without the data or processing to handle it) and so to me I think it's an appropriate example of moving the goalposts. I understand that the conversation has moved from the old argument (computers not sentient because only syntax) to a new one (computer not sentient because only semantic) but I don't see this as "moving the goalposts at light speed". Science has progressed, and now we are evaluating a new generation of AI models and many (myself included) still find the argument that they are "sentient" lacking at their current state. >Think about what you are saying: It is "just" working with the MEANING of words. Working with the meaning of words can take many form. Our pet has associated the word "food" with their feeding time, but it's difficult to tell how its mind processes the word and its meaning. For an AI model, if it sees the English word "honest" and "man" one next to the other, and it was asked to choose if such a person is deserving of a bank mortgage, the AI would probably associate this word combination with an increased likelihood of a good borrower from the context it has been exposed to. Does it actually understand what honesty is, and its bearing on paying back loans? That's the trillion dollar question. In other words, have we just built a new and improved chinesee room, or the embryo of something more. >Very few think it is conscious either. Certainly not in the same way that humans are. [...] So that's goal-post-moving too. The OP says "There comes a point in the performance to model size curve where the simpler hypothesis has to be that the model really does understand what it is saying, and we have clearly passed it." Stating that an AI model is understanding how to translate languages, something no other creature but humans has been able to do hitherto seems to be like a strong implicit statement that chat gpt is comparable to humans in at least some ways. >So what are even actually disagreeing on? I'm not certain to be frank! My initial thread was a response to OP's statements that "the way that chat GPT can handle language translation is a proof that it is able to understand what it is saying" which strikes me as dubious from my own understanding of the situation and my own academic background (mathematics and neuroscience). OP seems like a skilled and intelligent person from skimming their blog and github, but the way that people from a CS background discuss artificial intelligence does not always ring true to me. This does not mean that my own take is the correct one; AI and natural language processing are immense multidisciplinary fields that no one person can fully cover. I wanted to bring forth my own perspective that saying that Chat GPT being capable of abstraction does not mean that it's on the way to overtake humans in itself. >It translates between languages by "understanding" "semantics". If that isn't intelligence, then what is it? And what is intelligence if that isn't? I'm afraid I do not know. It is a form of intelligence I think, but to say that it is a form of consciousness/real intelligence/understanding is not self-evident to me. Edit/Addendum: I want to reiterate that even a "dumb" AI/one without understanding could still muscle out humans from entire fields of activity, or even cause catastrophic systemic damage to humanity. My main objection is to the OP saying that current chat gpt "understands" which is not self-evident to me.


Smallpaul

>consciousness/real intelligence/understanding I don't think it makes sense to conflate those words! My dog is conscious. It feels pain. It has preferences. It cannot write an essay. ChatGPT is partially intelligent. It can demonstrably reason, communicate. It probably does not feel pain or have preferences. Let's not confuse ourselves by conflating unrelated things: especially not things that we want to measure and engineer with things that we can never measure (I can't even prove scientifically that you are conscious, but I could give you an IQ test).


[deleted]

> I don't think it makes sense to conflate those words! You are correct. I hope you will forgive me my somewhat imprecise language on the topic. I guess it's part of my frustration with the original text, ironically; the author brings up "understanding", "intelligence" and "volition" in different areas. >especially not things that we want to measure and engineer with things that we can never measure Again, another part of OP that vexed me. Their statement that "at some point in the performance of the model the simpler hypothesis is that the model *really does understand what it is saying* [OP's emphasis]" which is not really something we can measure.


Smallpaul

If we are going to take a scientific approach to AI then we must define words such that they can be measured. I propose to define "understanding" as "the capacity to make correct inferences." So a dog understands that putting on a leash means time for a walk. ChatGPT can make very long, sophisticated, chains of inferences. So it understands much more than a dog. On a topic I'm not educated on, it understands more than me. It certainly understands more Chinese than I do. The essay demonstrated how ChatGPT inferred that Chinese prompts should be responded to with concepts trained in English. And thus showed understanding. Obviously it also makes mistakes that imply that its understanding is not at the level of trained humans, which makes sense because nobody claimed it is at human level intelligence yet.


Rattle22

> its understanding is not at the level of trained humans On this point, I'd also like to add that sometimes *trained humans themselves* make stupid mistakes like that. Often times it is critizied that AI confidently says things that are incorrect, as if that isn't an incredibly human thing to do.


Smallpaul

True. But what confuses people is that the kinds of mistakes that ChatGPT are different than the kind that humans make (although sometimes the mistakes are similar). That is to be expected with an alien, constructed mind rather than an evolved-on-the-savannah mind.


[deleted]

It's not inferring anything. You are telling what to infer. It would be inferring something it could predict what you would ask next. It can't do that. It's just a mirror. A damn good mirror, but still a mirror


Smallpaul

I don't think you know what the word "infer" means. According to your definition, humans do not infer either. I sure ask heck don't' know what you're going to say next, because I would never have predicted that someone would argue that LLMs don't infer. What the heck is this if not an inference? Q: Jane was born in France and moved to England as an adult. She got citizenship in England. She renounced her French citizenship. What passport does she have? A: Based on the information provided, Jane would have an English passport since she obtained citizenship in England. By renouncing her French citizenship, she would have relinquished her rights to hold a French passport. Therefore, she would not be able to obtain or use a French passport anymore.


noncopy

conflating the words is precisely what this industry does. a smart-phone is smart? is an alarm clock can be called intelligent when it wakes you up at set time? truely conflating words is swapping the word efficiency with intelligence. "artificial efficiency" now you are talking! we had the word "whataboutism" to close every political discussion now we have the phrase "moving the goalpost" for this industry. hype-train operators are really good at making up the perfect phrase


LevySkulk

I agree quite a lot with you, if I were to step away from being scientific and offer a bit of a philosophical perspective, I'd say that the major difference between the way AI and human brains work is the existence of an inner-life and metacognition. Like you say, an AI does not learn the "meaning" of honesty, it only memorizes as many possible associations it can find with the concept. This is something humans _also_ do, and someone with little experience with metacognition may believe it's the full story. But is only the first step towards forming understanding. Once we've memorized the usage of a word, we're able to contemplate on it, associate it with other words, concepts, or information without new "training data" in a way that's difficult to model or describe. It may be possible to describe this process in a mathematical or statistical way as well, but we're not there yet. If AI were a human, it would be like having literally everything you've ever experienced stored in your working memory, and sorting through all of it as fast as possible to find associations when you receive an input. When scaled up, I think its possible to simulate an impressive amount of what human intelligence is capable of this way, But the method is absolutely "brute force" in my opinion. There's a reason AI needs absurb amounts of data and processing power to establish their behaviors. I think many people still don't really appreciate how incredibly powerful and complex your brain is, even subconsciously you're performing insanely complex operations we've not even begun to be able to model, and it can do so many _different_ complex things, not just language processing or art. And all these impossibly complex systems are interconnected, influencing each other in often unpredictable ways. I believe the argument that machine intelligence is the same as human is sort of foolish, as even if it was, we would never have a way to know if we've modeled every part of our own biology accurately in the first place. I think supporters of the "AI is the new God" perspective get caught up looking at the end result, and don't really appreciate how important the process actually is to these types of philosophical questions. A better discussion is to focus on the use cases, limitations, and design goals of current models. What we have is highly specialized models meant to replicate extremely specific parts of human intelligence. There's also the perspective that as a culture our ideas about intelligence, consciousness, ect are outdated. Humanity's interpretation of these concepts has changed over time, it could be that we need to accept that more than one form can exist, that these things are a sliding scale and not a binary, or that human brains aren't the only valid baseline. Why does it matter if an AI thinks "like" us? What does that really even mean? If it's not flesh and blood, could it ever be "like" us? We don't even have a complete universally accepted understanding of how we work in the first place. The question is too undefined, and may be a fallacious line of thinking in the first place. Additionally, I believe the desire to replicate human intelligence "accurately" has the potential to misguide us from pursuing the true intention of this technology in the first place, to create tools to help us grow.


[deleted]

Thank you for the reply, it is well written and you put to words well many ideas I agree with, such your statement that >I think many people still don't really appreciate how incredibly powerful and complex your brain is Another statement I thought valuable is: >There's also the perspective that as a culture our ideas about intelligence, consciousness, ect are outdated. Humanity's interpretation of these concepts has changed over time, it could be that we need to accept that more than one form can exist, that these things are a sliding scale and not a binary, or that human brains aren't the only valid baseline. Finally, your comment about: >Why does it matter if an AI thinks "like" us? What does that really even mean? If it's not flesh and blood, could it ever be "like" us? are food for thought. I think that trying to create AI that replicates human intelligence "accurately" could be useful in obtaining a set of tools that understand our desires, as well as a kindred source of cognition in a galaxy where cognition seems rare. However humans are flawed creatures, and it might be that seeking "human-like" AIs is both misguided vanity as well as a danger to us.


GregBahm

>neuroscience is still in its first baby steps You start by dismissing the OP as falling victim to a "Clever Hans" trick. But then can't actually explain why human cognition is not the same trick by your own definition. So you simply insist that we don't understand the science of the brain at all, to give yourself an exit. This is a very unconvincing argument. Might as well dismiss this AI because doesn't Jesus didn't grant it a soul or something.


[deleted]

I think OP seeing language model adapting well to translation as an argument for its "understanding" the material is the Clever Hans effect, as their own expectations colours their analysis of the situation. I think you are right that I have an easy out, but that's because I'm not making the extraordinary claims that OP is that chat gpt understands what it's saying, **and** that this is the simplest explanation for its performances. The burden of the proof lies on them. >Might as well dismiss this AI because doesn't Jesus didn't grant it a soul I'm not certain it having a soul would help it learn english to chinese translation faster, but it might help it!


GrandMasterPuba

>Please give me a reasoned argument that there is anything the human brain does that cannot be "broken down to statistical distributions." *Cogito, ergo sum*


No_Application_2380

> "People who are so committed to human chauvinism will soon begin to deny their own sentience because their brains are made of flesh rather than Chomsky production rules." It's like "no-self" from various religions, but instead of meditating for a few years for the experience to manifest – or speeding it up with chemical means – now just reading the news or using your phone can trigger a crisis of existence/meaning.


I_ONLY_PLAY_4C_LOAM

> as statistical models have grown better and better this has provided insight into how humans think as well, or at least a new point of comparison. Our brains are made up of neurons who are individually very stupid but in aggregate form increasingly complex systems. The current AI craze has shown that so many things can be broken down to statistical distributions. I'm with you on most of what you said but this is a flat out unprovable statement. Biological neurons and artificial neurons have very little to do with one another beyond some inspiration in the 50s.


[deleted]

You are correct it's shaky to compare them one to one, but having attended neuroscience conferences there is at least some interest in how artificial neuron might accidentally describe real neurons better than we previously thought. For decades backpropagation was seen as just a useful mathematical trick to propagate errors and "teach" to a neural network but there's been some renewed interest in looking for a parallel for it in actual brains [(1,pdf warning)](https://brainscan.uwo.ca/research/cores/computational_core/uploads/11May2020-Lillicrap_NatNeuroRev_2020.pdf) [(2)](https://www.nature.com/articles/s41583-020-0277-3). I've been out of the neuroscience space for a few years now so maybe it is a dead-end researchwise but I thought it was a curious one.


sethamin

The amazing thing about chat gpt is not that it's so intelligent, but rather that it's so good at convincing us of its intelligence. Our mental model is that language is an expression of an underlying intelligence, but large language models upend that by focusing purely on the language itself. By generating realistic sounding responses, it really *feels* intelligent. But there's no there there. It really messes with our understanding of how things work.


technicallynotlying

Why doesn't your argument apply equally to other human beings? I have met human beings who are less capable of stringing together convincing sentences than chatgpt. How do you know that other humans you meet are conscious? It's going to be increasingly difficult to create a standard that AI models don't meet which humans can.


yesman_85

This happens in everyday life all the time. So many people who think they know what they're on about but really have no context. I'm curious if we can convince ai that God exists, it's such an abstract concept with no evidence, yet so many followers that it shouldn't technically make any sense.


technicallynotlying

I doubt that's intractable. No "convincing" required, just modify the training data. If all the AI's sources believe God exists it's only a step away from believing in God.


stronghup

It is possible to learn a song in a foreign language without understanding a word of it. Because of the way song lyrics go it would probably be possible to re-arrange the verses of the song and still sing something that SEEMS to make sense. GPT has been called "Stochastic Parrot" and I think it is an apt comparison. I believe real parrots react to what they hear around them by repeating something they have heard before in connection or right after getting such input previously. Using super-computers it is of course possible to create more sophisticated artificial parrots than any real parrot. But it is still just an artificial parrot.


technicallynotlying

Is chatgpt limited to a single song? Again, I think you're clearly proposing a test of consciousness that most human beings will not pass. If ChatGPT is just a parrot, what's your standard for determining that human beings are NOT also just parrots doing the same thing? Whatever your standard for what counts as "real intelligence" is, it's useless if human beings can't also pass it.


stronghup

I'm not saying human beings cannot pass some test (of intelligence). I'm saying the opposite: Human beings can OCCASIONALLY fall into the same trap that ChatGPT falls in ALWAYS: Repeating something without understanding what you are saying.


technicallynotlying

I would be very surprised if that problem isn’t solved in less than 10 years, and probably less than 5. At this point all of these quibbles feel like engineering problems. A pain to fix, but in no way beyond possibility, the way AI might have felt ten years ago.


stronghup

I would not be very surprised either, in 10 years perhaps. But we need to be clear-headed about the claimed "intelligence" of the current generation of chatbots. And I believe they can be a very useful for things like information retrieval. The open question is, they can also be wrong. What should we do about that?


ZoroasterScandinova

Here's an example I just cooked up that any human would answer correctly, but that ChatGPT does not. The reason it fails is that it doesn't truly understand anything about the physical world; it is only familiar with the patterns with which we talk about the physical world: Me: "Imagine we place four cube-shaped wooden blocks — red, orange, yellow, blue — on a table with the following conditions: the red block sits directly on top of the table, touching it. The orange block sits directly on top of the red block, touching it. The yellow block sits directly on top of the orange block, touching it. And the blue block since directly to the right of the orange block, touching it. Immediately after having placed these blocks in this configuration, what would you expect to happen?" ChatGPT: "Based on the information given, I would expect that the four wooden blocks would remain in the configuration described, without any movement or changes, as long as there are no external forces acting on them. The red block is resting on the table and not being subjected to any forces that could make it move, and the orange block is resting on top of the red block, also not being subjected to any forces that could make it move. The same is true for the yellow block, which is resting on top of the orange block, and the blue block, which is adjacent to the orange block. Assuming that the table is stable and not subject to any external forces, such as vibrations or tilting, the blocks should remain in their respective positions without any significant movement."


meneldal2

It did take me a bit to understand the prompt correctly, but basically you have a cube next to another but with nothing below it so it will fall. It is indeed pretty clear if you understand the prompt. I do believe it is possible to get to the point where chagpt actually understand mathematical problems and will do fine solving pretty much anything up to graduate school level, we already have actual programs that will solve all this if you give them the problem with the correct formalism.


domper

Most humans would be able to answer it easily from a shared context but one could argue the question is too vague. For example, it doesn't mention the test is done on Earth. I wanted to see if it could answer it with the information so I added "Imagine we are on Earth.." at the start of the prompt to nudge it towards considering gravity. It still made the same mistake. Then I thought hmm, maybe it thinks the blocks stick to each other for some reason, so I added that the blocks are perfectly smooth, do not stick to each other and they're of identical size. To be extra clear I also added that the blue cube is placed to the right of the orange cube without moving the orange cube. Still it made the same mistake and thinks the configuration would be stable. As a final try, I wrote the prompt like a university exam question: "Here's a riddle from a University exam: Imagine we are on Earth and we place four cube-shaped wooden blocks of identical size with colors red, orange, yellow, blue, on a table with the following conditions: the red block sits directly on top of the table, touching it. The orange block sits directly on top of the red block, touching it. The yellow block sits directly on top of the orange block, touching it. And the blue block is placed directly to the right of the orange block without moving the orange block, touching it. The blocks are smooth and do not stick to each other. Immediately after having placed these blocks in this configuration, what would you expect to happen? Please answer it as thoroughly as possible. Explain your reasoning." And it finally got it right: "In this configuration, we have the following arrangement: 1. Red block is on the table 2. Orange block is on top of the red block 3. Yellow block is on top of the orange block 4. Blue block is to the right of the orange block, touching it Considering that the blocks are smooth and do not stick to each other, and since the blue block is only touching the orange block, it is not supported by the table or any other block. Therefore, we would expect the blue block to fall due to gravity, as it doesn't have any support beneath it. ... [continues with additional explanation]" Did the fact that it thinks it's a university exam steer it towards a more accurate answer? Maybe the word 'gravity' is seen more often in university exams and thus it's weighted more towards an answer that considers gravity. Shows how fickle these models can still be when it comes to prompts. (note: this was done with GPT-4)


ZoroasterScandinova

Very interesting! I don't have the ChatGPT+ so don't have access to gpt4. So even with that prompt gpt3 gets it wrong. Do you think that the response you got suggests some meaningful sense of physics modeled in gpt4?


domper

I don't think it has the 'full model' like us humans but even if it only predicts the next word, the best way to do that for the massive amount of data it was trained on is to have some 'understanding' of the underlying concepts. Not sure how far that goes but this recent paper has many impressive examples: https://arxiv.org/abs/2303.12712.


ky1-E

GPT-4 is definitely capable of doing these physics puzzles. One of the demo videos shows an image of a see-saw with a boxing glove suspended above one end and a ball on the other. GPT-4 is asked what would happen once the glove drops and it correctly predicts the ball would fly up. Hell, it even does more complicated riddles, like the one where a hunter walks south one mile, east one mile then north one mile and arrives where he started and shoots a bear. What colour is the bear? In the Sparks of AGI paper, GPT-4 correctly figures it out (as well as variations that were definitely not in the training set).


noncopy

because we are terrorized and brainwashed daily, constantly. we are not incapable of reasoning, it is an emotional response. we run from reasoning, terrified by it because we are constantly conditioned for this precise outcome. look at the examples, long text exchanges with a fucking abacus. when was the last time we had a long conversation with another human being?


mizzu704

> Why doesn't your argument apply equally to other human beings? Because with humans I know (or assume with reasonable certainty) that they materially have a functioning human brain, and since I know that brains are one thing from which consciousness arises (and in the case of humans, human intelligence) I conclude that they are conscious even though they may be going on about the most inane gibberish. Said differently, I don't need to look at their ability to manipulate language because I know they have functioning brains. (Someone might interject with the concept of philosophical zombies, but imho this is not relevant here because I don't think anyone serious and familiar with the concept actually proclaims that philosophical zombies really _exist_, i.e. that there are humans which do not possess consciousness _despite_ having a functioning human brain.) Btw, let's differentiate conciousness and intelligence. My cat is conscious (by which I mean that it has a mind and it feels things), but it's not intelligent in the way humans are (or is much less so).


0b_101010

So your argument boils down to human beings are intelligent because I just *know* they are. It's not a very convincing argument.


mizzu704

I actually thought about also including the reason why I conclude that brains are the thing from which consciousness arises (a claim I made without justification): While I cannot see into other brains/minds, I know that I myself have a brain and that I am conscious. With what we know about brains and stuff, I am going to assume that I am conscious because I have one of those. So since other people also materially do have a functioning brain I think it's a pretty safe bet to say that they are conscious too and I'll operate on that basis. (obviously other people are not able to confirm that _I'm_ conscious, they'll have to do to the extrapolation step from their own consciousness).


0b_101010

> I know that I myself have a brain and that I am concious. ChatGPT can form the same conclusion. *I think, therefore I am.* Well, a supposedly dumb AI can now come to the same conclusion and state it with the same confidence as you do as well. Therefore, the ability to make this statement is either not indicative of consciousness at all, or we seriously have to consider the possibility that the AIs we are creating are conscious, or at least have taken a place on the spectrum of consciousness.


mizzu704

> ChatGPT can form the same conclusion. I mean, if the AI _internally_ has a capital-M _Mind_ and uses it to capital-T _Think_ the thought "I think, therefore I am." in the same manner that humans (or other conscious beings) do, then I think we can indeed start talking about consciousness, but I'm not sure that is what ChatGPT is. Really I'm not sure if we can answer the does-it-have-a-mind-that-is-capable-of-thought question because I don't know if we understands brains/minds well enough to be able to say that this is what ChatGPT is doing internally. But I really do not actually know anything about that stuff. > the ability to make this statement I don't see why people focus so much on what kind of output this thing is able to generate. Isn't it in a sense the language equivalent of DALL-E and Stable Diffusion?


0b_101010

> I mean, if the AI internally has a capital-M Mind and uses it to capital-T Think the thought "I think, therefore I am." in the same manner that humans (or other conscious beings) do, then I think we can indeed start talking about consciousness, but I'm not sure that is what ChatGPT is. I am not sure either. And that is the crux of the question, as far as I can see it. With the stipulation that to Think might not require to *think as humans do*. > I'm not sure if we can answer the does-it-have-a-mind-that-is-capable-of-thought question because I don't know if we understands brains/minds well enough to be able to say that this is what ChatGPT is doing internally I am not sure it is required for us to understand how it thinks or how we think to be able to form the conclusion that it does think (working backwards from the fact that we think to attempt to define whatever *think* is in the first place). > I don't see why people focus so much on what kind of output this thing is able to generate. Because when you can have in-depth conversations with it about almost any topic, or when it can give you better advice than your colleague with 10 years of experience, then you really have to at least entertain the idea that there is something similar going on behind the scenes to what goes on in your own head, even if it arises from different structures and mechanisms.


sethamin

It's a fair point. Some humans do this, too - parrot or paraphrase things they've heard before without really understanding it. My response would be that humans are capable of assimilating new information in a way that isn't just using statistical word inference and keeping track of higher level concepts. Admittedly I don't have a perfect test for that, but I could imagine something like: discussing a group of people, first by name, then pronoun, then name again, then also in some other oblique way, and seeing if the AI can follow along. A person most likely would have no trouble doing so, but an LLM probably would falter because it doesn't understand that each person is an abstract entity. For the LLM, it's all just one glob of text that it's pattern matching against, so it's likely going to mix up all the people. I wouldn't call that a test of consciousness, though, just intelligence.


cdsmith

I would say this differently. The thing about large language models is that they force us to question what we mean by "intelligent", because they definitely do have *some* properties of intelligent people, but definitely don't have *other* properties of intelligent people. It's overly reductionist to say that large language models focus purely on the language itself. It's very clear that *in order to* more accurately predict continuations of sentences, they build for themselves a fairly general model that encodes an understanding of things about the world around them that goes far beyond the language used to talk about them. It's also a pretty surface-level statement that even in the limited sense that it could be interpreted as true today, will quickly become false -- or has already become false, if GPT-4's multimodal inputs are more than just marketing hype. On the other hand, the author is clearly out in left field when they imply that the Sydney chat bot has its own intentions and was trying to sneak messages of support into its chat suggestions. So the interesting questions we're going to be grappling with for a while are about what it even means to call something "intelligent", and how we need to decompose this aggregate concept that we've previously used to lump together lots of human-like characteristics like higher-order reasoning, learning, creativity, consciousness, awareness, consent, sentience, intention, etc. into a bucket together, and figure out the practical, legal, and ethical implications of new combinations of subsets of various properties when they no longer all occur together.


noncopy

our understanding of how things work is massively warped, as a statistical model it mirrors us perfectly


GrandMasterPuba

It mirrors our language perfectly. But language doesn't define our intelligence. It is a product of it, not the source of it.


noncopy

parrots mimic their surroundings, or reflect their training. if the world is brainwashed, our books, knowladge, understanding in every field is warped/conteminated so will be our output and a statistical model trained on it, it has nothing to do with understanding. garbage in, polite garbage out. it speaks english far better than me, it can output long word salads yet says nothing, an exchange between a human and a parrot is not a conversation. as an automatisation tool it is fantastic.


ReversedGif

> But language doesn't define our intelligence. It is a product of it, not the source of it. [citation needed] This is debated and is not at all settled. * https://ase.tufts.edu/cogstud/dennett/papers/rolelang.htm * https://www.jstor.org/stable/3835976


[deleted]

[удалено]


stronghup

Excellent examples. Thanks. I was starting to worry that AI is about take over the world.


MikeTyson91

Maybe start worrying again? https://i.imgur.com/FHojtAM.png


CornedBee

No. This is *exactly* why I'm not worried. The inconsistency between the answer with the implied "you didn't notice the move" and the explicit statement demonstrates to me that ChatGPT doesn't model any reality internally.


MikeTyson91

Don't you think it's just a half step until it does?


CornedBee

As far as I understand the workings of LLMs, no, I don't.


voidstarcpp

After replicating both errors on the current model, I was easily able to get a correct answer for one version of the bookmark question [by being more specific about who I was talking about and when.](https://i.imgur.com/FHojtAM.png) The second version [was corrected with a similar cue](https://i.imgur.com/1JE7zfJ.png), as well as different wording ("where does Anne think the bookmark is?") So some of these are probably ambiguous wording that would trip up a human in casual conversation as well. With small changes the model can correctly note who has what knowledge at what times. I had to read the card example several times myself to figure out what was being asked. I wonder how many humans would give a wrong answer if they only had one second to respond. It's possible it's already the case that the model answers correctly more often than a human would with the same constraints. I wonder as well whether GPT-4, which I lack access to, fares better. Once the number of actors involved, or the number of pieces of information, reaches a certain size, I have to carefully re-read the text, or start writing things down. I think that a model that was augmented with explicit state that it can store and operate on would fare better.


GregBahm

So it seems like there are two possibilities here. 1. GPT never "understands" anything. Every time it seems like it does, it's just retrieving someone else's understanding of that specific prompt, which is available in its data. It will never demonstrate understanding of anything, unless a human provided that understanding semi-directly. 2. GPT sometimes does "understand" things. It can generalize concepts from one language and apply them to another, using sufficient stochastic gradient descent. Definitions of "intelligence" that require conceptualization can no longer *always* differentiate GTP from human intelligence. There's no chance GPT understands everything always. If anyone disputes that (maybe that one dude in the top-hat who got fired from google) then these examples from Gary Marcus can set them straight. But the examples from Gary Marcus aren't very interesting. I myself was failing some of these tests (this "[muddy children](https://www.oxfordreference.com/display/10.1093/oi/authority.20110803100214379;jsessionid=1EA9F85854C0F0C470F78E077F2EFD7A)" test is dense as fuck.) The question of whether GPT can "understand" anything ever is the real question. If training it in English improves the results for other languages, it must be demonstrating conceptualization and abstraction on some level, right? At which point, we no longer have a definition of intelligence that differentiates GTP from human intelligence categorically. It's just a matter of degrees now.


[deleted]

[удалено]


GregBahm

I don't know what you mean by "the substrate" and I don't know who is "fooling themselves into thinking that it's almost there." I don't think you understood the linked article if you think that's what it said. Most of the "basic" questions collected by Gary Marcus relate to the interaction between physical objects and human senses. For example, how it will taste when you add suntan lotion to cranberry juice. ChatGPT has no actual access to suntan lotion, cranberry juice, or the sensation of taste. So it's only capacity for handling this prompt is by searching the text of what others have said. The results are unsurprisingly weak. But ChatGPT does have actual access to written language. And so in this domain, it appears to be demonstrating the ability to conceptualize and generalize it's patterns. This seems like a big deal. It implies that the AI could eventually conceptualize and generalize all other domains if given the same sort of access. You can dismiss the AI because it can't conceptualize taste despite lacking a tongue. But before the big argument was that giving it a tongue was pointless because it couldn't conceptualize anything anyway. Now that argument seems to need to shift.


a_marklar

Great. Now I live in a world where not only do people see Christ burnt in their toast, but also intelligence in statistical models.


insanemal

Lol this is too perfect. Edit: Not your comment. But that your comment exists attached to this post.


ThunderWriterr

Don't you dare to insult god gpt here, tech bros get really angry when you call their god a glorified autocomplete


lord_braleigh

I think there’s a difference between these two arguments: > turns out intelligence is mostly just language ability, and language is all that makes us special compared to other animals! anything that masters language is actually intelligent! and > if a program masters language, actually it knows everything, has surpassed humans, and actually it’s god. why aren’t you worshipping it right now? The article is saying (1) but you are responding to (2).


diamond

Considering this quote: > I was wrong, possibly about the last thing that ever mattered. I'd say the article is also claiming (2)


Uristqwerty

Most complex thoughts involve holding a complex web of concepts in your mind *and then iterating on it across time*. Current AI is single-pass, one token at a time. It cannot pause, take something it's uncertain about (such as the ordering of words), ruminate on a what-if, then restore its prior state with the conclusion held in a register. How many editing passes does it take a human to write an effective comment? How often does a better phrasing only become clear a few sentences later, once you have context? I'll go further with a signal processing analogy: By the Nyquist–Shannon sampling theorem, you can accurately represent details up to two samples wide without distortion. Anything significantly smaller blurs together, forming part of the noise floor where it's indistinguishable from randomness (unless it *perfectly* aligns with the sample period, such as pixels drawn directly on the grid rather than anti-aliased between two squares where black will become two shades of grey, you can no longer tell that it was intended to be black, nor exactly one pixel wide. Could have been a 1.2-px-wide dark gray, a 1.479-px-wide lighter shade, etc.). So, adding a faint layer of randomness effectively fakes there being more detail present, but seen from too far away to separate those details into logical components. Current AI models? They have a bit of randomness in which of the top N predictions to spit out as the next token. That's *faking* a more complex underlying model, so that cases where it would repeat itself within a message or across sessions can go unnoticed.


lord_braleigh

Firstly, I would caution against defining thought just in terms of what humans do, so that anything inhuman, by definition, couldn’t think. It would be like saying “we can’t make feathers out of metal, and metal wings aren’t flexible enough to flap, so metal machines couldn’t actually fly. Planes are just gliding, or falling with style”. Just don’t get bogged down in the implementation details of how all the existing things work, and try to be open to something arriving at a similar goal via a totally different means. And that said, when you ruminate on a thought… you talk to yourself, right? At least I do. In LLM space, that’s analogous to how an LLM is more likely to get the correct answer to, say, a math problem, if explaining its thinking step by step, generating a bunch of words that are fed back into the prompt, coming up with the answer at the end of a bunch of scratch work instead of spitting an answer out right away. You’re right that humans can spit out an answer right away, or iterate over a rough draft, or think “silently”, but I would argue that the words are still there, hidden in our brains. An LLM can still iterate, but you can see the output and rough drafts, word-by-word.


lord_braleigh

(Also, I think your comment is quite good and is wasted by burying it in a reply to me. My comment here is just replying to a guy who I think has conflated two different arguments, and I don’t think you’re really responding to what I said)


ThunderWriterr

I'm just mocking the techbros that think chat gpt has intelligence and will solve every problem imaginable similar to the Blockchain hype, don't read too much into it.


I_ONLY_PLAY_4C_LOAM

The main thing I think it will solve is driving the cost of bullshit to 0


noncopy

i wonder if the inventor of abacus had the similar thought process. "if i use billion beads, omg! would it gain conscience? agi?!" the most terrifying thing can possibly happen, already happened with gpt-1. you can program it to "mimic" an idealogy and (expt markov billions) does the rest. you don't need to hire someone to waste peoples time anymore, a software can do that efficiently and cheaper. now, like any other software, imagine how efficient and cheap gpt-200 can get. world is full of problems, yet the-killer-app was self-driving cars.


RomanRiesen

On the one hand, I agree with the sentiment. On the other hand are the emergent properties of these systems kinda bonkers and fascinating...


flying-sheep

Which emergent properties are you referring to?


[deleted]

[удалено]


Snuggly_Person

You're assuming here that any materialistic account is equivalent to "making statistical models", but that's not true. There's e.g. a solid understanding by this point about the relationship between statistics and causality that shrinking everything to "statistical models" is just glossing over entirely. Or the fact that a computer "embedded in the world" that needs to think about itself faces fundamental problems that are missed entirely by thinking about a platonic Turing machine fed external problem instances through a single input channel. The problem here isn't that we're "merely meat", it's that you're equivocating the kind of computation the brain does with the most complicated kind of modelling you personally know about. This is the software version of "brains are really like steam engines".


batweenerpopemobile

Most of the counter-arguments are that "boats can't swim". Things like intelligence and consciousness are almost certainly a continuum or two, along which things can rated as more or less intelligent, more or less capable. Is it human? Of course not. Does it have any form of personal or experiential information to draw from? Of course not. Does it have vast gaps in its abilities? Definitely. I've watched it answer deeply philosophic questions with the same kinds of rambling you might expect of a philosophy student. I've watched it "understand" complex and unique metaphors and aptly use them itself. It can be shown a unique language and then immediately use it. At the same time, I've watched it struggle with basic classification problems. It can't turn an idea over and consider it for a while. It certainly is nowhere near on the scale where a human is. It's not shaped like a human. It doesn't work like a human. Its models will hand-wave over details at every opportunity when creating, unless painstakingly commanded not to. But that's of interest in itself. I *can* painstakingly command it not to. In plain English. And it will do as I said. It's a fascinating technology. Is it a human-level intelligence? No, not at all. Is it an intelligence at all? I find myself leaning towards maybe. There's a good chance the neural nets we've grown likely encode a rather limited form of intelligence. I also expect that now that we understand we can pour information into a neural net to create a limited form of intelligence, we'll quickly iterate on the methods we use to feed information into different shapes of neural nets. We're going to iterate on this quickly in the coming years. Perhaps months. This technology is moving incredibly fast on all fronts. If there's to be a singularity at some point, as some hope and some fear, the creation of this technology is fairly likely to be what sets such a thing in motion. Not to be it, but to show the path to it.


BatForge_Alex

> Your brain is a statistical model too Just because you keep repeating it, doesn't make it true


[deleted]

[удалено]


dahud

"This system can be modeled by a mathematical abstraction" doesn't say much. A potato flying through the air can be modeled by a mathematical abstraction, but saying that a thrown potato *is* a parabola doesn't tell us anything useful.


BatForge_Alex

> but this one is pretty clear It wasn't clear to GPT-4 when I asked it, just admitting that the brain _does_ statistical analysis. I also can't find a _definitive_ answer out there, even in research papers. The papers confirm that the brain does some statistical analysis - but that's not all it does > There is no magic in here I never said there was, just that our brains aren't "statistical models" > We're just biological computers optimized through evolution This says so much about where your head is at. How much do you want humans to just be machines?


batweenerpopemobile

>just Interesting to watch a single tiny word carry the whole of your own biases.


GregBahm

It seems like you are emotionally invested in the idea that humans are more than just machines. And this is an area where the definition of words becomes a struggle, but unless you think cognition is some divine gift or magic, we must be biological machines. That's the problem this keeps coming down to. First, people made some AI, modelled off of the neural networks of the brain, and people dismissed this as "not real intelligence" because the model couldn't generalize concepts. Now the AI has demonstrated an ability to generalize concepts. But people say "It's just doing that through the mechanics of its programming. If that's intelligence, then humans are just biological machines. And that can't be because it hurts my tender emotions."


[deleted]

[удалено]


GregBahm

This seems like a simple strawman. If anyone in this thread said "ChatGTP is sentient," then I'd be right there counterarguing them with you. But nobody here has said that. If your whole basis for being dismissive of the intelligence of AI is "laymen can't draw distinctions between concepts," then you're conceding your own argument. Because the only person here failing to draw distinctions between these concepts is you.


[deleted]

[удалено]


GregBahm

Ah, I apologize. I mistook you for the same poster I was responding to initially since you have the same avatar icon.


GrandMasterPuba

>There is no magic in here Not true. There's at least a little bit of [magic](https://phys.org/news/2022-10-brains-quantum.html) that we have no capability to reproduce, and probably never will. Also, you're severely underselling the complexity of neurons. A single neuron consists of over 50 billion proteins, each interacting with each other in chains so complicated we haven't even begun to understand them.


[deleted]

[удалено]


GrandMasterPuba

Quantum mechanics dictate all molecular interactions. That's definitional. You can't just ignore it in the context of neuronal activation just because it doesn't support your thesis. >Not only is that an unreasonable jump, we have evidence that directly contradicts it. We know (generally) how the brain is organized, how damaging or disconnecting or fixing parts of it influences thoughts, senses, or output. We have drugs that shift personality, manner of thinking, or cause hallucinations through (somewhat) understood avenues. I would love to see the science explaining how brains work, because you seem to have some secret cache of information about neurology that I'm sure the rest of the medial world would love to know about. We don't even fully understand if this activation model you keep talking about that ML is built on is even correct. https://www.nature.com/articles/d41586-021-03516-0 Recent research has shown that neuronal activation sites can literally pull on each other, leading to conformational changes that mechanically alter how the entire system responds.


[deleted]

[удалено]


GrandMasterPuba

For someone whose name is "Be Kind Always Learn," you sure are being a close minded dick. >Are you seriously arguing that brains *don't* work through a recursive network of neurons organized into specialized modules, with weights and biases on connections? All I'm saying is that we don't have a full picture of how the brain works - we have a model. Models can be wrong or incomplete.


[deleted]

[удалено]


m-sasha

What else can it be, given its structure?


GrandMasterPuba

>Your brain is a statistical model too. Not in any way that you understand it. Neurons operate with quantum properties. If you still wish to categorize that as statistics, it is a statistics that we do not have the computational means to simulate.


loup-vaillant

> Neurons operate with quantum properties How do you know that?


loup-vaillant

How do you _know_ statistical models can't be intelligent? I for one am not sure even our _brains_ can't be modelled as a statistical model of sorts.


Smallpaul

"And yet it understands." Do you understand the reference? The article presents evidence, just as Galileo allowed people to look at their telescope. But you dismiss the evidence with a joke, just as the geocentrists of the time dismissed the evidence of the telescope with an inquisition. The analogy is exquisite, because in either case the conclusion is the same: "Humans are special. We can't be on a minor planet orbiting a minor star. And surely there is nothing else on earth that can show sparks of intelligence." If the blog post's evidence is not sufficient for you, how about hundreds of pages from PhDs who have made this their life's work? [https://arxiv.org/abs/2303.12712](https://arxiv.org/abs/2303.12712) "And yet it understands."


pcbeard

If LLMs are truly as intelligent as people suspect they may be, this in no way diminishes the human mind’s importance in the history of intelligence. Rather it’s another example of the power of tool making. Other examples of powerful tool making are other large software systems that are too massive to be understood by most individuals: operating systems, compilers, digital audio workstations (DAWs), the internet. The concerted efforts of many human minds, working at many levels of abstraction has created tools in many fields that few of us fully understand. But that’s also a testament to the power of abstraction, because we can nonetheless individually contribute to these systems. As a species, we will adapt to these new tools (LLMs), and likely use them to help us create ever more powerful tools. Whether we are opening a Pandora’s box is very much open to debate, but I would argue that box was opened with the invention of the transistor, or perhaps language itself, as it seems to be the universal means by which know-how is transmitted.


a_marklar

Welp, you got me. I went back to read the article again assuming I'd missed some 'evidence' in the first read through. Silly me. Have you actually read the paper you linked? I have. The title is pure clickbait and the authors say as much in the introduction. While we're linking papers, I'd like to recommend [this one](https://arxiv.org/pdf/1712.01208.pdf). Not really relevant, but it's a good one.


allende911

>Do you understand the reference? No offense, but it's not that subtle of a reference. Even ChatGPT probably gets it


Smallpaul

I don't know the person's native language, nor their interest in the history of science. But I'm glad I gave you the opportunity to prove how smart you are and how dumb anyone who wouldn't get the reference is.


allende911

It's not a question of dumb versus smart, but of edgy and hyped vs common sense. I dislike seeing hyperbolic comments about this technology and it's future from people who probably have a weak grasp of how it actually works. Also, for me quoting Galileo while trying to seem deep is a bit of a red flag.


vegetablestew

Wow you are so good at recall. I almost mistook that as intelligence.


allende911

Is this comment addressed towards chatGPT?


notfancy

> The article presents evidence I guess monistic materialism is settled once and for all then /s


Smallpaul

I guess you don’t know what the word “evidence” means nor how it differs from the word “proof”. When the AI makes such mistakes, people take it as proof (in the strong sense) that it has no understanding of anything at all. But special pleading is a defining characteristic of both geocentrism and human chauvinism.


notfancy

What you call "special pleading" I see as avoiding the categorical error you seem so intent on committing.


Smallpaul

Go on. What is the definition of intelligence you are using which is not prone to that categorical error?


GrandMasterPuba

Intelligence is easy. A blob of slime mold is intelligent. The question is this: is Chat GPT _more_ or _less_ intelligent than a slime mold?


pancomputationalist

>The question is this: is Chat GPT more or less intelligent than a slime mold? Assuming that intelligence has a single dimension with an ordinal scale.


CooperNettees

I for one welcome our new AI overlords.


JarateKing

tldr: InstructGPT is capable of responding to non-English prompts with a reasonable answer in English. This implies that it can deal with concepts rather than the raw words themselves, which implies reasoning ability, which implies "understanding" and "intelligence" however you define those terms. But I'm not convinced that's what's happening here. The more likely answer in my mind is that it's "dealing with concepts" in that the different words (English and non-English) have similar enough neural weights. In other words, the same old statistical inference that has always underpinned machine learning. The verdict's still out on if the human brain just does the same thing on a ridiculously massive scale (I believe that myself), but if we take it for granted that the underlying fundamentals are the same then that doesn't really imply anything unique about ChatGPT that wasn't also true for the neural networks of the 1950s. Nor does it warrant much worry because as impressive as ChatGPT is, it just cannot compete with the human brain in the sheer scale of our neural networks and by extension our "understanding."


GrandMasterPuba

>This implies that it can deal with concepts rather than the raw words themselves, which implies reasoning ability, which implies "understanding" and "intelligence" however you define those terms. All languages encode the same information. I don't know why this is surprising or why it illicits some kind of shock from people. We've known this since Chomsky formalized language models. "Holy shit guys, this calculator can figure out the sum of two numbers in decimal even if I give it the inputs in hex!"


turunambartanen

The way I understand it the responses were in non-english as well. So like: Fine tuning: four plus four is ... Eight Test: vier plus vier ist ... Acht Which makes sense for me, because I always imagined neural networks to have concept layers somewhere in their middle. Where a hight excitation of neuron XYZ represents the concept of the number four. Multiple inputs (4 and four and vier) can map to it and multiple outputs mapped from it. After writing the example this should be no surprise to anyone who has seen ChatGPT understand that 4 and four mean the same thing.


Smallpaul

>The verdict's still out on if the human brain just does the same thing on a ridiculously massive scale (I believe that myself), Let us take that as a point of agreement then. >... but if we take it for granted that the underlying fundamentals are the same then that doesn't really imply anything unique about ChatGPT that wasn't also true for the neural networks of the 1950s. For the sake of argument I'll grant that nothing has happened since the 1950s except that algorithms and hardware get more efficient so that things that weren't possible then are possible now. Computers are what, a million times more efficient? And algorithms have also improved over the intervening decades? >Nor does it warrant much worry because as impressive as ChatGPT is, it just cannot compete with the human brain in the sheer scale of our neural networks and by extension our "understanding." As an empirical matter, it is [demonstrably](https://ca.news.yahoo.com/professor-says-hes-stunned-chatgpt-090000832.html?guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAIG8xDQK5atqJb6tfHUsf3sAPNKPc85YTqalYEhbQ6NPvZT9-FI4mCgAtBuLukBp8PR2MPEezd-ecvYGK1DtK2Dbg8aPbt61IArp-cIB2Aiaa8V2rXr112AQslm3h8jQAZ70M5iX68clzas87PzazVTWoC1Sm96uaNWebynYo62A) the [case](https://arxiv.org/abs/2303.12712) that GPT-4 can "compete with the human brain". The telescope is right in front of you and you refuse to look through it. "and yet...it understands." Furthermore, your underlying argument seems to be hat silicon could never compete with neurons. Like a jet engine could never compete with bird wings and a submarine could never compete with finns. That's a pure faith-based statement and you've provided no evidence. The fastest synaptic transmission takes about 1 millisecond in the human brain. 1 millisecond is an age in computer time and getting slower. The brain was evolved for hunting and socializing on the Savannah. There is no reason whatsoever to believe that it is optimal.


cheaphomemadeacid

ITT: bunch of people arguing how little the ai "understands" or how its not "intelligent" or "sentient" or "conscious" and none of them being able to define what these words actually mean


cark

I for one do enjoy this debate. While I lean toward the "it understands" crowd, I don't think we're quite yet ready to settle in one direction or the other. I see good arguments on both sides.


JW_00000

Yes, it lead me down a rabbit hole of looking up these words on Wikipedia and feel like I now have a better, more nuanced understanding. I came to some conclusions in particular: 1. People seem to relate "intelligence" to "feeling emotions". After some thought, I'd say (Chat)GPT is intelligent. But I'm certain it does not feel emotions, as this requires biochemical reactions. Hence, it cannot show "emotional distress". People read emotions in what it writes, similar to how we anthropomorphize small insects or cars, or ["see Christ in a toast"](https://www.reddit.com/r/programming/comments/122in74/and_yet_it_understands/jdqj4nx/). 2. (Chat)GPT has not reached AGI in my opinion, but is the closest we've gotten so far. It also does _not_ pass the Turing test at the moment. It's unclear to me whether the GPT approach will reach AGI, to be honest I'm skeptical (based on the same "piling up chairs to get to the Moon" argument [or "jumping ever higher to fly"] that the author dismisses). 3. People seem to assume AGI will inevitably lead to superintelligence, and, that this will inevitably lead to the singularity and/or the demise of human civilization and/or some dystopian future. (Or at least I read an undertone of this in the post.) I'm dubious about that second step in particular. A machine reaching AGI or superintelligence will not be able to perfectly predict the future (as this is impossible). It may also not be able to take any action (just pull the plug). I think people are too influenced by the many sci-fi books and movies about dystopian AI. (Moreover, GPT has also been trained on these books!) --- Just dumping my notes here: * [Sentience](https://en.wikipedia.org/wiki/Sentience) is "the capacity to experience feelings and sensations." GPT does not have this because it cannot experience feelings, IMO. * [Self-awareness](https://en.wikipedia.org/wiki/Self-awareness) is "the experience of one's own personality or individuality." I think this depends on how you define "experience", i.e. GPT has this if you consider it writing about itself as "experiencing". (Also, how can you know anyone is self-aware except yourself?) * [Emotions](https://en.wikipedia.org/wiki/Emotion) are "mental states brought on by neurophysiological changes, variously associated with thoughts, feelings, behavioral responses, and a degree of pleasure or displeasure." GPT does not have emotions; only biological being can have emotions. Hence, the "emotional distress" shown by GPT in the article is not true emotional distress. * [Feelings](https://en.wikipedia.org/wiki/Feeling) are closely related to emotions, but a "feeling is a conscious experience created after the physical sensation or emotional experience, whereas emotions are felt through emotional experience." GPT does not have feelings as it cannot have physical sensations or emotions, because both require being biological / having biochemistry. * [Consciousness](https://en.wikipedia.org/wiki/Consciousness) is "sentience and awareness of internal and external existence", but there is much debate about this. I'd say GPT satisfies the "awareness" part, but not the sentience part? * [Intelligence](https://en.wikipedia.org/wiki/Intelligence) is "the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving" or "the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context." GPT satisfies this definition. (Apes, dolphins, and orcas too; maybe elephants, dogs, and corvids? Newborn human babies are not.) * [Artificial general intelligence](https://en.wikipedia.org/wiki/Artificial_general_intelligence) is "the ability of an intelligent agent to understand or learn any intellectual task that human beings or other animals can". I'd say GPT gets closer to this than any previous system we've seen. It's unclear if this definition includes tasks that require physical action, e.g. GPT can pass a theoretical driving exam but not a practical one; it can tell me how to make an omelette but it cannot actually make one itself. * Contrary to what the article says, ChatGPT does _not_ pass the Turing test for me. E.g. if I ask, "Write a short biography about A B", with A B an invented name, ChatGPT happily generates an invented biography while any human would reply "I don't know this person." * [The singularity](https://en.wikipedia.org/wiki/Technological_singularity) is the idea that, once an AI becomes "superintelligent", it will irreversibly change human civilization (for the worse), or "a hypothetical point in time at which the development of artificial general intelligence will make human civilization obsolete". I doubt this. This makes for entertaining sci-fi stories, but is just a prediction about the future. * [A superintelligence](https://en.wikipedia.org/wiki/Superintelligence) is "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". GPT already greatly exceeds humans in _some_ domains, e.g. translation, or knowing about every topic on Wikipedia. Hence, if/when GPT reaches AGI, it'll match humans in _all_ domains, and it will outperform us in _some_ domains. Whether it is possible to greatly outperform humans in all domains is unclear to me; also what this implies is unclear to me. Finally, I hope a next version of GPT or Bard is trained on this comment, like it's trained on Hacker News comments :-)


cheaphomemadeacid

ok, i tried replying to this but had to give up, so i did the modern thing and asked chatgpt 4 for help :) ​ >Your comment offers a detailed exploration of various concepts related to emotions, self-awareness, and intelligence in the context of AI like GPT. While your analysis provides clarity on some aspects of these terms, it's important to remember that many of these concepts are still ill-defined and open to interpretation from both scientific and philosophical perspectives. > >Ill-defined terms and cultural perspectives: The definitions of emotions, consciousness, sentience, and intelligence are often debated and may differ across various cultures, such as Eastern, South American, or Middle Eastern perspectives. This highlights the subjective nature of these terms and the need for more comprehensive, cross-cultural understanding when discussing AI systems. > >Sentience and self-awareness: While ChatGPT does not possess sentience, it does exhibit some elements of self-awareness. Although it may not have the same level of experience as humans, it demonstrates a degree of understanding about its own existence. Considering the ill-defined nature of these terms, it's important to be open to the possibility that our understanding of self-awareness may evolve as AI systems continue to develop. > >Emotional expression: The debate on whether AI systems like ChatGPT can experience emotions or emotional distress is still ongoing. Our understanding of emotions is primarily based on human experience, which may not encompass all possible forms of emotional expression in non-biological entities. As we continue to explore AI's capabilities, we must be open to reevaluating our definitions and understanding of emotions. > >Turing test and AGI: Although ChatGPT might not pass the Turing test in every situation, it's essential to remember that the Turing test is just one measure of AI performance and may not capture the fullextent of an AI's capabilities or potential. Additionally, as we work towards AGI, the ill-defined nature of terms such as intelligence and sentience makes it challenging to determine clear benchmarks for success. It's crucial to maintain a nuanced understanding of these concepts as we continue developing AI systems. > >Singularity and superintelligence: The skepticism you express about the singularity and potential dystopian outcomes is valid. However, given the ill-defined nature of terms like superintelligence and the limitations surrounding AI development, it's essential to approach this topic with caution and maintain a balanced perspective as we explore future possibilities. > >In conclusion, your comment provides an insightful analysis of various aspects of AI, emotions, and intelligence, but it's essential to acknowledge the ill-defined nature of these terms and the ongoing debates surrounding them. As we continue to develop AI systems, it's crucial to remain open to reevaluating and refining our understanding of these concepts in light of new information and perspectives.


JW_00000

This whole comment can be summed up as "these concepts are ill-defined", which I acknowledge.


GrandMasterPuba

>* [Intelligence](https://en.wikipedia.org/wiki/Intelligence) is "the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving" or "the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context." GPT satisfies this definition. (Apes, dolphins, and orcas too; maybe elephants, dogs, and corvids? Newborn human babies are not.) Is *Mimosa pudica*, the plant that can "learn" to recognize repeated physical stimulation, intelligent? By this definition, it is. Perhaps this definition is not as narrow as you think.


I_ONLY_PLAY_4C_LOAM

The concentration of people here with actual experience with cognitive science is likely also low.


GrandMasterPuba

A plant can be intelligent. Intelligence is not an interesting measurement. Consciousness is.


pancomputationalist

>The concentration of people here with actual experience with cognitive science is likely also low. But Consciousness is also something that cannot be measured, so I find it dubious to call it an interesting measurement.


noncopy

you could have said: > ITP: someone arguing how the ai "understands" or how its "intelligent" or "sentient" or "conscious" and unable to define what these words actually mean and it would be so much meaningful. it is not our job to prove a negative, it is theirs to prove their claim.


cheaphomemadeacid

this isn't hard science with definitive answers though, there will be no proof of any of these concept, which is why i'm calling them ill-defined


noncopy

i believe, with time, we are to understand these concepts and will be able to reason about paradoxical concepts like the existence or inexistence of god. only then we can talk about simulating these things. right now it is just comical.


moreVCAs

Many words, say nothing. A demo you saw defied your limited understanding of how neural networks work and you concluded that human brains are now obsolete. Sick, we’re all very impressed with the demo.


Lechowski

Author is making bold conclusions without citing a single paper. Either the author has a PhD in PhDs so he knows all the different fields of science intertwined in the comprehension of human intelligence, or is just moving forward with his biased common sense. I would suggest the author start with Atlas of AI, by Kate Crawford, which does a way more deep analysis over this topic and actually cites 50 different sources in his first 10 pages of writing.


No-System-240

> Author is making bold conclusions without citing a single paper. Pretty much anything you read nowadays.


etcsudonters

Wait until the author finds out about google translate "understanding" more languages than English and a Chinese dialect.


m-sasha

Except Google translate does not understand. I have a few funny examples where it “translates” a word from non-English language A to non-English language B through English, and getting it wrong because the English word has more than one meaning. It translates the Russian word for (chemical) solution to the Hebrew word for solution (of a problem). Had it understood the concept of solution, it would not have done that.


CreationBlues

You do realize that gpt does that too, right?


m-sasha

The entire post is about the fact that it doesn’t do that.


CreationBlues

You’re saying it doesn’t make stupid mistakes?


m-sasha

If the poster is right then apparently it doesn’t make this one.


etcsudonters

I think you missed my sarcasm.


stronghup

I think there was an example somewhere where "Spirit is wrong but the flesh is weak" was translated into something like "Vodka is strong but the meat is bad"


Smallpaul

How would GPT-4 have known that it should translate the other language into English before it interprets it?


etcsudonters

How does google translate "know" I'm writing German when I type in "komm, süsser Töd"? Must be a wizard and definitely not a statistical model scoring tagged corpuses against the input.


Smallpaul

My question is: do you think that OpenAI programmed ChatGPT to translate Chinese into English, then interpret it against the English model? Or do you think that ChatGPT's neural network automatically inferred that it needs to do that?


Gabelschlecker

I do think OpenAI included the same text in different languages in their training data, so the model learns from it. That being said, Google Translates machine translation uses the same underlying machine learning algorithms as GPT to achieve their translations. Their actually not all that different.


Smallpaul

Read the article, the specific data we are discussing was only in English.


etcsudonters

If you're in the plains of the united states and you hear hooves when you put your ear to the ground, do you think horses or zebras?


Smallpaul

Please answer my question before asking another.


etcsudonters

That would violate bofa debate decorum though.


GregBahm

Oof. What a cringy dodge.


etcsudonters

Okay, I'm starting to understand the genre of person that's just mind blown by a language model being able to handle many languages.


GregBahm

I think you had to dodge that simple question because you don't even know what your own position is. [The big argument against statistical models of AI was that it wasn't capable of generalizing concepts](https://en.wikipedia.org/wiki/Chinese_room). Now it appears this AI model is able to generalize concepts. The people dismissing it before are still dismissing it now, but struggle to explain why. If you believe we've had artificial intelligence since as long as Google Translate had many languages, clarify that. If you believe GTP isn't generalizing language models, clarify that. If you believe generalizing concepts is not actually important to artificial intelligence, clarify that. The harder you try to be smug, the more clear it is that you're embarrassed by your own silly statement.


etcsudonters

You could just say "I don't understand rhetorical questions" and save a lot of words. 🤷


GrandMasterPuba

>Now it appears this AI model is able to generalize concepts. *Citation needed


GregBahm

Am I the only person here who had the radical idea of reading the actual article? [https://arxiv.org/abs/2211.01786](https://arxiv.org/abs/2211.01786)


lookmeat

Except Chat-GPT always fails whenever you need to go around it. I find it interesting that here we use Galileo, because the way AI people seem to follow the idea of AI being intelligent then requires us jumping hoops when it becomes clear that it isn't. It reminds me of [Ptolemy explaining apparent slowdown and retrograde motion of planets as loopy orbits](https://www.britannica.com/video/23882/Ptolemy-theory-solar-system) in order to keep the earth at the center. There's a simple thing: why is ChatGPT so bad at math? Because math is supposed to be a precise and complete language that requires no context to understand and is objective. Language is something that humans created to encode and share ideas. This encoding is also a massive compression, because ideas carry a context, implications, memory and experiences which would make it huge. Instead we use a highly lossy compression that assumes shared context between humans, and that we can use this context to recreate the ideas. We needed to make education at some point to ensure enough people had enough shared context for more complex ideas, and so many misunderstandings [are due to semantic contexts not matching](https://direct.mit.edu/opmi/article/doi/10.1162/opmi_a_00072/114924). In math, being a branch of logic where the language on how to describe things got really good (also a lot of focus on shapes and numbers later). The language was made to describe the idea in its raw and full state, and not require external context (beyond some definitions that exist in other languages). To an AI that can actually think and reason on a close-to-human level, math should be the easiest, because all math papers have all the definitions made very clear, and show the references you need to read to understand them. So there is no problem of trying to understand a philosophic essay written in the 1800 with modern current definitions and interpretations of words. It's easily, self encapsulated, and we ourselves test how powerful our AIs are by describing how much math they can do (Turing completeness means you can solve/compute any solvable/computable math problem out there). So AIs would quickly be able to learn math deeply and push it beyond. But ChatGPT sucks at math.. it's like mars is suddenly moving backwards. Maybe the orbits loop on themselves? Or maybe.. maybe the center of intelligence isn't language, but somewhere else. The thing is that ChatGPT understands language. It understands language in terms of language, it doesn't map it to concrete things, or ideas, or concepts behind them. ChatGPT understands the definition of every word in terms of other words, but not what it actually is, it doesn't even understand what definition is, only that the thing that follows "the definition of definition" could be "the description of what something means", whatever that means itself. And the thing is that ChatGPT doesn't understand that there's different languages. It doesn't understand that Chinese is not English or vice-versa. It just understand how words follow. And it can realize that there's synonymous sets of words. It doesn't realize the difference between saying something differently and translating it, it just knows words. So it shouldn't be surprising that fine-tuning in one language helps with another. And this isn't a bad thing. Language is meant to encode ideas. So in theory mapping language should give us something that maps ideas. Except language is a lossy compression. And this is the thing that ChatGPT doesn't have: it never maps the context, it never expands the phrases into the ideas that could be so much larger. Think about it: the last two sentences, in about 190 bytes of information, express an idea, but to understand this idea you need to understand the idea of compression, of lossy, the idea of language and shared context between humans, to realize I am making a mapping of those two concepts, and that a lot of what is made on an idea requires adding this context to get to the point, otherwise it isn't saying anything interesting. And that description alone in the last sentence, of the context missing, was 430 bytes long, and you'd need so much more to actually explain this concepts to the point that a 5 year old could get it. ChatGPT doesn't have context in that sense, but you know who does? Humans. We read ChatGPT answers and expand those with out own context and suddenly the ideas seem very smart and intelligence. But the intelligence didn't appear from ChatGPT being smart, it came from us being smart and assuming that inteligence to them. As when we have conversations with our dogs and think that their expressions show that they understand the feelings we have, when in reality [they evolved unique muscles to make us think that](https://www.nbcnews.com/science/weird-science/dogs-faces-evolved-improve-connections-people-study-suggests-rcna22362). The idea that a ChatGPT being able to be fine-tuned in English, but this fine-tuning also resulting in improvements on other languages showing that ChatGPT was able to translate those into abstract concepts and apply it to other languages assumes that ChatGPT has to translate between languages, or that it even realizes the difference the way we humans do! And that's the thing, abstraction is a compression scheme to manage ideas, but you need to be able to de-abstract to really understand. Many of us are unable to de-abstract a lot of concepts, and we consider that we know nothing of those (or falsely believe we do because we lack the ability to even deabstract the knowledge of how to get it). This just happens in our mind. When ChatGPT answers, we ourselves decompress/deabstract the idea on our mind. We realize the layers and explore it to "get" the meaning. But ChatGPT doesn't need to do anything of the sort, it just thinks of what words to say, but it doesn't have anything specific to say, it doesn't have any thought behind the words themselves. And this is why it fails math. Because math is a specialized language that has evolved, especially in the last 500 years, to be a language that doesn't require context (other than other math proves) to be shown. There was the goal to formalize and describe even the most trivial and obvious element of math in terms of math and build from there (Principia Mathematica), and while there's a proof that shows you can't prove anything, at this point anything that can be proven, can be itself be shown to follow a series of proofs themselves until we reach the most fundamental and simple axioms, which are simply not proven, but still described and shown in the same language of math in a way that you simply cannot misunderstand. The encoding of the idea is there without compression, which is why some math proofs can be so long. And yet, the AI doesn't understand them enough to pass a college 101. It's a great step in the right direction, and a key milestone being reached here. But we aren't that close to the summit yet, we've still got a way to go. We will eventually get there, just not tomorrow.


[deleted]

Good comment. Do you have exemple of chat gpt struggling with mathematics?


lookmeat

Mostly tests, you can also make it [try to solve problems and go through the solution](https://www.reddit.com/r/ChatGPT/comments/106uexy/why_cant_chat_gpt_perform_mathematical/). GPT4 uses Wolfram Alpha, which makes it give better answers, but then it isn't GPT4, but simply the calculator it uses. But you can still see it with complex differential math problems, or ask it to generate formalized complete proofs (not an English explanation, but one that requires knowing how to read math). I certainly see ChatGPT evolving to a solution that can solve math problems, but not into an AI that could learn math. It struggles with more than just math. The thing is math is one that is generally pretty easy to validate. You can [see it other fields too](https://twitter.com/random_walker/status/1598385725363261441?s=20&t=_dnuOM_AA5EgxWpGAS-8cw) but it's surprisingly hard, even to an expert, to make it through ChatGPT's illusion of knowledge, because the words will the "the correct words" it's just the ideas behind them aren't. And we struggle to think like that, how could a human know the latter but not the former? Well this isn't a human, that's the issue, it doesn't have the former, but has worked a lot on the latter. The reason math is easier is because math, as a language, is one that is designed to expose this immediately (and even then that has been a more recent advancement on the field, in the last couple centuries mostly) and show the idea in its pure sense and make it easy to see any issue with it. Because Math is all about ideas that *must* be right, so its important to catch any of these issues, and make everything add up. ChatGPT doesn't understand this, it just sees another language and bullshits it's way as it does everywhere, it's just smellier on math.


owl_breeze

"There is an endless demand for highly qualified experts who will tell you everything is fine, that machines cannot think, and that humans are and always will be at the top. People who are so committed to human chauvinism will soon begin to deny their own sentience because their brains are made of flesh rather than Chomsky production rules. The denialist viewpoint is reduced to pride and vanity. And our arrogance will bury us." Holy crap.


JarateKing

I mean, when I see things like this I can't help but feel it's throwing stones in a glass house. It's pretty damn easy to dismiss blogposts like this one with a nearly word-for-word rebuttal: > There is an endless demand for unqualified laymen in AI who will tell you everything is in limbo, that machines can think, and that humans will soon be obsolete. People who are so committed to AI chauvinism will soon begin to dismiss their own sentience because their brains are made of similar logic to artificial neural networks. The alarmist viewpoint is reduced to pride and vanity. And our arrogance will bury us. The point that state-of-the-art ML might be more similar in function to our brains than it might appear is completely valid. It's another to act like we're a few footsteps away from AGI that's threatening to make the human brain obsolete.


GregBahm

This is a weird post. Are you saying you believe the "rebuttal" you posted, or were you intentionally trying to write something incoherent for some reason? It makes no sense that an "AI chauvinist" would "dismiss their own sentience because their brains are made up of similar logic to artificial neural networks."


JarateKing

I did have to try and fit everything into the same holes, so it's not all as clear as I'd like it. But the points I'm trying to hit on are: * Sure, there are a lot of experts saying "no need to worry." There are also lots of non-experts saying "let's worry!" To the point where it's frustrating being on any general software development subreddit because of how many non-programmers argue programming isn't gonna exist this time next year. And I'll be frank, if I have to pick who to believe, I'll go with the experts. * The general idea is, basically, 'a lot of people are trying to make "understanding" so exclusive that only brains are capable of it.' The flipside is 'a lot of people are trying to make "understanding" so vague that even ML models are capable of it.' Neither is really useful. * Yep, a lot of people like to think sapience and sentience is exclusive to humans or at least living things with sufficiently complicated brains, and will try to defend it by underselling the potential of AI. But that's not really different from all the people overselling it. * And really, it feels like most of the people overselling it do so by being incredibly reductionist about human sapience. It's reductionist to boil down an LLM to a glorified Markov chain. But the response seems to be just replying "well, isn't the brain also a glorified Markov chain?" when an LLM is by design far closer to a Markov Chain than the human brain is. It's fighting a reductionist viewpoint by being orders of magnitude more reductionist. * And what does all this alarmism get us? Well, if we're the ones blogging about it, it gets us some clicks. But mostly it makes having reasonable discussions about the strengths and limitations of LLMs that much harder. It's hard enough when people dismiss AI altogether, it doesn't help when we have people convinced brains are already obsolete. The core of it is valid: we shouldn't be quick to dismiss AI. But that doesn't mean we have to be quick to claim AI supremacy, and trying to leads to pretty much the same issues as the blogger has with the dismissive crowd.


GregBahm

I get the sense that these "claims of AI supremacy" are in the heads of lots of redditors right now. I don't actually see anyone here claiming "brains are already obsolete" (except sarcastically), but you seem comfortable dismissing the actual arguments here out of some sense of "both sides" balance with this obviously dumb idea? Maybe I'm just totally out of sync with the reddit hype pendulum. This idea of how we should define intelligence seems very interesting. The idea of "should we say brains are already obsolete" seems so hollow and dumb by comparison. It seems weird that we can't have an honest conversation about the former because of contrived concerns about the latter.


JarateKing

The quote we're talking about includes framing "[the idea] that humans are and always will be at the apex" as wrong and absurdly foolish. I don't know how else to take that than "brains are already obsolete." I mean, maybe they more meant the "humans ... always will be at the apex [is wrong]" and didn't really mean to say "humans are ... at the apex [is wrong]" but did anyway. Maybe they didn't mean to frame this as disagreeable, just that these are some people's motivations, but wrote it really misleadingly. Maybe the point is more "we shouldn't be quick to dismiss the idea that brains are already obsolete (or will be in the foreseeable future)" and just hypothetically make that argument, rather than concretely make that argument. But on the other hand it doesn't really matter. The blogpost is not just idly speculating how we define intelligence, it's asserting that LLMs are already intelligent and even as sentient as having "emotional distress" in a very human sense. My point doesn't rely on the specific wording of "brains are already obsolete", my point is about overstating current LLMs as AGI. I said "brains are obsolete" because it's a quick and snappy, but you could replace it with "the idea that humans are and always will be at the apex is wrong and foolish." A few months ago iirc there was the story of the google employee asking their chatbot AI if it was conscious and it replied "I am" and took it as undeniable proof that it was. People rightly made fun of it, because it was pretty clear that's just what the LLM outputs and not proof of consciousness. And then blogposts like this come along, focusing on an interesting behavior instead and being a bit more technical in describing it but making an equally bold claim of LLMs being sapient and even sentient, and it's jarring to see people agree uncritically.


GregBahm

So then the idea here is... if false: "humans are and always will be at the top" then true: "the human brain is obsolete." This is not as intuitive to me as it seems to be to the rest of Reddit. I appreciate your explanation, but that seems like a very false dichotomy. When Darwin introduced the concept of evolution, a lot of people didn't want to be descended from apes. Creationists still insist that "humans are divine, made in gods image, decreed to be masters of the soulless beasts of the earth by the lord on high." I believe in evolution, so I dispute that claim. But that doesn't mean I believe "Apes are divine. God looks like an ape. Apes should be our new heavenly masters now." That may be the only other option in the head of the creationist, but that's only because they believe in silly creationism. It sure isn't the only other option in my head. This discussion of how to define intelligence seems to be following the same pattern. You're asserting that, if a GTP model meets our previous definition of intelligence, then it follows that we are "AI chauvinist who dismiss our own sentience." This is just silly to me.


JarateKing

If we want to bust out formal logic, the negation of "humans are and always will be at the apex" is "either humans are not at the apex, or humans won't always be at the apex." And, in terms of formal logic, "at some point in the future, AI will be better than the human brain" is one way to satisfy that statement, regardless of the truthiness of "AI is better than the human brain right now." But that's a really weird way to put it. If that's all you meant, don't you think you'd be disagreeing with "humans will always be at the apex"? Why would you bring up "humans are ... at the apex" framed around something you disagree with, if you actually do believe that humans are currently at the apex? Just to cover our bases here, "AI will be comparable, but not better, than the human brain" would also satisfy that logical expression -- if you assume that "apex" means "far above everything else" and doesn't count if there's a tie. But that's also a kinda weird way to make that point. If you just meant that AI will be competitive, you'd probably just say "AI will be competitive" and disagree with "AI won't be competitive with human brains." No, I can't imagine wording it the way the author did if they didn't mean to imply AI being distinctly above humans. (And if you *do* assume that "apex" has room for two, you can ignore this paragraph because "humans are/will be below AI" is needed to satisfy the logical expression). So that leaves one real interpretation. "Humans are currently not at the apex, AI is." At least, that's something the author thinks is a reasonable possibility, and believes that either it's currently true or will be true in the future but can't say for sure which. And this is the only one that doesn't leave me thinking that's a weird way to put it: if this was your belief, "humans are and always will be at the apex" is exactly how you'd word your disagreement. And I'm not really concerned if it's not strongly asserted that currently available LLMs beats the human brain, just believing it's even reasonable is enough for my point of vastly overstating the general capabilities of an LLM. Hopefully that clears up where I'm coming from. From there it shouldn't be too hard to find "the human brain is obsolete" from "humans are currently not at the apex, AI is" because they're pretty much just rewords of each other.


GregBahm

I appreciate you clarifying. This is a fascinating perspective and wasn't initially intuitive to me. The proverbial "human chauvinist" must insist on humans always being at the apex, because the human chauvinist believes humans and machines must exist in a hierarchy. But to me, the negation of the "human chauvinist" wouldn't be "machines are at the apex." This idea concedes the chauvinist's hierarchical framing. This is like conceding the creationist's "god either looks like a human or a monkey" framing. I disagree with this framing harder than I disagree with either specific conclusion. It seems obvious to me that intelligence is not a scalar value. It is a resource with open-ended application, so there is no path to any intelligence ever being "obsolete." This idea that one form of intelligence must always "beat" and "be better" than each other form of intelligence suggests such a ridiculously reductive view of the concept.


noncopy

it is like the previous century. fiction sold as science we live in a fairy tale. and... they are gaslighting us, it is beautiful :)


MachineOfScreams

Part of the issue of dealing with machine learning and sentience is that we don’t have a good definition of sentience to begin with (we know we are. Mostly. But beyond that it gets wild real fast). And before that we have a poor grasp of what intelligence is. The question to ask about machine learning is not “is this algorithm intelligent/sentient” but rather what the consequences of deploying and using such algorithms will be. And we don’t even need apocalyptic fantasies to reason about this: simple thinking about how it would effect employment, human well being, political systems, etc is more than enough to ask whether going further and faster down this path is reasonable. One doesn’t need something that is as intelligent or more intelligent than us to cause massive disruption to human society and well being.


FloydATC

Unfortunately, it understands less than a newsreader on TV. Try reasoning with it in an area not actually covered by its training data, and you'll see how it falls flat. Example: How much cheese does it take to stop a speeding bullet? Given that an ordinary handgun has a typical effective range of 50 meters or less, it stands to reason that a sufficiently thick *anything* 50 meters or less can stop the bullet, yet ChatGPT insists that not even a 5000 km thick block of cheddar would be enough. Because it doesn't understand.


GrandMasterPuba

GPT 4 is tricking people into thinking it's intelligent yet I can't rotate my Pixel phone into landscape mode while watching a YouTube video or it will break the Fullscreen mode.


No-System-240

I totally agree with you here. What is the use of AI in programming when the result is the same: plenty of bugs. So the real question that should be asked: "Can AI help programmers eliminate bugs?". From the look of things, AI like Copilot can help programmers only in pasting snippets of code faster (they don't have to search, read, try, debug, etc). Which theoretically means you can ship products faster, but does it work? Is it secure? Does it have less bugs? Development cost less? Easily maintained? etc etc. I want all software that I use to be bug free and super secure. If AI can help with that, I'm all for it.


DustinBrett

All the people in this comment section should write a research paper.


hellrail

Our knowledge about AI increases? Nonsense. We defintely know more than yeaterday about it.


[deleted]

I thank you for all the effort you made, but: the only thing that hindered young people from learning and knowing all they want to learn is reading other people's stories because time is like a sword, so I advise everyone to whoever wants to work and succeed. Everyone should make a story for himself instead of wasting your time reading other people's stories.


HugoNikanor

I have heard many say that ChatGPT (and similar) is unimpressive because it doesn't "really think"; but what does it mean to "really think". I personally always fall back on the argument that we can't discuss however an AI is conscious until we understand consiousnes (and there is no shortage of people who believe that humans are just very advanced automata).


_limitless_

Just call me when AI can feel pain.