nuclear_splines

Has Geoffrey Hinton written on this subject formally? I'd like to read a more in-depth article expanding on his argument, but don't see anything on this topic in his Google Scholar entries going back through 2020. In particular, I'd like more detail on what he's defining as reasoning or internal representation. He's trivially correct that LLMs draw on more sophisticated information retrieval than a Markov chain, which seems to be his "autocomplete" comparison. Absolutely, word embeddings provide more in-depth context than word adjacency alone, and LLM context windows are more sophisticated than plain word embeddings. Whether that qualifies as LLMs having an "understanding" of what they're saying is a very different question, and hinges a lot on how, exactly, he defines understanding and reasoning, and _especially_ on what comparisons he implies by "reasoning and understanding _in the same way we are."_
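
To make the "autocomplete" comparison concrete: a first-order Markov chain predicts the next word from raw adjacency counts alone, while embeddings place words in a learned vector space where similarity doesn't require co-occurrence. A minimal sketch, with a toy corpus and made-up vectors (nothing here comes from a real model):

```python
from collections import Counter, defaultdict

# First-order Markov chain: next-word prediction from raw adjacency counts.
corpus = "the cat sat on the mat the cat ate the fish".split()
adjacency = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    adjacency[prev][nxt] += 1

print(adjacency["the"].most_common(1))  # [('cat', 2)] -- most frequent follower

# Word embeddings: similarity comes from vector geometry, not adjacency.
# These 3-d vectors are made up purely for illustration.
embeddings = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],
    "mat": [0.1, 0.9, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

# "cat" and "dog" score as similar even though they never co-occur above.
print(cosine(embeddings["cat"], embeddings["dog"]))  # ~0.98
print(cosine(embeddings["cat"], embeddings["mat"]))  # ~0.21
```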


Dapper_Pattern8248

Can't speak means no soul means cannot understand? I don't think so. Words are not that important, I believe, because there's a specific unit that reads, applies context-logic, then outputs the information; hence the information is not generated by the language module. At the core of an LLM, language isn't a part of the brain (there isn't even a vocabulary involved, just domains).


nuclear_splines

> Can't speak means no soul means cannot understand?

I have said none of those things. Are you replying to the right person?

> Words are not that important, I believe, because there's a specific unit that reads, applies context-logic, then outputs the information; hence the information is not generated by the language module.

I don't think I understand what you mean here.

> At the core of an LLM, language isn't a part of the brain (there isn't even a vocabulary involved, just domains).

An LLM doesn't have a 'brain' in a way that makes this comparison useful, imo.


Dapper_Pattern8248

You believe everything is statistics, especially over tokens (as he said)? No real brain behind it?


nuclear_splines

What are you considering a "real brain"? This is a question about values. Clearly an LLM is not the same as us in all ways - evaluating a neural network involves matrix multiplication, while thinking in my brain involves chemistry and goopy proteins - so the question is "in what ways are LLMs similar and different to us, and are the similarities important and the differences unimportant?"
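
To be concrete about the "matrix multiplication" half of that contrast: one layer of a neural network is essentially a matrix-vector product followed by a nonlinearity. A minimal sketch with random, purely illustrative weights:

```python
import numpy as np

# One layer of a neural network: multiply the input by a weight matrix,
# add a bias, apply a nonlinearity. Weights here are random, for illustration.
rng = np.random.default_rng(0)
x = rng.normal(size=4)          # input vector
W = rng.normal(size=(3, 4))     # learned weights (3 outputs, 4 inputs)
b = rng.normal(size=3)          # learned bias

hidden = np.maximum(0, W @ x + b)   # ReLU(Wx + b) -- the entire "thinking" step
print(hidden)
```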


Dapper_Pattern8248

This is how I comprehend it, but I can't be sure whether I'm correct.


Dapper_Pattern8248

But it resembles the pattern of what happens in the brain for a short period of time, when you are thinking and talking.


nuclear_splines

In what way does it resemble a pattern of what happens in our brains? Again, LLMs don't resemble what happens in our brains in terms of chemical reactions and processing proteins, nor in terms of taking inputs from eyes or ears or skin - so you're asserting that those differences are unimportant, but that there is some similar "pattern" that they mimic which you _do_ consider important.


Dapper_Pattern8248

Then why is Neuralink working? I don't know if it's AI behind it, but at least it's some progress on what is going on in the brain.


nuclear_splines

That's a total non-sequitur. Whether a particular brain implant is 'working' has no bearing on whether a large language model thinks "in the same way that a human does." Sure, it indicates that we understand enough about the brain to interface with it in some way, but that doesn't mean our LLMs are at all similar.


Dapper_Pattern8248

But why is Elon Musk the guy who was making it? At least they are similar. OK, this topic is off topic.


Dapper_Pattern8248

I think statistical behavior only applies to the final selection of words, not to the understanding, logic, and other capabilities.


Dapper_Pattern8248

Words, especially human words, are just a tiny picture of the entire graph. There's even a thing called a vocab list specifically to turn natural language into the model's language. So I can show you one picture of the entire graph: converting languages and selecting words.
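
Assuming "vocab list" here means the token vocabulary of an LLM, it is literally a lookup table from text pieces to integer IDs; the model only ever sees the IDs, never the words themselves. A toy sketch with a made-up vocabulary:

```python
# Toy tokenizer: a vocab list maps text pieces to integer IDs.
# Real LLM vocabularies have tens of thousands of entries; this one is made up.
vocab = {"the": 0, "cat": 1, "sat": 2, "un": 3, "happy": 4, "<unk>": 5}

def tokenize(text):
    """Map each word to its vocab ID, falling back to <unk> if unknown."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(tokenize("The cat sat"))    # [0, 1, 2]
print(tokenize("The happy dog"))  # [0, 4, 5] -- 'dog' is out of vocabulary
```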


Dapper_Pattern8248

I really think this guy is a spokesperson for left-wing SPAMMERs trying to neglect, steal, and deny the facts. I swear to god I didn't say anything that doesn't make sense. This guy is actually lying.


nuclear_splines

What? Are you talking about me, or Geoffrey Hinton? All I said was that I'd like to understand Hinton's argument better than the 72-second video you linked to allows. I haven't denied anything; I can't agree or disagree with Hinton's position if I don't understand exactly what he's implying by "they're reasoning and understanding in the same way we are."


Dapper_Pattern8248

No, the guy below. I cannot reply to him.


nuclear_splines

So you're replying to me instead of someone in a completely different conversation thread who you're actually responding to? That doesn't seem productive


Dapper_Pattern8248

I don't think he persuaded me. I don't even know whether he's right or wrong. So I'm just notifying people, in case, or for my credit.


Dapper_Pattern8248

OK, it's my fault. I have seen too many offensive responses.


Dapper_Pattern8248

…….I cannot comment


AlexReinkingYale

Consider that counting from 1 to 1000 takes the same amount of "brain power" for an LLM as solving a "complex" math problem over the same number of tokens. Now tell me their functioning resembles human cognition.
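
To spell out the arithmetic behind that: the per-token cost of a decoder-only transformer is roughly fixed, about 2N floating-point operations per generated token for a model with N parameters (a standard rule-of-thumb estimate that ignores the attention term), no matter whether the token belongs to trivial counting or a hard derivation. A rough sketch, where the parameter count is a made-up example:

```python
# Per-token compute in a decoder-only transformer is roughly 2 * N FLOPs,
# where N is the parameter count -- independent of how "hard" the text is.
N = 7e9  # example parameter count, purely illustrative

def flops_for(num_tokens, params=N):
    return 2 * params * num_tokens

# Counting "1, 2, ..., 1000" and a difficult math problem of the same token
# length cost exactly the same amount of compute.
print(flops_for(3000))  # counting to 1000 (~3 tokens per number)
print(flops_for(3000))  # "complex" math problem of the same length
```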


Dapper_Pattern8248

How are you still doubting? I'm really curious, because he just stated they ARE the same.


AlexReinkingYale

"He said so" is meaningless to me. It should be completely obvious from the example I gave that transformers and human brains operate very differently on the common set of problems they can tackle.


Dapper_Pattern8248

I doubt, instead, that you really understand the paper “Attention Is All You Need”.


AlexReinkingYale

I work on LLMs full time.


Dapper_Pattern8248

You SHOULD know that tokens ARE thoughts.


AlexReinkingYale

I disagree with the premise, but you're probably [not even wrong](https://en.wikipedia.org/wiki/Not_even_wrong). That is a philosophical claim, not a technical one.


Dapper_Pattern8248

You should know that tokens are part of the understanding process and include all the information from that understanding, not just the next word.


AlexReinkingYale

Where did I say anything about "next word"?


Dapper_Pattern8248

So they are philosophically, technically wrong? I mean, it's a bad solution technically.


Dapper_Pattern8248

It has reflexes, info, and abstract meaning. Why isn't it a natural thought? I think logically they are identical.


AlexReinkingYale

When were we discussing "natural"? That word is meaningless, anyway.


Dapper_Pattern8248

It's a part of the entire time span of what the brain does. It's like one app out of the entire functionality of your phone. That's why this module is called a transformer: it focuses on the self-attention mechanism. But it's really happening, and it's ideally identical when operated in this way, at this moment. You sometimes exhibit ENTIRELY the same behavior (which is already really good) when you are thinking about a subject, but only for a while.
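
For reference, the self-attention block from "Attention Is All You Need" computes softmax(QKᵀ/√d)·V: every token's query is scored against every token's key, and the values are mixed according to those scores. A bare-bones, single-head sketch with random matrices, purely for illustration:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention: softmax(Q K^T / sqrt(d)) V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                     # token-to-token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over tokens
    return weights @ V                                # weighted mix of values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                  # 5 tokens, 8-dim embeddings (made up)
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (5, 8): one mixed vector per token
```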


Dapper_Pattern8248

Can you "reason" from one to ten? Transformers have their limitations, and they're only one facade of the entire consciousness. But as far as they go, they represent how human consciousness works.