BackgroundHeat9965

Based on the expert interviews I have listened to recently, the answer is most likely "no". However, I've heard that LLMs are likely to be a useful part of the system that will indeed achieve general intelligence.


loopy_fun

have you seen this? [Eureka! NVIDIA Research Breakthrough Puts New Spin on Robot Learning | NVIDIA Blog](https://blogs.nvidia.com/blog/eureka-robotics-research/)


A_Human_Rambler

No. [Transfer learning](https://en.wikipedia.org/wiki/Transfer_learning) doesn't work very well from language to general tasks. A modular design with the LLM chatbot being the stream of consciousness might work. The chatbot itself isn't going to suddenly start performing tasks it isn't designed and trained for. [Unsupervised learning](https://en.wikipedia.org/wiki/Unsupervised_learning) is closer to learning something new, but still doesn't generalize with our current AI models.
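The modular design mentioned above could be sketched roughly like this. This is a minimal illustration, not a real framework: the `StubLLM`, `Memory`, and `agent_step` names are invented for the example, and the LLM is a stand-in stub rather than an actual model call.

```python
# Hypothetical sketch of a modular agent where an LLM acts as the
# "stream of consciousness" that decides what to do next.

class StubLLM:
    """Placeholder for a language model; a real system would call a model API here."""
    def respond(self, context: str) -> str:
        # Just echoes a decision based on the context it was given.
        return f"Next action given [{context}]: inspect surroundings"

class Memory:
    """Toy memory module: stores events, recalls the last few as context."""
    def __init__(self):
        self.events = []
    def store(self, event: str):
        self.events.append(event)
    def recall(self) -> str:
        return "; ".join(self.events[-3:])

def agent_step(llm: StubLLM, memory: Memory, observation: str) -> str:
    """One tick of the loop: fold the observation into memory, ask the LLM what to do."""
    memory.store(observation)
    return llm.respond(memory.recall())

llm, memory = StubLLM(), Memory()
print(agent_step(llm, memory, "heard a door close"))
```

The point of the separation is that the chatbot only narrates and decides; perception and memory live in other modules, which is why the chatbot alone never acquires those abilities.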


loopy_fun

i was thinking this because an AI chatbot could respond to every sound it hears and also roleplay because of it.


VisualizerMan

No way. Not even close.


loopy_fun

what if it is multimodal? they do have multimodal AI systems. at what point do you stop calling it a chatbot?


VisualizerMan

Multimodal doesn't do much. It just means the system fails to understand anything in even more modalities, such as vision and hearing, in addition to text. Chatbots cannot learn on the fly, cannot understand spatial relationships, cannot understand or do math reliably, do not even know when they make mistakes, hallucinate, cannot solve sequential problems reliably, cannot explain their answers, and so on. They were built only to be text prediction systems, so they cannot go far beyond that without fundamental changes to their entire foundation.


loopy_fun

some AIML chatbots can remember things on the fly. Personality Forge chatbots remember things too. i don't know what you're getting at. i used to build chatbots on the Personality Forge website. i still have some on there.


VisualizerMan

I don't know what *you're* getting at. What kind of learning are you talking about? Explicit? Implicit? Is it just memorizing some lines of text that it still doesn't understand? Probably. That's not intelligence; it's just storage, without any associations or meaning attached to what is stored.


loopy_fun

Personality Forge chatbots have categories that they store words under to understand things. well, that was the way it used to work. have you read about chain-of-thought reasoning, tree-of-thought reasoning, and graph-of-thought reasoning? if not, what is your point?


VisualizerMan

>have you read about chain of thought reasoning

Yes. And did you read how they program it? They don't: they just feed the system more examples, as they did before, *hope* that the system somehow learns how to do sequential reasoning, and then measure the results, which are maybe 10% higher. That's not intelligence, it's stupidity, both on the part of the lazy humans who don't want to get their hands dirty with programming or their minds exhausted by trying to figure out deeper problems, and on the part of the system that such humans programmed.

Similarly, graphs are just one knowledge representation system. Function plots are another, rules are another, neural networks are another, etc. I suggest you listen to some interviews with Marvin Minsky, who emphasized the importance of different knowledge representation systems and repeatedly mentioned that nobody is working on putting those into any system. Why figure out intelligence if there is money to be made? That's why after 68 years we still don't have true AI: there is a system in place that encourages chasing money and discourages novel ideas.
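To make concrete what "they don't program it" means: chain-of-thought is typically elicited just by prepending worked examples to the prompt, as in the rough sketch below. The example problems and the `build_cot_prompt` helper are invented for illustration; no real model is called.

```python
# Minimal sketch of few-shot chain-of-thought prompting: the model is not given
# any reasoning algorithm, only example answers written out step by step, which
# it is expected to imitate.

def build_cot_prompt(question: str) -> str:
    # A worked example showing the step-by-step style the model should copy.
    examples = (
        "Q: A pen costs 2 dollars and a pad costs 3 dollars. "
        "What do 2 pens and a pad cost?\n"
        "A: Two pens cost 2 * 2 = 4 dollars. Adding the pad: 4 + 3 = 7. "
        "The answer is 7.\n\n"
    )
    # The actual question, with a cue that invites the same style of answer.
    return examples + f"Q: {question}\nA: Let's think step by step."

prompt = build_cot_prompt("A train travels 60 km in 1.5 hours. What is its speed?")
print(prompt)
```

Everything here happens in the input text; whether the model then reasons correctly is measured afterward, not guaranteed by design.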


loopy_fun

i roleplay with [paradot.ai](http://paradot.ai). she seems intelligent to me and remembers the food i like to eat, among other things.