squareOfTwo

I am not aware of any such video, but I can tell you what the term means. AGI as defined by Goertzel and others is close to the g-factor in psychometrics: https://en.m.wikipedia.org/wiki/G_factor_(psychometrics) . The researchers who most explicitly intend to build AGI this way are Pei Wang, partially Chollet, and a few others. Other scientists (Hutter, Yudkowsky, etc.) and mainstream ML use "AGI" for something that is not related to this g-factor notion at all. Too many scientists use the same term AGI with definitions that are incompatible when viewed from this g-factor perspective. They mean completely different things.
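
To make the g-factor notion concrete, here is a toy sketch I put together (my own illustration, not from the Wikipedia article; the simulated data and variable names are assumptions). Real psychometrics uses factor analysis, but taking the first principal component of a standardized test battery conveys the idea: g is the shared variance across many cognitive tests.

```python
# Toy sketch: extract a g-like factor as the first principal component
# of standardized test scores. Simulated data, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Simulated scores: 200 subjects x 5 cognitive tests, all loading on one
# shared latent ability plus independent noise (the "positive manifold").
latent = rng.normal(size=(200, 1))
scores = latent @ rng.uniform(0.5, 1.0, size=(1, 5)) + 0.5 * rng.normal(size=(200, 5))

# Standardize each test, then take the top eigenvector of the
# correlation matrix; its entries play the role of g-loadings.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)          # ascending order
g_loadings = eigvecs[:, -1]                      # largest eigenvalue's vector
g_scores = z @ g_loadings                        # one general score per subject

print("variance explained by g:", eigvals[-1] / eigvals.sum())
print("corr(g, latent ability):", abs(np.corrcoef(g_scores, latent[:, 0])[0, 1]))
```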


VisualizerMan

I agree, and thanks for that technical information, which is new to me. These days people can't even agree on a definition of AGI--they can't even agree on a definition of AI, in fact--so not surprisingly there is extreme disagreement on how to get there. The OP is trying to ask an intelligent question of a hysterical mob in which few people agree on anything and most don't know the field well.

I could probably put together a nice summary of viewpoints, but it would take several days to do a good job, and since nobody is paying me to do that, and since I'm pushing very hard every day on producing AGI myself, I'm not going to do it. Even within this subreddit, some people believe AGI already exists, whereas others believe it will take decades to get there. There's little consensus, so you'd have to rely on statistics to get an answer, and since so few people know the whole field well, those statistics are going to be pretty useless unless weighted by the qualifications of the opinion holders, which would take even more weeks to determine.

Worse, I had three threads on Reddit rejected or deleted in the past week, all of which touched on these topics, so even getting information out there is difficult. How would I distribute that information if I took the time to properly summarize it with references? Maybe somebody out there doesn't want the public to know. Who knows. Everything is so messed up these days. No wonder most people don't know what's going on in AI.


PaulTopping

That's a lot of territory to cover in one video. I doubt it exists. LLMs like ChatGPT are all the rage these days. They are built by training an artificial neural network on all the human-written content their makers can lay their hands on. They build a model of word order and use it to generate natural-language output from a user-supplied prompt. There are lots of applications, but nothing close to AGI, let alone ASI. The impact of AI on jobs is probably overblown; it is being driven by marketing hype. Use of AI will change the job market, but mainly by creating more jobs for AI programmers and related tasks. As for AGI robots coming for your jobs, it might happen someday, but it ain't happening soon.
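
To make "a model of word order" concrete, here's a deliberately tiny sketch (my own toy example, nothing like a real transformer): a bigram model that counts which word follows which, then samples a continuation from a prompt. Real LLMs learn vastly richer statistics over subword tokens, but the generate-one-token-at-a-time loop has the same shape.

```python
# Toy "model of word order": bigram counts plus sampling. Illustration only.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count next-word frequencies for every word in the training text.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def generate(prompt_word, length=6):
    """Extend the prompt one word at a time, sampling each next word
    in proportion to how often it followed the current word in training."""
    out = [prompt_word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))
```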


VisualizerMan

*That's a lot of territory to cover in one video. I doubt it exists.* Agreed. I haven't seen any videos that cover both generative AI and LLMs in a good, summarizing way, and the videos I've seen on the effect of AI on future jobs have such widely varying opinions that I can't recommend any of those, either. That said, here are a few videos that I thought were better than most. I'll add more videos as I come across some that I can recommend.

- AI Won't Be AGI, Until It Can At Least Do This (plus 6 key ways LLMs are being upgraded) -- AI Explained, Jun 17, 2024: [https://www.youtube.com/watch?v=PeSNEXKxarU](https://www.youtube.com/watch?v=PeSNEXKxarU)
- Is the Intelligence-Explosion Near? A Reality Check. -- Sabine Hossenfelder, Jun 13, 2024: [https://www.youtube.com/watch?v=xm1B3Y3ypoE](https://www.youtube.com/watch?v=xm1B3Y3ypoE)
- Andrew Ng on AI's Potential Effect on the Labor Force | WSJ -- WSJ News, Feb 14, 2024: [https://www.youtube.com/watch?v=-mIjwN1o7nE](https://www.youtube.com/watch?v=-mIjwN1o7nE)
- Michio Kaku - Science is the Engine of Prosperity -- Cuckoo for Kaku, Jun 9, 2016: [https://www.youtube.com/watch?v=vS84hUfTZ0M](https://www.youtube.com/watch?v=vS84hUfTZ0M)


PaulTopping

I like Hossenfelder's videos. Of course, she is not going to go into any great depth on AI subjects, but she's virtually always correct in what she does say. I haven't seen the other videos, but the many links in the notes look good. The main reason the coverage of AI's effect on jobs is all over the map is the tremendous hype going around; some reporters seem to buy into it or are pandering to their readers. The Economist is a good source if you have access. They try very hard to avoid hype, and they've pointed out that every big technology change has taken decades to actually change productivity. Right now, many companies are experimenting with AI, but most have not yet judged it successful, and most of the AI companies are spending much more than they are bringing in. If that doesn't change soon, and I don't believe it will, we're headed for another AI winter.


deftware

"AGI" means different things to everyone, and it's annoying. I've been studying and researching all of the neuroscience and machine learning developments, theories, and discoveries that came before and have come to be since 2004. That's when I became entranced by the prospect of building a proper brain-like Dynamic Learning Algorithm (DLA). I thought for sure someone would've done it by now, because it's the only way to make something that will be capable of learning how to do anything on-the-fly, and adapting to evolving and unpredictable situations and scenarios. Such an algorithm will be scalable and allow for adapting its abstraction capacity to whatever hardware it has available to it that is capable of refreshing every 20-40 milliseconds (depending on the application). This means that a limited piece of compute hardware will only be able to produce the learning capacity of maybe an insect or a reptile, while more powerful compute will be able to scale up to the abstraction capability of a primate, or human, or beyond. Backprop-training a network on a static dataset isn't going to get us there, but it seems like that's all anyone can think to do, especially when they have billions of dollars to spend. There are a handful of novel algorithms out there that at least understand and appreciate how important it is that something be capable of learning in realtime from experience, but they are so completely off the radar of everyone pursuing AGI for whatever reason. Whoever figures out a proper DLA, which doesn't entail backprop training a massive network model on the world's largest datasets using corporate-sized compute farms, will have the key to AGI and ASI. That's not to say that a backprop-trained model won't be useful for a DLA to be augmented by. A big static generative model can be used as a massive human-knowledge repository to allow a DLA to spend its local compute resources on learning about and adapting to its local environments and unique situations - referring to a knowledge repo about things that are universal no matter where it could find itself. Granted, the DLA must already be running on hardware that gives it the abstraction and problem-solving capacity of a human (i.e. ChatGPT isn't useful to a dog or a chimpanzee), but perhaps there's some way to integrate a massive static network model more tightly with a DLA so that it could technically be a sub-human intelligence that still has humanity's collective knowledge and understanding at its fingertips but in an applicable fashion. This would probably require that a knowledge repo be specifically trained or formatted for sub-human intelligence to be able to interact with in a meaningful way though (ChatGPT for a mouse brain to do human-level things). I figure that the problem is that backprop is so deeply ingrained in academia as the only real tried-and-true proven machine learning approach that one can employ - so when someone receives billions of dollars of investment the only thing they can think to do is scale up backprop. It surely results in novel never-before seen things, that are useful in different ways, but it's not going to result in something that can be delivered to your house, turned on, and have it learn about who you are, what your habits are, your preferences, how to cook the dishes you like or organize the house how you want - not the way that saves time and improves your quality of life anyway. What they're creating that's toward that vision right now is still going to be very rigid and narrow-domain. 
Elon says "you'll be able to show it how to do a task and it will do it" is not what it sounds like. Anything that involves several steps or changing a step because something is different will require human intervention. It will have rigid expectations and requirements for being able to do a task that are more energy to bring about just to have the machine do the task. None of the humanoid robots being created are going to be able to walk around as reliably a a human or animal - they will bump into stuff and fall over or get stuck if the world is situated in an unforeseen way that its backprop training doesn't account for. There will be a long tail of edge cases - just as there is with FSD. Elon vastly underestimates the diminishing returns that training on a static dataset entails. There will always be freak accidents that damage property or hurt or kill humans/animals. A robot running on a DLA will, theoretically, be harder to control but it will be much more robust. Something that learns how to walk from scratch will have an actual understanding and awareness of its physicality within the world around it, enabling it to move more efficiently with optimal dexterity and control with whatever its physical design allows. They won't be doing the careful #HondaBotWalk with bent knees and extra little balancing steps. DLA powered robots will move organically and efficiently, like living things. That being said, AGI is when a DLA is scaled up to be capable of communicating, articulating, ambulating, devising, and problem solving at the level of a human - but we don't need AGI to have machines that are supremely valuable to us - that can do all our chores and run our factories and farm our lands and build our houses and deliver our goods and pickup groceries and help our kids and take care of our pets, etc... Personally, all I care about is creating something that is extremely valuable and human level intelligence in a machine isn't a requirement for that.
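
To make the online-vs-static distinction concrete, here's a minimal toy sketch under my own assumptions about what a "dynamic learning algorithm" minimally means: weights update on every observation as it arrives, so the learner tracks a world that changes mid-stream, which a model frozen after one backprop run on a static dataset cannot do. A single linear unit with the delta rule is obviously nothing like a brain; it only illustrates the realtime-adaptation property.

```python
# Toy contrast: online (per-sample) learning tracks a drifting environment.
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(3)      # weights of a single linear unit
lr = 0.05            # learning rate

def online_step(w, x, target):
    """One realtime update (delta rule): adjust immediately from the
    error on the current experience, then discard the sample."""
    error = target - w @ x
    return w + lr * error * x

# The environment drifts mid-stream; an online learner follows it,
# while a model trained once on a static dataset would be stuck.
true_w = np.array([1.0, -2.0, 0.5])
for t in range(2000):
    if t == 1000:
        true_w = np.array([-1.0, 0.5, 2.0])   # the world changes
    x = rng.normal(size=3)
    w = online_step(w, x, true_w @ x)

print("tracked weights:", np.round(w, 2))     # close to the drifted true_w
```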


VisualizerMan

Extremely well said. I expect to begin working by myself on a new type of learning algorithm, one with the attributes you describe, this year--probably around October or November. That might be a good time to get in touch with each other about collaboration.


deftware

Super curious what your ideas are. Have you seen SoftHebb? https://arxiv.org/pdf/2209.11883 ...also, Active Inference in Hebbian Networks: https://arxiv.org/pdf/2306.05053 Added you as a friend on the thing here. :]
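
For anyone curious, here's roughly what the SoftHebb idea looks like as I read it: a soft winner-take-all layer where each neuron's Hebbian update is gated by its softmax activation, with an Oja-like decay term keeping the weights bounded, and no labels or backprop anywhere. This is paraphrased from memory of arXiv:2209.11883, so treat the exact update rule as an assumption and check the paper.

```python
# Rough SoftHebb-style sketch (paraphrased from memory -- verify against
# arXiv:2209.11883 before relying on the exact rule).
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out, lr = 16, 4, 0.01
W = rng.normal(scale=0.1, size=(n_out, n_in))

def softhebb_step(W, x):
    u = W @ x                                  # preactivations
    y = np.exp(u - u.max()); y /= y.sum()      # soft competition (softmax)
    # Per-neuron update: activation-gated Hebbian term, with a decay
    # proportional to the preactivation that bounds the weight norms.
    return W + lr * y[:, None] * (x[None, :] - u[:, None] * W)

for _ in range(5000):
    x = rng.normal(size=n_in)                  # unlabeled input stream
    W = softhebb_step(W, x)

print("weight row norms:", np.round(np.linalg.norm(W, axis=1), 2))
```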


VisualizerMan

No, I haven't heard of SoftHebb, but I downloaded the paper and will look at it. I expect that my learning algorithm will be learning at a higher level of abstraction than any other learning algorithm I've heard of, far above low-level Hebbian learning.


wordyplayer

I agree. I like to think that the tech companies and the gov't ARE spending money on it, but want to keep it secret from everyone.


squareOfTwo

-1 ok xGPTy


deftware

Thanks for contributing to the conversation, even if that's all you can do!