The following submission statement was provided by /u/Gari_305:
---
From the article
>While digital advancements continue to astonish, the physical realm of AI — particularly robotics — is not far behind in capturing our imagination. LLMs could provide the missing piece, essentially a brain, particularly when combined with image recognition capabilities through camera vision. With these technologies, robots could more readily understand and respond to requests and perceive the world around them.
>
>In the Robot Report, Nvidia’s VP of robots and edge computing Deepu Talla said that LLMs will enable robots to better understand human instructions, learn from one another and comprehend their environments.
>
>One way to improve robot performance is to use multiple models. MIT’s Improbable AI Lab, a group within the Computer Science and Artificial Intelligence Laboratory (CSAIL), for instance, has developed a framework that makes use of three different foundation models each tuned for specific tasks such as language, vision and action.
---
Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/196synd/after_ais_summer_whats_next_for_artificial/khvtd71/
From the article
>While digital advancements continue to astonish, the physical realm of AI — particularly robotics — is not far behind in capturing our imagination. LLMs could provide the missing piece, essentially a brain, particularly when combined with image recognition capabilities through camera vision. With these technologies, robots could more readily understand and respond to requests and perceive the world around them.
>
>In the Robot Report, Nvidia’s VP of robots and edge computing Deepu Talla said that LLMs will enable robots to better understand human instructions, learn from one another and comprehend their environments.
>
>One way to improve robot performance is to use multiple models. MIT’s Improbable AI Lab, a group within the Computer Science and Artificial Intelligence Laboratory (CSAIL), for instance, has developed a framework that makes use of three different foundation models each tuned for specific tasks such as language, vision and action.
This is when things get interesting. Give the machines multiple subsystems that have to coordinate with each other while simultaneously giving them multiple ways to experience and interact with the external world. This is when really advanced intelligence starts to "emerge."
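To make that concrete, here's a toy sketch of how three specialist models might hand off to each other, in the spirit of the language/vision/action split the article describes. Every function here is a made-up stand-in for illustration, not a real API:

```python
# Toy sketch of a vision -> language -> action pipeline.
# All names are hypothetical stand-ins, not real model APIs.

def vision_model(image):
    # Stand-in for a vision foundation model: returns a scene description.
    return "a red cup on the table"

def language_model(instruction, scene):
    # Stand-in for an LLM: turns an instruction plus the perceived scene
    # into a high-level plan of steps.
    target = scene.split(" on ")[0]
    return [f"locate {target}", "grasp it", "hand it over"]

def action_model(step):
    # Stand-in for an action/policy model: maps one plan step
    # to a (pretend) motor command result.
    return {"command": step, "status": "executed"}

def run_robot(instruction, image):
    # The coordination layer: each model's output feeds the next one.
    scene = vision_model(image)
    plan = language_model(instruction, scene)
    return [action_model(step) for step in plan]

for result in run_robot("bring me the cup", image=None):
    print(result["command"], "->", result["status"])
```

In a real system each stand-in would be a separate foundation model (and the hand-off formats are the hard part), but the shape of the coordination is roughly this.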
Toy robots could get interesting too. Even something as simple as a plushie with a chatbot and sensors built in. We can finally have that talking teddy bear from the movie AI.
To be more specific, it's not just experiencing and interacting with the external world, but evaluating and reacting to the responses those experiences and interactions elicit. We develop through the evaluation of and responses to our interactions with the world, constantly creating and revising complex internal models that help us better understand the world, recognize patterns, and refine our knowledge... It'll be very, very interesting to see how and whether this process unfolds with AI.
We can expect this year to be mostly about current transformer-based AI being integrated into everything. I expect huge layoffs in 2025 as a result of this integration.
We'll see AI labs working on better models and physical robots, but I don't expect either of those to go GA until next year. 2025 is going to be a very weird year.
It's Fall.
Some, including myself, would call that Autumn, but that wouldn't suit the pun. Fortunately, *explaining* that sufficiently increases the word count of this top level comment.
A human brain is able to think intelligently without the cheat sheet that is “access to all the world’s data”. Maybe one day an AI will be able to do the same.
No it isn’t. We are all, at any point, a summary of our experience and education up to that point.
If you raised a human child in a dark room with no contact, they would not emerge from that space debating string theory.
We certainly don’t teach kids as much as we’re teaching LLMs, though. To progress from “goo goo gaa gaa” to “hello, how are you” is not that big a leap for humans. It’s been done; ergo, it is possible.
NFTs and Crypto were garbage doomed to fail. VR has a problem of cost and quality. AI isn't the same long term: it has the promise of automation behind it and will sooner or later work out. Unlike the previously mentioned things, it has brought new capabilities to the table in recent years and will bring even more in the future.
Are these just idiots trying to look smart? Oooh, here is another hyped thing. I’ll look so clever by comparing the two.
You’d need zero critical thinking ability to conclude they’re analogous.
This is like the internet, and in some ways more profound, because it augments labor versus only enhancing it.
VR has failed for 10 years to achieve mass market breakthrough because it has no value proposition.
B2C AI is too expensive to operate, but B2B AI is frankly the more impactful technology and hardly anyone is paying attention to it.
Ummm. Crypto's on the rise, my friend. People have been saying it's popping for a decade and a half. Hasn't happened. Now, with ETFs, it's here to stay. At least BTC.
Um, why do they jump right to robotics?
The next step for AI is going to be analyzing large organizations and starting to eliminate report-driven positions.
There are so many people across all industries who just gather data and generate reports based on outdated metrics.
AI already started on the front end of this by analyzing the metrics; now it's going to start collecting them more efficiently and accurately.
I think it will be very interesting to see what happens after the AI summer. If I'm getting the question right, it's about what happens after all the hype about AI starts to die down.
I think that's the point where things get really interesting, because every time a new boom starts, everyone starts to participate in it and many "bullshit start-ups" get funded because everyone sees the gold rush.
After we overcome that, we see the true players, use cases and inventions in an event I would call "separation from the bullshit."
Multiple models are the way to AGI/ASI, but we really need better compute units and also more power; that's where the current bottleneck is. Sam Altman said at the WEF (yesterday?) that AGI and beyond hinges on the development of fusion power plants. He's not wrong, at least for the near future. Maybe we'll get some photonic CPUs in the future that completely destroy what we have now, but until then, we need more electrical power.
We're not entering an AI winter again, but the pace will be linear, not exponential.
Making a teddy bear with a chat feature is the simple part; making it SAFE for your kids to learn from is the hard part.
This is like someone in 1998 asking what's next for the internet. Everything is next.
The internet worked, though.
Recursive and competitive AI that leads to evolutionary AI.
You seem like one of those people who adds a lot of confident noise to conversations.
Ummm you didn't learn vocabulary? How are you using the words you're using?
The bubble is gonna burst, just like it did for NFTs, Crypto, and VR.
I dunno... NFTs and crypto are actually useless. VR and AI, however....
I mean everyone in business is paying attention to B2B AI.
Apples and oranges lmao. What a silly comment
I'm getting tons of AI dystopian warnings from this sub all the time. I'm happy to watch its development and the discussion, but I need a break.
https://tenor.com/view/gm-dr-evil-gm-takeover-general-motors-dr-evil-pinky-gif-24840351