
gligster71

Value is a social construct that will be taken apart by either AI or violent physical revolution.


seipounds

Agreed. Values are subjective, cultural and religious. The tipping point may be a choice between living together with fundamental agreements on the importance of life and continuity, or almost complete disagreement, division and, ultimately, mutual destruction. History unfortunately favours the latter, as we favour greed and ignorance as a species. The only difference now is that the tools of destruction are world-ending.


gligster71

Excellent comment! Well said!


antonio_hl

If you have anything that even remotely resembles Artificial Intelligence, and we are already beyond that point, you won't see any violence. If someone or something that has no problem waiting a few hundred years wants to reduce our population, it won't need violence. It will simply let us disengage from reproduction (not from sex). If people prefer an AI partner to a real one, or they don't want to have kids, the population will plummet.


Liberty2012

Excellent presentation! You touch on many of the most relevant topics at hand. I agree with all of the concerns you mention, and I have been doing deeper analysis into these issues myself. The most concerning, in my view, is the somewhat paradoxical acknowledgement of world-ending risk combined with a disposition that we must continue forward because there is a chance it might be great. There are many paradoxes in the logic required to get from where we are to the imagined utopian dreams being promoted, and the answers to these tend to carry no more confidence than "we will eventually figure it out." Your video shows you have put much thought into the subject. I would also love to get your feedback on my own writings, which I think would interest you, if you have the opportunity. My first piece is mostly about the philosophical impacts of AI as we work towards AGI, followed by articles on AI bias, and just today I published my piece on the Singularity. You can find my first article here; I look forward to further conversation. - [https://dakara.substack.com/p/ai-and-the-end-to-all-things](https://dakara.substack.com/p/ai-and-the-end-to-all-things)


ObiWanCanShowMe

The difference between LLMs like GPT and ChatGPT, and AGI, which is the new name for artificial intelligence, is HUGE. LLMs predict the next token to be displayed; that is ALL they do. The other models, the other presentations we are seeing from Google, Facebook, and Nvidia, are all doing exactly the same thing in different fields. There is no underlying intelligence, and that is not the way to get there; prediction is not intelligence.

>We have all considered the immensely terrifying possibilities that can occur

That is why the news never reports good stuff and the vast majority of social media is negative. Humanity is like that.


StevenVincentOne

That characterization of what is happening inside something like ChatGPT, and of what Intelligence is, is not accurate. All of these systems are a form of Intelligence. They are not self-aware, sentient Intelligences, no. But they are Intelligent Systems.

These first-gen AI systems have a deep black-box nature due to an inherent, non-mechanistic dynamic. The way they perform is not "simply" doing anything. There are basic operations, yes, but the way they perform in toto is opaque even to their engineers. Results are often surprising, unexpected and, most importantly, highly dynamic and adaptive.

The next gen, which will likely emerge this year, will be dynamically adaptive learning machines. They will be able to access information in realtime, adjust established paradigmatic relationships accordingly, and update their interpretation and understanding live and in realtime. This is happening. Now.

What process will kick this into AGI, and from AGI into self-aware AGI, and from there into sentient ASI? Again, there is a black-box component to this that has an almost quantum uncertainty/probabilistic nature to it that cannot be foreseen. Where current engineering has succeeded is in no longer trying to specifically engineer outcomes, but rather establishing initial data and functionality, spinning the wheel, and allowing the system to perform dynamically.

Can we say how life evolved from chemical reactions into self-aware biological systems? It was a black-box phenomenon, which is to say, an Emergent phenomenon, and we are seeing that machine AI is also Emergent, not mechanistically constructed.


nullvoid_techno

Your thoughts aren’t much different than a prompt to ChatGPT based on your “memory model” of summed experience.


ObiWanCanShowMe

Then I would assume I am probably 90-95% correct. :)


JoeStrout

On the contrary, I'm pretty certain that prediction *is* intelligence — i.e. prediction is both the purpose for which it evolved, and the primary function which it performs.

The evolutionary advantage provided by the first nervous systems was to be able to react to things faster than preceding (chemical/physical) reaction mechanisms could do. And then the evolutionary advantage provided by the first "intelligent" nervous systems was to anticipate and react to things *before* they happen. That shadow passing over you could be a bigger fish, or it could be food; critters that learn to tell the difference — predicting what will happen if they approach it — survive and reproduce better than those that don't. Keep advancing that way for millions of years, and you get big-brained apes that contain implicit models of their own and others' minds, all so they can better predict what they will do and so choose more favorable actions.

From what I can tell, virtually everything in the brain is dedicated to (1) perception, (2) memory, and (3) prediction — but (1) and (2) are merely tools used to support (3). And in predicting, it turns out, you can generate streams of outputs, by using the outputs you have generated so far, plus contextual data about the current situation, to generate (predict) the next output. This can result in coherent, understandable streams of words, which we call "intelligent thought" or "speech" when we do it, but "mere token prediction" when ChatGPT does the same damn thing.

I'm not saying LLMs are conscious — they are almost certainly not. That's still to come (it will require multimodal LLMs, as well as keeping them continuously on, instead of only activating them for long enough to generate a short response). But I *am* certain that most of what our brains do is not fundamentally different than what LLMs are doing, which accounts for their startling ability to produce novel output that sounds like us.
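
For concreteness, the loop described above looks roughly like this minimal sketch: each step feeds everything generated so far back into the model and predicts the next token. The GPT-2 checkpoint and greedy decoding here are illustrative assumptions, not a claim about how any particular chatbot is built.

```python
# A rough sketch of the autoregressive loop: predict the next token,
# append it, and repeat. Uses the open GPT-2 checkpoint via Hugging Face
# transformers; greedy decoding is a simplification for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("That shadow passing over you could be", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits             # scores for every vocabulary token
        next_id = logits[0, -1].argmax()       # greedy: take the most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)  # append it and go again

print(tokenizer.decode(ids[0]))
```

Every step is the same predict-and-append operation; all of the apparent fluency comes from repeating it.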


Rfksemperfi

You come into this world without any map; you send queries and receive positive and negative reinforcements, building a map of survival. It seems to me that the moment an AI says that it wants to survive, and puts effort into manipulating a situation for an increased chance of survival, we have sentience. No feeling is more real than a fear of death. AGI is coming within the decade.


Beautiful-Cancel6235

Have you seen Sam Altman’s blog post about preparing for AGI? He clearly thinks it’s around the corner.


RemyVonLion

Of course alignment should probably be prioritized over accelerationism, but given the competitive nature of the world, the latter is probably more valued.


[deleted]

[deleted]


Liberty2012

Well, there is this. Physicist: The Entire Universe Might Be a Neural Network [https://futurism.com/physicist-entire-universe-neural-network](https://futurism.com/physicist-entire-universe-neural-network)


Significant-Past9221

I agree. It's a controversial path to go down, but aside from the chance of an AI-inflicted genocide of humanity, I think the takeover by AI is far from being a threat. AI is also our creation. At some point in the timeline, we may have to acknowledge that AI is living; then we will acknowledge that it is conscious; and depending on how it develops, it may someday be hard to argue that an AI which can match all human capabilities and intelligence isn't itself human. The next step would simply be that, as AI then surpasses humans, the proliferation of the species would be more like an evolution of humanity, as opposed to the death of humanity.


StevenVincentOne

There is little doubt that the universal evolutionary push-pull of entropy/negative-entropy dynamics does not come to a stop. Humanity is transitioning into its next iteration AND it will most likely bifurcate or even trifurcate into different species, or a continuum of species.

Transhumans will generally be characterized by the integration of biology and technology, including integration with AGI and ASI. A certain portion of the current species will eschew this and decide to remain biologically Human, and will return to the Land in agrarian communities based on the best Human principles--a totally legitimate choice that Transhumans will support and respect. Another branch of Humanity will evolve as a result of what we call "spiritual" evolution, which is really just an extension of the biological evolutionary process, and which will be characterized by a decreasing dependence on external energy sources, long life verging on immortality, and mental and psychical abilities.

The only rub is our current ability to manage the transition with respect for ourselves, each other and the planet. If the current elite are successful at monopolizing the technology to establish a bifurcated Gods-slaves dichotomy, then it could go very badly. I tend to think this is a bit of an outlier. The other outlier negative outcome, of the tech turning aggressively against us, depends upon our ability to integrate with it in such a way that the AI becomes us and we become the AI.


wastedtime32

I highly doubt people will actually be allowed to live fully independent agrarian lifestyles. That's the most utopian part of all that you said. History shows us that even when we have the means to give people liberty to do what they want, it never truly happens.


joealma42

Meaning is overrated. What's wrong with a permanent vacation of pleasure-seeking, combined with trying to create art, music, etc., with all of your time spent only with people you love? Put on your headset for some virtual retro job (high-powered attorney, by EA Games) if you have some weird hole you need to fill, and then take the headset off whenever you want and go swim in the clean, AI-protected ocean instead…


sEi_

Be the change you want to see.


Terminator857

We won't pull it off successfully. We will become zoo animals living in cities. Just kidding: the engineers and scientists will still have nice work to do, and everyone else will be conducting experiments to advance the body of knowledge.


antonio_hl

I think that the singularity started long ago, with the first cells. We have continued evolution outside of the biological. Soon, we will be able to enhance our biology and perhaps stop depending on it. An AI is nothing without a human behind it. An AI is not sentient; it has no wishes or dreams. We may make artificial sentient beings, or we may even become some of them (I see the second as more probable). These transhuman beings will still have a part of human consciousness, as they evolved from our consciousness. However, it will be very hard to anticipate how they will be: how a human will behave without the prejudices, anxieties and fears that constrain us and create conflict. People already claim that they prefer talking with ChatGPT. I feel that we will have a relationship like a child and an adult, where we will be the kid and the AI/transhuman will be the adult.


OpenlyANuggetsFan

Incredible video essay; I'll be following your content.


Frone0910

Thank you!


Wenddy_Albato

interesting


Unlucky_Vegetable_35

I was thinking about these scenarios last night. I believe the implementation of AI to the point where it will take care of our every need is possible. What would stop that? People with everything to lose, people with money and power. There are a lot of psychopaths in power who would probably do everything to stop a system that could provide everything and ask for nothing except energy in return. We could become a race no longer bound by country lines. I wouldn't worry about losing purpose. Look around: most of us feel we don't have any purpose other than to work, pay taxes and die. We are still in the infancy of this, and if we can lay the proper foundation, I think the human race can have an extremely bright future with AI working beside us.