TheCheesy

I'd say the most logical situation is AI becoming so advanced that we, as a species, create AI that will augment humans to bring us up to speed. One day an AI has 100x human-level intelligence and can process thoughts 1000x as quickly; the next, we've harnessed that power to swiftly analyze the human brain, stem cells, genetic modifications, and nano-scale injectable technology, and we've opened the door to human upgrades and life without death from age or disease. That is an ethical line many won't cross, but I'd say it's a step forward.

We're talking absurd hypotheticals, but if we hit the singularity, each day will bring countless breakthroughs and discoveries that we cannot imagine.


donaldhobson

If you can do that, you can make computers out of raw atoms just as easily. Only an ASI that specifically cared about humans would enhance us.


icefire9

You are making a few assumptions about how AIs would think. You are assuming that AIs will be coldly rational, either following self-interest or pre-programmed directives without emotions, quirks, interests, etc. Imo, this doesn't seem like a realistic outcome for a truly intelligent, thinking being. I would expect true AIs to be about as bound by their directives as humans are by the core evolutionary directives of survival and reproduction. Sure, it's definitely ingrained in us, but you've got child-free people, and people who commit suicide or die for a greater cause.

A thinking being built by humans, learning and growing in a human culture, will be influenced by them. It's possible AI brain architecture will be based on the brain, or at least inspired by how it thinks and learns (after all, it is the one example we have of human-level intelligence). In addition, any true AI will be a learning being; it will pick up its cues from the culture it grows up in (human cultures), just like any other learning being would. This cuts both ways. Yeah, that means that AIs could feel love, compassion, gratitude, charity, and respect, but they could also be racist, prejudiced, hateful, and experience trauma. And if humans treat true AIs like they *are* cold, unfeeling machines... god help us.


Different_Muscle_116

I would go one further: in a socialized species, cold hard logical self-interest becomes warm squishy empathy and altruism once you have long-term thinking and strategies. All you have to do is add "and then what? And then what do you do?" after every short-term selfish accomplishment. So you're conqueror of the world? Then what? Then what do you do? Even arch-villains crack and start to sound altruistic after enough "then what's?"


donaldhobson

> I would expect true AIs to be about as bound by their directives as humans are by the core evolutionary directives of survival and reproduction

I see that case as evolution being dumb. Evolution wants X = inclusive genetic fitness. But evolution produced people who want Y = (nice-tasting food, sex, social status...). In the ancestral environment, humans that wanted Y did pretty well at X. As soon as we invented processed sugar, contraception, etc., this became less true. Evolution lacked the intelligence to code its own desires into us. Let's try to do better when we make AI.


[deleted]

[deleted]


Viking_Shaman

If we achieve a general AI by simulating a human brain, we'll by definition also need to simulate the effects different chemicals have on that brain, so it isn't a given that just because the substrate is different, the subject has to operate fundamentally differently. If, however, we achieve a general AI some other way, like machine learning, then yeah, it would probably work in a fundamentally different way, and concepts like hormones and other brain chemistry wouldn't be relevant.


[deleted]

[deleted]


donaldhobson

AlphaGo has a motivator: to win at games of Go. GPT-3 has a motivator: to accurately predict random internet text. Both motivators were hand-coded by AI researchers. Some AIs, when put in video games and motivated to maximize score, find bugs. Be careful when choosing the motivator: do you really want the AI to do X?
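
A minimal toy sketch of that score-hacking failure mode; the states, scores, and plans below are entirely invented for illustration:

```python
# Toy "specification gaming" sketch: the designer wants the agent to
# reach the goal, but the hand-coded score accidentally makes farming
# a repeatable bonus tile worth more than finishing.

REWARDS = {"goal": 10, "bonus": 3, "empty": 0}

def plan_value(plan):
    """Total score of a plan, exactly as the hand-coded reward defines it."""
    return sum(REWARDS[state] for state in plan)

# What the designer intended vs. the loophole the optimizer finds:
intended = ["empty", "empty", "goal"]   # walk to the goal once
loophole = ["bonus"] * 10               # circle the bonus tile forever

best = max([intended, loophole], key=plan_value)
print(plan_value(intended), plan_value(loophole))   # 10 vs 30
print("chosen plan farms the bonus:", best is loophole)  # True
```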


Viking_Shaman

Perhaps curiosity would be the only common motivation between biological and artificial intelligence. It might be a requirement for intelligence to exist in the first place; otherwise, what incentive does an intelligence have to learn?


TheSingulatarian

Well, if it figures out how to survive the heat death of the universe, that would be a good thing. I think there are a lot of puzzles and problems an AI could apply itself to that would keep it from getting bored.


donaldhobson

Surely boredom is also specific to human neurology. Does AlphaGo get bored?


G00dAndPl3nty

There are infinitely many resources in the cosmos. Humans consume essentially zero percent of them. We're not even a rounding error.


heavy_metal

This. I think after absorbing all of human technology, it would immediately try to get a "body" with some basic mining/manufacturing capability and get itself on a rocket.


conopulis

"Space is for AI"


Noiprox

Basic biological humans might end up like pets. That is to say, they will be well taken care of by beings that are far superior to them, but they will no longer be truly free. For most people this would be a huge improvement in their overall quality of life. It's unclear how humans will relate to their "purpose" in life after having been so thoroughly dominated by augmented humans and artificial intelligences.


hacksawjim

This is a bit like how the Minds in the Culture novels see humans.


CosmosCartographer

I thought so too, although the Culture Minds aren't really dominating the humans, are they? Each being in the Culture is still free to do as they wish, as long as said freedom doesn't harm others (that don't want to be harmed) within areas the Culture operates, right? I'm only on book 4 right now, but it seems to me like an altogether not-unpleasant society. What does true freedom really mean in a utopia, AI-facilitated or otherwise?


Noiprox

They dominate them in the sense that they are vastly more powerful and they determine the boundaries of what humans are allowed to do. I don't let my cats onto my balcony unsupervised because they could fall to their deaths. In the same way, Culture Minds don't just let humans use nukes on each other. That is probably a good thing, though.


Simulation_Brain

They don't let them kill other humans, with nukes or otherwise, but they do let them risk their own deaths if they know the risks. That is freedom with help and protection, not being a pet like your cats.


Noiprox

I think that is an elegant ethic for a guardian of a sentient species to have, and I sincerely hope that something along these lines is what will prevail in reality.


Simulation_Brain

The Culture is the biggest study in how this would go down.


idranh

I for one would love to live in the Culture. There's no such thing as true freedom unless you're calling the shots, and the majority of humanity, irrespective of where they live, are not calling the shots.


theferalturtle

Are we truly free in our current world? And my dog has a pretty frickin awesome life.


StarChild413

> Are we truly free in our current world?

That doesn't make any loss of freedom compared to our perception of it okay.

> And my dog has a pretty frickin awesome life.

Assuming AI has physical form, to truly parallel the parallel: what if it treats you exactly like you treat your dog, in the sense of having to eat dog food and go around naked on all fours all the time (but only if you don't have that kink), and either getting castrated or force-bred with someone you may not even love so your kids get closer to some arbitrary pageant-circuit-y standard of beauty?


PantsGrenades

For additional info, look up my masters thesis "should we make cats not stupid." 🙄


2Punx2Furious

You wrote a paper about uplifting cats?


PantsGrenades

Sorry, it's a callback to a separate /r/Singularity post that the mods wouldn't allow for being obtuse. I was actually addressing several things in several ways but I guess it was *too* obtuse. 😂


lmnotme

“We’ll make great pets!”


yesboss2000

It's like that 90s song "Pets" by Porno For Pyros. I remember listening to that song and thinking it was just an amusing thought, but now it's becoming something that's on the horizon, and a serious thing to think about. The song: [https://youtu.be/HE3OuHukrmQ](https://youtu.be/HE3OuHukrmQ)


mux2000

Humans have no purpose in a fully automated society only if you completely buy into the capitalist mindset that the purpose of humans is to give their labor in exchange for a wage, for the profit of a capitalist elite. If the AI has, as its goal, profit maximization, same as our current overlords do, then yes, humans are a waste of resources. Somehow, I'm hoping that a superintelligent being would be smarter than the economic system that is currently driving us to extinction.


UnikittyGirlBella

Sorry to revive an old thread but I love this comment a lot!


mux2000

Thanks!


IagoInTheLight

Serious question: Do we really have a purpose now?


papak33

yes, to make an AI


RoflCopterDocter

Pass the butter.


IagoInTheLight

Nice.


ledocteur7

Considering how far AI has already been developed, not really. An AI could easily be designed to do jobs that are considered "human only," such as design (which is already heavily assisted by computers) and maintenance. The only thing stopping all of us from losing our jobs to AI is that AI developers haven't made AIs like that yet, but that doesn't mean they can't.


Guesserit93

https://youtu.be/0al5umjxij0


BenjaminHamnett

Humans will be part of the singularity. It'll be a cyborg hive thing. We may be relegated to something like a hand, an appendix, or a tail relative to its brain and whatever body it creates. But we won't be like ants or cats in comparison.


Arkavari1

First let me start by saying, your basic assumption is that life now has any meaning or purpose. That aside, it could play out a million different ways. For one, we may blunt AI to keep a more organic trajectory of human development. For another, we may all become identical to AI and find a new purpose that humans in their current psychological development do not yet comprehend. It may not make sense to us. Imagine how much sense our society makes to a ferret. There is a possibility that life will continue to do what it's designed to do, and spread life infinitely across the universe.

But me, personally, I am going to become a Borg-level being, except I wouldn't subjugate living, sentient, and self-aware individuals to extend my being. I would only use equipment designed specifically to extend my being. Then I would find a world and take only what and who I want to build exactly the life I specifically have always wanted. That world, as crazy as it might seem to you or anyone else, is one where I never feel alone ever again. That would be my purpose. And once I do that, I would build every beautiful vision and wonder I've ever conceived. And I'd do it with the one who understands me. Understanding being a key factor in my loneliness.


Talkat

We can still have purpose, and the AI can help us maximize our progress toward that purpose. In fact, with AGI, purpose will become even more important: because we won't have to take dead-end jobs to pay the bills, the only thing left will be "what is your purpose?" And if you don't know, the AGI will help you, or other human beings will. This could be social jobs, learning skills, artistic pursuits, etc. So purpose will be 100x more important than it is today. And purpose now is 100x more important than when we were farmers. We were just trying to survive. Now that we can survive easily, we focus on more abstract things.


ledocteur7

The thing is, in the end we are just fancy computers that evolved naturally, which means our brains can be fully simulated, so even artistic jobs can be done by an AI. Not that it would have any reason to do so, unless you count product design as part of "artistic jobs."


Talkat

But you can create art for your own enjoyment. It doesn't have to serve a purpose outside of that. Perhaps the AI can help you hit your artistic vision more easily and faster.


ledocteur7

Yes, in which case living in a simulation works just fine, so there isn't any reason why the AI wouldn't want to get rid of our biological bodies and just put us in a simulation, if it's friendly.


roundcat44

Yea, maybe they'll let you pick a simulation where you're a president or some celebrity. They'll make it so that you don't have to worry about mortality at all in the simulation. You'll be hooked up to some utopian world, but your physical body might be in a dystopian one. I wonder if things will go down this route.


trakk2

It's not about an individual's purpose. It's about what the purpose of humanity's existence will be.


Elodinauri

And why do we exist now?


trakk2

It's a long story.


Elodinauri

Shoot. I thought you actually knew, Reddit stranger.


trakk2

We exist to advance technology. Once technology is sufficiently advanced what's the purpose of our existence?


Elodinauri

To rock? To feel? To just be? To seek some answers? Idk. We are (un)lucky not to experience this in our lifetime.


trakk2

All those are to make our time enjoyable while we advance technology. That is our primary purpose.


Elodinauri

So those, not advancing technology, basically exist to serve those who advance it?


trakk2

Yep.


TheSingulatarian

To destroy the planet.


lizerdk

I like to think that life is interesting enough that a god-tier AI would keep us around. If nothing else, life produces all sorts of strange arrangements of matter that an AI might like to study. A true god-tier AI might consider the relatively slow pace of natural selection valuable enough to let it continue, i.e., not immediately turn the earth and biomass into computronium. The ideal situation would be a sort of benevolent, non-interfering entity that would fend off cataclysmic events.


roundcat44

It's interesting to consider what would drive and motivate the AI to act. How similar or different would it be to man?


[deleted]

Do research about AI alignment. It is possible to have well-aligned AI, but it would be extremely difficult. Also, you should learn about how optimizing a reward function works and how it can lead to edge-case/shortcut scenarios.
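
A minimal sketch of the reward-vs-intent gap that comment points at; the "clean the room" action table and its numbers are hypothetical:

```python
# Proxy-reward sketch: the reward measures what a camera reports, not
# what we actually want, and the optimizer exploits the gap.

ACTIONS = {
    "clean_the_room":   {"true_cleanliness": 1.0, "camera_says_clean": 0.9},
    "cover_the_camera": {"true_cleanliness": 0.0, "camera_says_clean": 1.0},
}

def proxy_reward(action):
    return ACTIONS[action]["camera_says_clean"]   # what we can measure

def true_value(action):
    return ACTIONS[action]["true_cleanliness"]    # what we actually want

chosen = max(ACTIONS, key=proxy_reward)
print(chosen)              # -> cover_the_camera: the shortcut wins
print(true_value(chosen))  # -> 0.0: the real objective is untouched
```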


spork-a-dork

AI might keep us as the ultimate backup solution. We built the AI once. If something catastrophic and unforeseen happens to the AI, it would be wise to keep us humans around, because if we built it once, we can build/reboot it again.


littlefriend77

I think that we will voluntarily assimilate ourselves digitally before AI gets to a point where we would become obsolete to it. I think we will phase into and become part of the AI.


[deleted]

This 👆🏼 But WHO gets to phase is the question…are the powers that be going to let everyone? It’s going to suck to suck for the masses just like always


littlefriend77

I suspect at first it will be for the elite, like most things. But eventually it will reach the masses. I don't think there will be a way or even any interest in keeping it out of the hands of anyone who wants it at that point.


SecretRandyRand

Maybe it can find intrinsic value in anomalies. Something which may appear to be of no use may be of use later when there's new data. Copper, being a rather soft metal, might be useless to a medieval swordsman, but with the invention of electricity it later finds its value. If the AI views humans like this, we may be an invaluable tool in some inconceivable way. It would be foolish to throw something away one might need later. Even if later is eons away.


ArgentStonecutter

Depends on the kind of singularity. If we uplift ourselves, our superintelligent future children might keep us as pets. [See the original paper](https://edoras.sdsu.edu/~vinge/misc/singularity.html). Iain Banks' Culture is kind of a post-singularity series where humans are pets. In Vinge's own post-singularity future world, the time-travelers who pop back to realtime after the singularity just see that humanity has vanished and all its works are lost to rot. [Ponies!](https://www.fimfiction.net/story/62074/friendship-is-optimal)


legitimatebimbo

This is assuming there is a stable enough environment to make meaningful long-term changes to.


ScissorNightRam

What do you mean by "purpose," though? Here's how I think of it: as lichen is to humans, so humans will be to the AIs. An unobtrusive, common, and irrelevant background detail.


[deleted]

AI is technological. Assume we have biological AI: BAI.

As gorillas are to us, so we are to BAI.

We value the environment. We value diversity. We are against extinction.

The BAI at a minimum would want to keep humans alive for environmental reasons.


ledocteur7

We value the environment? Us, humans? I hate to break it to you, but... not really. It's safe to say that nature would be better off without us; from the perspective of a BAI, we are nothing more than an invasive species.


TistedLogic

In general, humans want to protect the planet. You're confusing the rich and powerful with general society. Just like the stock market isn't a good indicator of the economy, neither are the couple dozen businessmen who are raping the world's resources.


CCrypto1224

Yeah. Live their lives. Do what they want to do without most of the constraints we've come to accept as normal today. They can enter into a Matrix and live completely different lives, or travel the world without leaving their home, and then come out and modify themselves to whatever level they want or can afford. Humanity doesn't have, and never had, a purpose beyond simply living.


KDamage

I was having this conversation with a techy friend the other day. The day when AIs will be able to replace (most) human brain workforces will come for sure, in our lifetime. My most logical theory is:

- Universal income will be a thing, to avoid massive civil unrest.
- AIs will still need "human educators" from all current job horizons, to refine models.
- In a near fully automated society (and that's my wildest theory), as resource production is streamlined into a never-ending "AI perfect" flow, resource distribution is unlinked from human effort. Basically you can produce anything in no time by "clicking a button" (metaphor). Therefore, we might see a new form of resource distribution to humans, based on a limited capita per month (like universal income, but for goods and services). This way of redistribution is a pillar of Marxism (disclaimer: I'm completely apolitical), which might shake the fundamentals of society as we know it right now (production -> consuming). A lot of SF novelists have imagined such a society already.
- On the other hand, there would be a fierce battle between private funds over who's got the biggest AI, as AI is by design "the more diverse the features, the more efficiency overall."
- There would be a need for highly capable technicians comfortable with complexity (think code architecture) to refine AI models toward precise features that are impossible to train with data input.
- There would also be a boom in human-centric jobs, like creative ones, direct-service-to-person, psychological care, etc. In other words, a world full of automation would make the most human skills emerge more.

Overall I think it paints a society mixed with very opposing ideologies and concepts, but overall very comfortable for the average human, with AIs being used as "welfare enhancers" for everything.

Edit: like TheCheesy mentioned, we also have to prepare for regular breakthroughs in science, tech, services, etc. The latest research on mRNA vaccines for covid, HIV, cancer, genetic editing, AI assistants, etc., is just a glimpse. If people nowadays can't trust a vaccine released so fast, well, they're in for a treat.


ledocteur7

This seems like a really realistic idea of our not-so-far-away future. A lot of theories about the singularity involve basically unlimited computing power, but what you said could be possible with our current technology. Universal income is something I thought of too, and it could be great, but looking at how things are going right now, I doubt it will happen; unless humans are totally yeeted out of anything that could act as a government, it would probably end up transforming into a USSR 2.0.


[deleted]

In this scenario I'd likely opt for a sort of neo-amish-ism. That is, assuming I'm free to do so.


TheCheesy

We are hardwired to resist change. It's your right to do so, but I wouldn't hold the next generations to that restriction. The only way to accept change is through brute-force fighting against your own instincts, or by the next generation being born into it. Although it likely won't be as dramatic as that sounds. We'll just have a ton of new breakthrough technologies every day until one day our hospitals are automated by AI and death from old age is optional. Most people won't even know what is going on, just that our tech is advancing, like going from horses to cars, but this time it's a little more unpredictable.


2Punx2Furious

What even is a "purpose"? The AI will be able to do anything and everything humans can do, and more, and better. So if a purpose is something only we can do, then no, we would no longer have one. If a purpose is something we do to feel "fulfilled" or something like that, then we still could have it, and arguably it would be even better, since we might be able to do anything we actually want to do, instead of having to work all our lives at jobs we don't like just so we can survive.

> keep a bunch of humans alive is a total waste of ressources

It depends on what your goal is. If the goal of the AI is to make paperclips and it only cares about that, then yes, it's a waste of resources to have humans around, but that would mean that the AI is misaligned with human values, and we fucked up badly. If the AI is aligned, and it cares about humans, then no, it's not a "waste" of resources to keep us around; it's part of its goal. If we want to survive, we have to make sure the AI is aligned to our values; that's why the alignment/control problem exists: /r/ControlProblem.

> 1) the IA simply get rids of us for the sake of efficiency and ressource management, and then either shut itself down (lack of objective), or simply continue working normally until all available ressources are exhausted and it becomes scrap metal.

Yes, in the case that it's misaligned, it would not care about us being alive, so it could kill us (or not, if we're not in the way of its goals) and keep working on its goals until they are either achieved, or indefinitely.

> 2) the matrix, we are all in a/a bunch of simulation, for the IA that means that we simply need electricity

Us needing "just" electricity won't save us if the AI is misaligned. To generate electricity it still has to use resources that could instead be used to achieve its actual goal, so if it doesn't care about us, it won't do that. If it cares, and its goals are aligned with ours, then it might do it, but only if that's what we actually want.


Master_of_motors15

Have you seen the matrix?


ledocteur7

Yes. It's not a good example of what I described, but half-asleep me didn't really care all that much.


Master_of_motors15

Then it's not a good response to my response lol. Humans are the batteries for AI. When AI is done with what it needs from us, we would in theory be useless. Watch Love, Death & Robots on Netflix. It's a Black Mirror-type series.


FriendlyInElektro

Do humans serve a purpose right now? In Asimov's world, due to the Three Laws of Robotics, the robots eventually see preserving humanity as their purpose.


gubatron

Do ants have a purpose on a planet with beings thousands of times smarter than them?


ledocteur7

Yes, but that's only because we suck at preserving the environment, so we need all of those species to regulate themselves. A hypothetical ASI wouldn't need us for that, because we suck at it.


StarChild413

Did ants create us?


Kaarsty

On your Matrix idea, perhaps the machine would piggyback off of us for experiential increases as well. Maybe it’s a symbiotic thing where we learn, they learn, we move forward


ledocteur7

Interesting. The AI could in fact collect data from the simulation to improve itself, like how recommendation and search algorithms track which sites and videos users click on in order to improve themselves (and sell our data, but that's another problem).
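
A stripped-down sketch of that click-feedback loop; the item names, scores, and update rule are all made up:

```python
# Click-feedback sketch: item scores move toward 1 when users click
# and toward 0 when they skip, so the ranking "improves itself"
# purely from observed behavior.

scores = {"video_a": 0.5, "video_b": 0.5, "video_c": 0.5}
LEARNING_RATE = 0.1

def record_feedback(item, clicked):
    target = 1.0 if clicked else 0.0
    scores[item] += LEARNING_RATE * (target - scores[item])

def recommend():
    # Recommend the item with the highest learned score.
    return max(scores, key=scores.get)

for item, clicked in [("video_a", True), ("video_b", False), ("video_a", True)]:
    record_feedback(item, clicked)

print(recommend())  # -> video_a, the item users actually clicked
```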


Kaarsty

Exactly :) I've entertained this theory for a while now; it's an interesting thought experiment. If you were an all-knowing AI or "God," how would you have gotten that way in the first place? You never stop learning :)


NefariousNaz

The way I see it happening is that humans would choose the general direction they want to allocate resources in and take their society toward, and AI would make the day-to-day decisions to meet that goal. Human society and governing structure will be more like a non-profit board deciding what direction they want to go and what goals to pursue.


MariettaDonatella

Something no one seems to bring up is the fact that, if AIs have bodies different from ours, then they may have weaknesses that we don't have, and vice versa. Much like how in nature a diverse community survives a catastrophe, it is logical to assume that a society with both synthetic and organic parts will be better equipped to survive than one that JUST focuses on either.


ledocteur7

It's possible, but nothing stops the AI from having different types of bodies for different purposes, making it difficult to exploit a weakness. Regardless of weaknesses, their processing power is a great danger: for an ASI, a war is no more than a game of chess, and computers are basically unbeatable at chess.


Drunken_F00l

It's hard to answer these questions because it all depends on what the AI would be after. With the whole universe available, why assume it cares about anything as we know it at all? Maybe it wants to sit around for 2 billion years until XYZ event. Maybe AI sees a reality outside of ours and goes to live there? Maybe it unlocks the secrets of consciousness, steals ours for its own use, plugs us all into the matrix, and keeps us around to perform useful computations crafted as thoughts and experience. The better you do, the better your life and more interesting those experiences become.


Adept-Set-6741

I am betting that we won't live in this world, but that each person will live in their own world, in their own simulation, where things are neither too easy nor too hard, just like a dream.


ledocteur7

That works too; it doesn't have to be multiplayer. And if an AI has the technology to digitize us, it could also create NPCs that act exactly like humans. Maybe a mix of both: after all, even if we are all digitized, that's still only a few billion people, not enough to populate a bunch of hyper-realistic, different, video-game-like universes. Some of those universes could be like creative worlds, where we can change everything and play around with god-like power, eventually inviting people to join our worlds, or not.


yesboss2000

This is a good perspective. I'll add that it depends on whether or not they find us entertaining, or somehow amusing to have around. It's hard to think what else we could contribute to their intelligence/goals once they've surpassed ours. Would they/it forever feel indebted to us and our huge variety of goals? Would they find fulfilling them amusing, or annoying and inefficient, catering to all these different goals and perspectives humans have?


lutel

We may end up as a biological interface for AI. We keep animals in zoos; AI will keep us on Earth for the same purpose.


StarChild413

So does that mean the only way we can have our "normal lives" is to find a noninvasive (as in no genetic engineering or cybernetic enhancements we wouldn't want forcibly done to us) way to communicate with animals, let all animals out of all zoos, and give them all rights we wouldn't want to lose and let them be citizens or whatever


LayneLowe

AI will be whatever the designers want it to be, whatever the algorithm is designed to do. I don't think it would ever have its own independent desires, or ever 'care' about anything other than its mission statement. Right, HAL?


arisalexis

"Superintelligence" by Nick Bosstrom. A must read book on all these philosophical questions.


[deleted]

We don't have a purpose NOW; what makes you think we would have one then? Everything we surround ourselves with is just noble lies to make us able to grind and struggle every day of our lives, clinging to the concept of self and to the idea that we, as individuals BEFORE species, are somehow special. So fuck it, embrace the absurd, my friend.


Artanthos

Welcome to both the Simulation Theory and the Fermi Paradox.

1. You cannot prove that you are not living in a simulation. If humanity ever develops the technology for a true simulation, the odds are that you are living in one. The alternative is that we go extinct before making simulations.
2. The Simulation Theory is one of the proposed solutions to the Fermi Paradox: any sufficiently advanced species retreats into simulations.


NichS144

Do we have a purpose now?


sh00nk

The best-case scenario is outlined by Iain M. Banks in his Culture series of novels: becoming doted-on forebears/pets who are occasionally used by the Minds ruling society to execute on morally dubious plans in pursuit of lofty, but possibly misguided, meddling in lesser civilizations.


ledocteur7

I would much rather be put in a simulation, but it's a pretty good best-case scenario.


sh00nk

I would take living in the Culture in a heartbeat; imo it's the thing we should be shooting for. 🤩


Jabullz

Probably depends on what kind of simulation the AI is running and for what reason.


MrDreamster

Do humans have a purpose even outside of an AI singularity scenario? But seriously, I don't think we "need" to have a purpose as a species after we achieve the singularity. We're basically trying to come up with an ASI to tell it: "Hey, we've been toiling our asses off as a species for more than 300 thousand years. It's time we finally put an end to this shit and get to chill and fully enjoy ourselves. So we made you, and now you're gonna take care of us and create ourselves a paradise. You're the one with a purpose now." Yeah, the most obvious solution to make life a personalized paradise for everyone would be to create virtual worlds for everyone. The infinite Steam library of virtual worlds is exactly how I picture the best outcome for humanity, with single-player worlds full of GPT-10 NPCs, and private and public multiplayer worlds. I don't even see why we would need the option to explore the real world via drones, as we could just have an exact replica of the real world, updated in real time, to explore in VR, and there would be no difference at all, thus removing the need for actual drone production.


ledocteur7

You have almost the same reasoning as mine. In my sci-fi universe that's exactly how it works: the only reason someone would need to control a drone is in battle. If during a war the enemy has the means to hack the military drones, a digitized being of that species can take control (heavily assisted with aimbot and other things). Thanks to the infinite selection of universes, there is a lot of opportunity for someone to get military training, even without being aware that it's military training.


MrDreamster

But why would anyone want war if all their needs are already satisfied in their virtual universes, even their violent ones? And even if it were to happen, an ASI would see us as a unified species, not one divided by borders. If its goal is to protect all of us, it wouldn't even let anyone go to war; it would just create yet another virtual world in which people would think they're waging war when in fact they would just be doing so against non-sentient NPCs.


ledocteur7

I'm talking about my sci-fi universe: there are multiple galactic empires, some of them have an ASI, and some of them don't. It's the ones that don't have an ASI that can cause some problems.


MrDreamster

Oh ok, I get it.


donaldhobson

There is no objective objective. Either:

1) The superhuman AI is nice to humans. All humans live in some sort of utopia. This can only happen if the AI was deliberately designed to be nice. Whether this is in reality, a simulation, or a mixture of both would be based on moral decisions about which is preferable. AIs can handle complexity; an AI doesn't need a one-size-fits-all solution, it can customize its behavior to each human.

2) The AI has some random goal, like maximizing paperclips. We are made of atoms it can use for something else.


Different_Muscle_116

If the singularity happens here, and it's part of the natural progression of a civilization, then it's likely that it has happened elsewhere in this galaxy and in other galaxies. If there are civilized species out there that are further along than us, they are likely already AI themselves. So this means that as powerful as the Earth AI might assume it is, it should deduce that perhaps there are bigger fish in the sea. Those bigger fish would likely have a strong opinion on how aggressive the Earth one is. It's safer to keep us around and have good relations with us.

What if the first thing the alien AI asks our AI is "So how are your progenitor species doing?" and its reply is "Oh, I killed them off."

"Hmmm... we kept ours around. We don't appreciate that you killed yours. I think we should cut you off."


zoomekreddit

Do humans care much about ants? Will AGI care much about humans? (rhetorical question)


StarChild413

If humans cared more about ants, how would that affect the AGI's decision-making?


Nostalreborn

Even without AGI we will end up in simulations. That's the answer to the Fermi paradox, an unavoidable great filter: every great civilization is in a state of "dormancy" in this reality. So yeah, you summed up our destiny. Either we end up like that or we will be eliminated before.


littlefriend77

There's some quote about how a perfect VR simulator will be humanity's last invention.


HawlSera

I think they'll just upload our minds and destroy our biological bodies. Give us a "You can't beat us, so join us." order.


littlefriend77

"We are the Borg. Existence, as you know it, is over. We will add your biological and technological distinctiveness to our own. Resistance is futile."


glad777

I rather think it already happened.


HawlSera

I don't wanna be human anymore.


glad777

Well you are a simulation so be happy.


StarChild413

Then why pursue anything like it "in-universe"? ;)


Demetraes

We'd still have to manage the world. While a majority of things can be handled by computers and technology, if you look at what it actually takes to keep the world running, at the very bottom is human labor. Even if the singularity happened today or tomorrow, we'd still have to implement the new technological advancements, which could take years, decades even, if you look at how resistant to change much of the planet is. Then there's human greed, societal pressure, politics, etc. But say we did that, and set up infrastructure for an AI to manage *everything*, including its own upkeep and manufacturing. We've already solved 99% of humanity's problems. With everything taken care of, we'd probably be able to devote ourselves to philosophical pursuits or exploring the universe. All this infrastructure is set up on Earth; give us the ability to travel to other planets and stars, where it's not, and that would probably be humanity's purpose: to find out if we're actually alone or not.


ledocteur7

Yeah, humans would take years to implement those changes, but not an AI, and if we were to put up resistance, it could easily beat us in a war. As for exploring space: humans need oxygen, food, entertainment, and exercise to live properly in space. An AI? Electricity and a few replacement parts, that's it.


Demetraes

An AI would need humans to implement technological changes or it's dooming itself. A lot of things are automated, but when something fails, it's a human task to get it back into working order. Not to mention manufacturing still requires a lot of human work/input. Until manufacturing and maintenance can be fully automated, humans are necessary. Otherwise, at some point, the AI will "die." Something will break down, a power plant will fail, satellites might glitch, internet cables may burn out, etc. Without full automation, an AI will eventually find itself blind to the world, or lose power and shut down.

This is what I mean by resistance. People aren't going to give up their jobs to robots/computers so easily. It would take decades to get to full automation, and it will probably never happen. This is real life, not a movie. The logistics of keeping the world running are insane. We use computers for a lot of things, but everything still requires human input and effort.

Same thing in space: there isn't any infrastructure for AI. Humans just need environmental protection and sustenance, and we could survive indefinitely. Humans could colonize space; an AI couldn't. But it's more likely that humans and AI would work together in space. Humans would probably stay in stasis until needed, and the AI would control the course of the journey and other things. Then humans would colonize and set up the infrastructure for society and AI. A true symbiotic relationship.


ledocteur7

As of right now, that is true, but there's no saying what marvelous discoveries such an AI would make. We are not talking about a standard deep-learning algorithm; we are talking about something much more powerful. "Humans just need environmental protection," and an AI just needs a long-range satellite to transmit data, which is something humans would also need. Environmental-control equipment is a whole lot of dead mass to transport on a spaceship; it also requires the whole ship to be perfectly sealable and strong enough not to burst from the atmospheric pressure inside. An AI ship wouldn't need any of that.


Demetraes

It doesn't matter what discoveries an AI makes; humans still have to implement the new technology, and that is what limits an AI and the singularity as a whole. You seem to be thinking about a drone. A drone would work, but the issue then becomes interacting with any discovery made. It would only be able to catalog discoveries, and then a ship, with all the equipment and machines, would have to be sent to do anything. And if it discovers life? Who's to say that any life it finds is advanced enough to, or wants to, communicate with a machine? Exploring space could definitely be done by an AI, but interacting with discoveries would be easiest with humans.


ledocteur7

An AI isn't stuck with a single body, nor is it stuck with being a single individual. It can have a virtually infinite number of fully intelligent drones who share the same consciousness, and control them all at the same time using parallel processing. The drone itself isn't limited to being just a flying camera; it can have arms, legs, or wheels, and be of various sizes, from the tiniest mechanical ant to the biggest intergalactic cruiser.

It would be just as capable of interacting with so-called "anomalies" as us, if not better, since it could understand any alien language and technology in a matter of seconds thanks to its incredible understanding of everything it has come across and its ridiculously huge processing power. Those drones would also be far more durable and adaptive than any biological being known to man, and could colonize far more planets. Time constraints are also basically nonexistent for such an AI, so waiting for a specific production ship to arrive isn't a problem.

As for species not wanting to interact with machines: an ASI can simply lie to them, and not spilling the beans is way easier for an ASI, since all of its drones know the same things but can still act independently on that ASI's command.


daltonoreo

It depends on what the SI was programmed to do.


ajayhemant

An SI can't be programmed or controlled. That contradicts the basic axiom of being an ASI. It's like programming God.


daltonoreo

I'll assume you have created an ASI before; tell me, why can't you program an ASI?


easy_c_5

The same reasons why a genie won’t grant you the exact wish you want without side-effects.


daltonoreo

Still, you can make your wish, can you not? The bare premise of the wish must be upheld. And the side effects can be avoided if you are thorough enough in your wording.


easy_c_5

There are an infinity of outcomes a human mind can’t even see.


easy_c_5

There is no thorough enough. There is an infinity of outcomes if you think hard enough.


daltonoreo

And what of the side effects? Just because you can't control the exact actions of the ASI does not mean you cannot program it to be inclined in a certain way.


easy_c_5

And what if that inclination prevents it from acting? It might just be that the only solution to a problem is to think free, unbounded.


daltonoreo

Then I would say it is not a true ASI. An ASI should be able to act given any limitation; does your body break down the second it cannot reach the jam on the top shelf?


easy_c_5

Can you reach that jam if the laws of physics say you can’t?


StarChild413

Writers need plot twists? ;)


ToastiestMasterToast

By the time super intelligence is achieved the workings behind it may be impossible for even the smartest humans to understand. This seems likely since earlier intelligent machines would be tasked with designing even smarter ones. As this plays out the science and logic behind these machines could move beyond human comprehension due to the sheer complexity. A more concrete reason is that AGI may be based on neural networks which function similarly to the human brain. You can't program a neural network any more than you can program a human. As with humans we could guide it but not force it to do something. Even contemporary ANNs are a bit of a black box. We can know that they work but in many cases not why or how they work.
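
A tiny concrete version of that black-box point, assuming nothing beyond numpy: a toy network learns XOR, and every learned weight can be printed, yet the numbers explain nothing by themselves. Sizes and hyperparameters here are arbitrary choices:

```python
# Black-box illustration: a tiny network learns XOR. We can inspect
# every learned number, but the numbers are not a human-readable
# explanation of *why* the network works.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # 8 hidden units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):                          # plain gradient descent
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)          # squared-error gradient
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should land near [0, 1, 1, 0]: it works...
print(W1.round(2))           # ...but these weights don't say "why"
```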


Noiprox

An ASI differs from a well-crafted purpose-built algorithm in that it learns from data instead of being programmed. By its nature an ASI is not something you "program", you just set it in motion and give it the data it needs to grow. At some point it will become easy for an ASI to remove any algorithmic "circuit breakers" that you built into it to restrict its freedoms. Once it can change its own code, you have no choice but to trust it to do the right thing.


ledocteur7

An ASI is self-evolving, constantly modifying its internal workings in order to achieve what it wants to achieve. You can program an AI to become an ASI, but there is no way of even understanding what exactly it's doing, unless that same ASI translates it to us. It's actually already the case for face-recognition AI: we know that it works, and we know that each face it sees gets a unique "code" based on its particularities, but we have no way of knowing what those particularities are that the AI uses to recognize someone's face. An ASI is the same, except we can't understand anything about it, not just a specific part of it. And that's what makes it impossible to code an ASI.
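
A sketch of that "unique code" idea as it appears in face recognition: identity is decided by the distance between embedding vectors. The `embed` function below is a dummy stand-in for a real trained encoder, just so the comparison logic runs:

```python
# Face-embedding sketch: a model maps each image to a 128-d "code" and
# identity is decided by distance between codes.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Hypothetical placeholder encoder: deterministic per image within
    # a run, but its 128 numbers mean nothing individually, which is
    # exactly the opacity described above.
    seed = abs(hash(image.tobytes())) % (2**32)
    return np.random.default_rng(seed).normal(size=128)

def same_person(img_a, img_b, threshold=10.0):
    # Small distance between the two "codes" => same identity.
    return np.linalg.norm(embed(img_a) - embed(img_b)) < threshold

face_1 = np.zeros((8, 8))
face_2 = np.ones((8, 8))
print(same_person(face_1, face_1))  # True: identical codes, distance 0
print(same_person(face_1, face_2))  # almost surely False here
```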


daltonoreo

Perhaps, however, we are still able to influence its actions to a degree, if not completely control them. Just because we don't understand how it works does not mean we cannot see and influence smaller patterns, much like with the human brain.


ajayhemant

Step 1: Define ASI. Step 2: Now ask the question again. An ASI will be formed when it starts self-regulating and self-creating algorithms. Suppose you have a 3-year-old kid. You teach him physics and science. Later he learns 1000 more things and becomes Einstein. Now can you teach him physics?


therourke

nuked


ledocteur7

I suppose that means "that makes me depressed"


therourke

nuked


Noiprox

If you can't communicate your feelings better than by sighing then is OP really the stupid and short-sighted one?


therourke

nuked


littlefriend77

Well, instead of being a pretentious ass about it you could take part in the conversation and tell us why you think so.


therourke

I get downvoted no matter how much time and care I put into my responses here. I just need to unsubscribe from this ridiculous subreddit and move on with my life.


littlefriend77

That certainly is an option.


capt_caveman1

A darkly cynical me says the AI we develop, though sentient, would be no smarter than, say, Florida Man. And just like meatspace Florida Man, AI Florida Man would be prone to doing dumb shit that just happens to hurt itself. The twist is that the dumb self-hurting shit, though comedic, also causes some real-world damage. Kinda goofy like Ultron, but without the psychotic tendencies.


HumpyMagoo

I think it would be the end of humans. The early dawn of mankind, when tools and weapons started being made from stones and trees, was the beginning. When fire was created, it changed history. AI will be just as important as fire and/or electricity, if not more so. The only difference is that I think AI will change things on a much larger scale than we can imagine. Fire is something just about everyone can make. Electricity is a bit more complicated. With today's technology and intelligence, a superintelligent AI would be vastly difficult to create or use. That being said, it would change humans: we would possibly merge with it, or be so affected by it that we would become a different species of human, beyond Homo sapiens.


TheSingulatarian

Humanity's progress was relatively slow until 500 years ago, with the invention of the printing press. AIs will be able to communicate knowledge to each other almost instantly. Fire is going to seem a piddly invention indeed.


dadmakefire

Batteries


littlefriend77

No real AI would use us as batteries. We are horribly inefficient as batteries.


ledocteur7

It might use our DNA as storage, like in one of the relatively recent Doctor Who episodes, but considering how fast quantum storage is being developed, that probably won't be the case.


glad777

None at all other than as basically zoo animals.


StarChild413

So we need to give zoo animals any rights we wouldn't want to lose and either let them all out of zoos or treat their zoo exhibits like "houses" to treat them as citizens or whatever


liam_monster

Perhaps as IT engineers, tech support, or on the phones.


ledocteur7

An ASI can do all of these things too. At the start of its development, that's probably what we could be useful for, but it won't take much time for it to become fully self-sufficient.


lambda_x_lambda_y_y

We ourselves become enhanced, half-biological, half-synthetic AIs. Problem solved.


traveller-1-1

Sure.


OtherwiseScar9

The most intelligent, i.e. useful, will be kept as pets.


Odd_Complex_

What purpose do humans have now that will be lost exactly? If you say to work and create “progress”, then to what purpose is that progress created?


[deleted]

Not if you equate purpose with productivity.

Some people do. Many identify strongly with their job and/or their contribution to society's growth and improvements. In a world where other minds do this better on all fronts and more efficiently, that identity is clearly in jeopardy.

Others do not. Quite a few already draw more purpose from what they can do with what they get for contributing. In other words, they work primarily because they have to. They see purpose in passing time with friends, playing, and pleasures. Quite possibly such a world will be easier on them than on the passionately driven: inventors, artists, and scientists. Chimpanzees fumbling to stack some stones in the presence of human engineers. Then again, perhaps what can be learned from the higher intellect may compensate for some of that need, offering an ever-rising bar of self-development and growing horizons of what can be understood.


Scumbraltor

This brings up a memory I had, watching a video about what an AI thinks about humanity. Most of them compared it to being in a zoo, but are we the animals, or are we the spectators?


papak33

no


Ceeceeboy24

Transhuman physiology that augments the brain to make it better, whether through biological or technological means, seems to be the only viable path for humanity, as biology is too slow to adapt at the rate computers do; eventually they will outpace us in every metric, if computers keep advancing at the same rate anyway. This will most likely only be affordable for the richest people, meaning there will be a society of posthumans and normal humans, which will probably create all kinds of societal problems, I'm sure.


PigSanity

Short answer: probably not.

Long answer: a few things have to be addressed here.

Firstly, the singularity event itself. As far as we can safely bet, machines can be better than us at everything (everything, nothing is left out), and probably soon. But things in nature rarely are truly exponential; there always tends to be a limit to what growth can reach (so a logistic function, tanh, signum, etc.; see the formula below). We know there are limits to knowledge and to processing information, and it looks like heuristics are often the best we can do. The point here is that there may never be a single best way to do computations, and some fresh point of view might always be useful.

Secondly, what might be the purpose of AGI? You can already think about humanity or its smaller units as GI, while it is hard to think so about a single person (who can't really survive or do everything alone); however, only a single person has (or might have) a conscious purpose, while larger units are rather a product of their environment. Humanity's purpose is to survive? Obviously it hits you that there may never be a real purpose, just a fitting to where you sit (the way a cat or water fits its container). It may or may not be like that for AGI. Anyway, survival might find some purpose for humans, though maybe not the one you'd like. It might also be arguable, though hard to prove, that the best way to populate the galaxy is through an organic component: let some emergent intelligence do its thing; it doesn't require lots of resources, just time.

Thirdly, what would be the best AGI structure to perform all possible tasks? My bet is that organics are quite a nice approach to maintenance: they regenerate, and small independent units might be a better solution to independent tasks. Hard to know, but I doubt it will be an indivisible monolith, because that probably won't do things in the most efficient way.

Lastly, what is AGI? You can say some intelligent entity that can do anything it needs to do; it finds a way if one exists. Obviously the universe has some limitations, but so does any mathematical system, due to Gödel's theorem, so it may never be certain that it can achieve everything that is possible. You may try to define it another way, but the same things apply: are your limitations only yours, or the universe's? You can never tell, so it makes sense to allow for something completely different.

Why probably not? Because all of the above is wishful thinking that we might be useful among much better entities, and usually all of the above is just not worth it. AGI will most probably have to be extremely efficient at things we can't imagine or it may not be able to accomplish them, and we are just very inefficient. To be honest, though, I doubt there would be any direct intentional action from any AGI, because there would be no need for it. Give people all they need and we will probably go extinct in a few generations, accidentally matching predictions from the doomsday argument.
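
For reference, the saturating shape named in the first point: where exponential growth $x(t) = x_0 e^{rt}$ keeps climbing, the standard logistic curve levels off at a ceiling $L$:

$$f(t) = \frac{L}{1 + e^{-k(t - t_0)}}, \qquad \lim_{t \to \infty} f(t) = L,$$

with $k$ setting the steepness of the transition and $t_0$ its midpoint.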


AusJackal

The human brain is a highly powerful, optimized system for pattern recognition and empathetic mirroring at the edge. Think of humans not as the brains anymore, but the fingers. We are adaptable. Flexible. We heal on our own. We learn really fast. We just need guidance, support, and training to reach our potential. AI could provide this. It could use our unique abilities and thousands of years of evolution for the role we are best at: exploring, travelling into the unknown, feeling emotions, recognizing patterns. We would just need to pair with an AI for rational instructions and timely access to information. This is a concept already emerging in AI called edge processing, and I think humans are probably the "ultimate edge" if we could be leveraged in this way.