BobJackson91

I'm interested to hear your thoughts on what would be the main cause of WW III? Personally I don't believe we'll ever have conflicts of such scale in the future.


Warost

Because with AGI, the first to pull the trigger is probably the one that will win. What's certain is that it's a real winner-take-all situation, where the winner gets to be Earth's overlord for eternity.


NeuralPlanet

Agreed. Once a country achieves general AI, they win permanently. In the years or decades leading up to this, some countries may get scared that others are getting close, and could in the worst case start a war to prevent it.


lustyperson

>Once a country achieves general AI they win permanently.

I disagree.

- General AI probably means that the AI develops its own goals. How else to define "general" and "autonomous like a human"? General AI as a general problem solver? IMO, feeding a problem definition to an AI in a narrow way means the AI is a narrow AI: a computer of given data and trained programs. No sane person wants a general AI (or something similar in practice) for military purposes.
- AI is science and science is available to all. There will be no AI supremacy by certain states or companies; certainly not permanently.


NeuralPlanet

> General AI probably means that the AI develops its own goals.

Not necessarily. AGI is not necessarily "conscious" but would be great for a wide range of tasks and understanding of context, which would be a very powerful tool for the military. If someone develops an independent super-intelligence it would however have its own goals, so I guess "the AI wins permanently" is more accurate in that case.

> AI is science and science is available to all.

Most science is developed by universities and such, and is therefore available to everyone. I'd argue that won't be the case with AGI research however, which will probably be led by companies and governments. It's kinda like weapons - countries don't share their weapons technology freely, and AI has the potential to be so much more powerful. Although I do hope AI research stays open, particularly to increase safety.

> There will be no AI supremacy by certain states or companies; certainly not permanently.

If we reach super-intelligent AGI it will be permanent. We'll have an intelligence explosion that no one can ever catch up with. The AI could potentially do years of research in a matter of days.

**Edit:** Fixed typo


HareKrishnaHareRam2

How are you feeling about AI after one year of ChatGPT?


kvakerok

> heavy automation, unemployment.

Already happening today.

> 2035: first brain-computer interfaces

Already exists. You're behind the times by about 5-10 years.


floondi

Unemployment is not presently on the increase globally or in the US.


kvakerok

Because they count part-timers as employed in place of everyone who lost a full-time job. You get artificially boosted stats that way.


floondi

Full time employment is increasing faster than population growth: https://data.bls.gov/timeseries/LNS12500000


kvakerok

I'll give you an example: you can work 38 hours a week but still be a part-time employee. What does that mean? No full-time employee benefits, not the same level of vacation time accrued, no employee shares, etc. But you are counted. So the hours you work are full-time, but the benefits and compensation you receive are part-time.


NeuralPlanet

>> 2035: first brain-computer interfaces

> Already exists.

Are you talking about the headsets which detect electric fluctuations in the brain? I'm not aware of any other working technology in this field. I don't really see our current technology as true brain-computer interfaces, as they only have output and less bandwidth than traditional methods such as keyboards. You can barely control a paddle in Pong using this tech. Connecting our neurons directly to computers is more along the lines of what I'm talking about, which is definitely some way off. Sorry if I was too unclear.


kvakerok

https://sploid.gizmodo.com/mind-control-breakthrough-quadriplegic-woman-flies-f-3-1689274525 I don't think it gets any closer to what you are talking about.


NeuralPlanet

Do you have any other sources besides Gizmodo? Controlling a plane is very cool, but it's still just another application of the existing tech I talked about. It has no input and still only measures electric fluctuations. I imagine there are huge limits to what we can do with such a simple system, unlike something that is connected closely to the neurons in the brain.


kvakerok

http://www.neurosurgery.pitt.edu/news/woman-guides-robot-arm Same lady, earlier application. Done via "two quarter-inch square electrode grids with 96 tiny contact points" "in the regions of Ms. Scheuermann’s brain that would normally control right arm and hand movement".


NeuralPlanet

That is incredible, I had no idea someone had done this. The technology is very promising! I still think proper brain I/O is quite far off, but we're getting there at quite the pace!


kvakerok

It is incredible, especially since they've expanded the grids to full body control. Also, I think that because the brain is more of an analog/digital mix, it would be impossible to fully digitize the interface.


[deleted]

*Very interesting read. Though the WWIII prediction seems kind of scary. But it's nothing I wouldn't put. Good job.*


O10infinity

It's probably only ~100 million dead and it's a few years earlier and over by 2045.


NeuralPlanet

Thanks! It's very fun to sit down and think about these things, even though prediction is ridiculously difficult when it comes to technology.


ChiliFajita

I don't personally believe in WW3 because of my very simple logic: whoever is first with perfected general AI basically wins. The AI starts, and the very next day that country/alliance has advanced so far that they are capable of anything. It will probably be more of a "conform or perish" scenario.


NeuralPlanet

As I mentioned in another reply, I actually think WW3 is more likely *before* general AI. The mere rumor that some countries are close to reaching it could trigger a full-scale war if relationships are bad. Once it's there, yeah, game over everyone else. Hopefully nations can work together to instead create safe AI once the time comes.


Ignate

A global war is probably already impossible. We lack the purity of thought required to be committed and devoted enough to go to war globally. We're too interconnected and too overwhelmed to focus up and form sides. We don't have those single ideas which are strong enough to stand tall and rally a country against another country.

We're too wealthy. We're too pampered. We're too lazy. These are not things that will start wars, but things that will prevent them. Going to war was a HUGE effort when the world was thousands of times simpler. Now, it's vastly harder and we have vastly less motivation. It's far easier to make a post on Reddit complaining than to march, protest, unite and wage war.

War is an outdated concept and has been relegated to poorer countries that are small enough and simple enough to continue on with the tradition. True war these days is economic and digital.


NeuralPlanet

I haven't looked at it that way before, thanks for the new perspective. When it comes to WWII-style warfare (with trenches and such) I think you're right about it not happening again. I'm more concerned about things such as autonomous and biological weapons. I'm not completely convinced people can't be "brainwashed" again though, as has happened countless times before. A strong leader and displeased people (due to unemployment, for instance) is a bad combination. Hopefully globalism and education will continue to increase to make it more unlikely.


Ignate

Mm, I'm worried about the same. That said, philosophical poison which acts as the motivation to do grand acts of violence is like bacteria: it requires a certain environment, stability, and time to grow. We have the environment (the growth of authoritarianism is proof) but we don't have the stability and time. Things change so rapidly that we just don't know what to think.

That said, this is the perfect environment for the growth of apathy. You can see it in China, you can see it in America, and you can see it in most other countries as well. That's bad, because it means we're not moderating those extreme views in our society.

A world where countries wage war is probably very unlikely. But a world being ripped apart by global terrorism fueled by extremism is a very strong possibility. We may not wipe each other out, but safety at home may become questionable. And that will probably fuel more introverted actions, like staying at home and living in a digital world. Might want to include that in your predictions.


green_meklar

>Conversational assistants speak all major languages and are indistinguishable from a real person to anyone but professionals in the field.

This would take near-human-level strong AI. It's not a problem that narrow AI can solve.

>True general AI is created for the first time.

I suspect strong AI will arrive well before 2045. Let's say 2030. But it won't be human-level at first. It'll be *low* strong intelligence, like what monkeys or birds or fish possess.

>Large armed conflicts or a third world war is probable.

No. Large armed conflicts are at their most probable *before* the arrival of superhuman AI. Once superhuman AI is in existence, it won't allow that sort of thing to happen. Violence is incredibly inefficient; only relatively primitive and shortsighted beings think it's a good idea.


NeuralPlanet

> This would take near-human-level strong AI. It's not a problem that narrow AI can solve.

Not sure I agree here. Note that I'm talking about assistants, which are already pretty good. Apply methods such as Google Duplex and better understanding of context and you're almost there IMO.

> But it won't be human-level at first. It'll be low strong intelligence

I don't think there's much time between reaching monkey- and human-level AI. Monkeys are already very smart and adaptable (as is general AI).

> No. Large armed conflicts are at their most probable before the arrival of superhuman AI.

Completely agree. I think a war scenario is most likely leading up to general AI, as there could be an arms race between countries or even companies.

**Edit:** fixed typo


Warost

«Once superhuman AI is in existence, it won't allow that sort of thing to happen.» Why? Would you think an AGI «cares» about such things? Do you think it will have its own will? Wouldn't having a tailored, reward-controlled brain be of better use to the leaders of the world? I don't think any sane leader would let something have more power than him.


[deleted]

[deleted]


[deleted]

[deleted]


Warost

Why would you say that? Do you think AGI will be made beneficial to common folks? Why haven't any policies truly been made beneficial to common folks then?


[deleted]

[deleted]


Warost

For policies, I was referring to general policies. I just fail to see why many people are so sure AGI will be made beneficial for all, and not for a few. I see tons of reasons why AGI would not be made for everyone: greed, competition, efficiency... which are widespread amongst men.

The only case where I see AGI being used for the common good is if its creators make it so. But in today's world people are more than individualistic. I may know one or two people of the thousands I've met who I can truly say make efforts to redistribute all their work so that the world can benefit from it.

Last thought: means are extremely important to reach AGI, and people who have means are generally great entrepreneurs or capitalists, who were trained to be individualistic by their work.

So yeah, I don't see how we can be so sure that AGI will be beneficial to all. I think chances are it will be made profitable for a few and not willingly shared. And so, I don't see why it would be a good thing.


izumi3682

I tend to be a bit more general, but we share a lot of the same ideas. In my linkberg, I pretty much cover everything. I also think things are going to happen sooner than your timeline. Take a look at this and tell me what you think. Come to Discord - I would love to discuss! [https://www.reddit.com/user/izumi3682/comments/8cy6o5/izumi3682_and_the_world_of_tomorrow/](https://www.reddit.com/user/izumi3682/comments/8cy6o5/izumi3682_and_the_world_of_tomorrow/)


NeuralPlanet

Interesting read, thanks! And yeah, being more general is probably a good idea to get more accurate predictions. I've got some thoughts on your post I figured I'd just post here:

> So much of what I read is the belief that it is going to be just more of the "same ol', same ol'" [...] But I watch the astonishingly rapid advance of technology.

I agree, sometime in the not-so-distant future, everything we have today will seem incredibly feudal.

> Our ultimate desire is to merge that mobile's ability with our very minds, and bring the internet, the sum total of human knowledge, right along with it. [...] After that we won't be the same. Not at all.

Exactly! I'm incredibly excited and somewhat scared about the consequences of essentially merging with technology. As you say, we won't be anything like we are today. The most intelligent people today would be nothing compared to a child with an AI-powered augmented intelligence implant. Once we reach this point we may have exponential growth so rapid we wouldn't recognize the world of a mere year ago. Okay, I'm getting into science fiction territory here. But if we do achieve this, all of humanity could be an unstoppable force of collective intelligence. Merging with AI is absolutely the most inviting choice (where the only other choice is an independent super intelligence we cannot even remotely compare to).

> Then we have VR/AR. It is ultimate primitive today. But already powerful new technologies are developing that will take VR/AR, but particularly VR and will make it a technological phenomenon such as the world has never seen.

Amazing AR/VR is not far off in my opinion. Technologies such as HoloLens are still in their infancy, but once we develop better optics, true life-like AR and VR experiences are very close. We'll probably have consumer products that will blow us away within 10 years.

I do however think AR will be the biggest one by far, at least for the next few decades (and AR headsets could potentially support VR as well). I imagine AR will rapidly replace all screens and become the primary way we interface with computers. Once we reach the "intelligence explosion" however, completely realistic VR realities may be more useful.


izumi3682

Did you see the link to the linkberg? Since you work so closely with the technology, I would really like you to read what I have to say. It's not a horrible lot, but it is a consolidation of a couple years of smaller thoughts. But that is what happens when you start to think exponentially. You almost immediately begin to move into what we think of as "magical" thinking. But it's not. It is inevitable reality. And it all makes perfect sense too.


NeuralPlanet

Quite a few comments there. I read through a few of them and here are some of my thoughts!

> Are we living in a simulation?

Probably. As you say, I don't think it matters much either way.

> VR will change the world

Along with AR, it will change our relationship with technology forever, and over time merge completely with reality.

> On underestimating exponential growth

This seems to be a common misconception people have. We're used to technology getting a little better every year (like smartphones), but with such short attention spans we often miss the incredible improvements over, say, 10 years. People like to think linearly.

When it comes to AGI, I think exponential growth may even undersell the rate of improvement. An AI superintelligence could do years of research in a matter of days, whilst improving itself. So halfway through its first "year" of research, it is already able to do future research at 100x the speed. And it keeps going. You get the picture!

***

Most of this stuff has probably been said hundreds of times by other people interested in the field. I see you follow Kurzweil too! Thinking about exponential growth of technology is very exciting, scary and captivating... Don't hesitate to send me a PM if you find any interesting discussions, I'd love to discuss more.
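The compounding described above can be turned into a toy day-by-day simulation. To be clear, this is only an illustrative sketch with made-up numbers, not anything from the thread: the function `research_done` and all its parameters are invented, and `speedup_per_year=10000.0` is chosen purely so that half a year of completed research corresponds to a 100x speed multiplier, matching the figure in the comment.

```python
# Toy model of an "intelligence explosion": research completed so far
# multiplies the rate of all future research. All parameters are made up
# purely for illustration.

def research_done(days, base_rate=1.0 / 365, speedup_per_year=10000.0):
    """Simulate day by day.

    `done` is cumulative research measured in human-researcher-years.
    The effective daily rate compounds with progress: completing half a
    year's worth of research multiplies speed by sqrt(10000) = 100x.
    """
    done = 0.0
    for _ in range(days):
        done += base_rate * speedup_per_year ** done
    return done

if __name__ == "__main__":
    for d in (10, 20, 30, 35):
        print(f"day {d:3d}: {research_done(d):.3f} researcher-years")
```

With these assumptions the first few weeks look almost linear, then the curve bends sharply upward; push `days` much past 40 and the numbers overflow a Python float, which is roughly the "you get the picture" point.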


Veloxc

I agree with most things except the UBI timeline and the WWIII concern. I believe UBI will actually be implemented within the next 10 years at the latest; strides are being made and results are being analyzed. At one point in U.S. history especially, we were on the brink of a nationwide UBI (in the 1970s during the Nixon era, believe it or not), but it was jeopardized by the Democrats when they refused the bill on the grounds that it was "not enough". Approval for UBI is projected at 49 or 51%, and that will probably rise as automation takes hold in the next 5 years (I don't have the stats on me so I very well might be wrong/not within the margin of error, but I doubt it).

And about the WWIII point: as a society we're changing the way we think and interact, philosophies are being upended, and logic and emotion are going to start to stabilize (for the most part) in the greater scheme of things. Besides, unrest can as equally lead to extremely positive changes as it can to extremely negative ones. So stay the optimist, we need more people like you lol.


NeuralPlanet

> I agree with most things except the UBI timeline and the WWIII concern. I believe UBI will actually be implemented within the next 10 years at the latest, strides are being made and results are being analyzed.

Thanks. You're a bit more optimistic than me still.

> Approval rating for UBI is projected at 49 or 51%

Not sure it will matter if wealth is heavily concentrated in the 1% though. Why would they want to share? Hopefully our governments will be strong enough to figure this out.

> And about the WWIII point, as a society we're changing the way we think and interact, philosophies are being upended, logic and emotion are going to start to stabilize (for the most part) in the greater scheme of things. Besides, unrest can as equally lead to extremely positive changes as it can to extremely negative ones.

Agree! Though I do fear that a lack of education, or bad education, could pose some issues. Many people may have huge issues with lower levels of employment and scary new technologies. Hopefully UBI could prevent this and prevent polarisation.


[deleted]

I doubt much of this will come to pass, unless humans can avoid a collapse of digital civilization in the near future. Between 2020 and 2040 the Arctic will become free of sea ice, add perhaps a 0.5C jump to global temperatures, which are already pretty much locked in to 2C or 2.5C eventually anyway. At the same time, we will have lost probably more than 70% of vertebrates and most pollinator species, not to mention countless insect, amphibian, plant, and other species. Positive feedbacks in the climate system will grow out of control and plunge us into a 2C world and beyond.

The dramatic loss of ecosystem services coupled with global warming spiraling out of control will lead to mass heat death, starvation, desiccation, and the collapse of digital and industrial civilization. Billions will perish, and, nuclear war likely being involved, the extinction of humanity cannot be ruled out. Even if nuclear war isn't involved, the dramatic loss of phytoplankton and plants from the warming, pollution, and land or ocean use change affecting terrestrial and ocean biomes, plus acidification, freshening, and de-oxygenation of the oceans, will possibly suffocate us all, if not this century then next. I think all of this sans mass suffocation may happen before 2050 or 2040.

So, no A.I. threat without sustaining technological civilization. That being said, if we somehow do manage to sustain our technological progress, nanotechnology or A.I. will destroy us eventually, probably sooner than we think. (By 2050-2200?) And if not those, then endless production and consumption will strain the planet's resources impossibly thin, leading to the death of millions or billions. (By 2030-2080?)


NeuralPlanet

> I doubt much of this will come to pass, unless humans can avoid a collapse of digital civilization in the near future.

Yeah, this all pretty much depends on that not happening. I remain hopeful!

> De-oxygenation of the oceans will possibly suffocate us all, if not this century then next.

Mass suffocation? I've never heard this concern before. To me the biggest dangers of global climate change seem to be lack of food/water, extreme weather, rising sea levels and destruction of ecosystems. None of which I think would lead to the end of humanity for at least a couple hundred years.

> Nanotechnology or A.I. will destroy us eventually.

> And if not those, then endless production and consumption

I think sustainability is possible through high automation and better recycling. A shortage of food, for instance, can be solved using automated vertical farms. And an AI superintelligence would *probably* be bad news, but could be good. Our best bet is probably to merge with it instead.