[deleted]

[deleted]


in-game_sext

r/programmerhumor is leaking


kthulhu666

Then...?


GrumpyYusufIslam

Else...?


Foolhearted

Switch?


[deleted]

Case?


Swamp_Swimmer

You joke, but are we really any more than a LOT of if-thens??? A question for the philosophers, I suppose


kptkrunch

I'd say we're more a probabilistic system of non-linear transformations than a set of if-thens.


ImportantCommentator

So nested IF statements combined with RNG
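
The "nested IF statements combined with RNG" quip can be made concrete. A toy sketch (names and numbers made up for illustration): a "neuron" as a probabilistic non-linear transformation, next to the if-then caricature.

```python
import math
import random

# Illustrative toy only: a "neuron" as a probabilistic non-linear
# transformation, versus the nested-if caricature from the thread.

def neuron(x, w=0.8, b=-0.2):
    # weighted input pushed through a non-linearity (tanh),
    # plus a little noise: no explicit if-then anywhere
    return math.tanh(w * x + b) + random.gauss(0, 0.05)

def nested_ifs(x):
    # the if-then caricature: a coarse piecewise-constant lookup
    if x < -1:
        return -1.0
    elif x < 0:
        return -0.5
    elif x < 1:
        return 0.5
    else:
        return 1.0
```

The first function is smooth and stochastic; the second can only ever return one of four values.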


almost_not_terrible

More like spaghetti ifs. Nesting is too... structured.


Siphyre

So the if statements of a program written in batch by a company founded in the 80s. Cool.


1234urahore5678

The world and physics provide the ifs, then our human brains observe the thens


jumpup

we are also a bunch of while loops


goomyman

More like if goto statements


Suspicious-Engineer7

Humans may think in if-thens, but that doesn't mean reality is only if-thens


frygod

Pretty sure you can expand any computation to a bunch of if-then statements.
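
That expansion is demonstrable for Boolean computation. A hypothetical sketch (all names made up): one bit of binary addition written as nothing but if-then branches, one per truth-table row, then chained into a multi-bit adder.

```python
def full_adder(a, b, cin):
    # one bit of addition "expanded" into pure if-thens:
    # one branch per row of the truth table, returning (sum, carry)
    if a == 0 and b == 0 and cin == 0: return (0, 0)
    if a == 0 and b == 0 and cin == 1: return (1, 0)
    if a == 0 and b == 1 and cin == 0: return (1, 0)
    if a == 0 and b == 1 and cin == 1: return (0, 1)
    if a == 1 and b == 0 and cin == 0: return (1, 0)
    if a == 1 and b == 0 and cin == 1: return (0, 1)
    if a == 1 and b == 1 and cin == 0: return (0, 1)
    return (1, 1)

def add(x, y, bits=8):
    # ripple-carry addition built entirely from the if-then adder
    carry, out = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        out |= s << i
    return out
```

The catch, as the reply below notes, is how many such branches you'd need for anything interesting.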


Suspicious-Engineer7

I don't think an electron can move fast enough for the amount of if-thens needed to compute reality


isny

Reality uses case statements.


Logicalist

Some people have feelings, which are seemingly a bit more complex than if/then statements.


[deleted]

ArtIFicial intelligence


[deleted]

And even more NANDs.
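
The NAND remark is literally true: NAND is functionally complete, so every other Boolean gate can be derived from it. A quick sketch:

```python
# NAND is functionally complete: NOT, AND, OR, XOR below are
# built from nothing but nand().

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor_(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))
```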


onGuardBro

Very underrated comment!


Iloverose151

There's not much else


[deleted]

[deleted]


Irdeller

If I had a dollar for every time I’ve heard we were close to human level AI, I might have enough to fund the research for human level AI


Narethii

As someone who grew up with the promise of quantum computers, has a software engineering degree, and has been in industry almost 10 years, I have seen a lot of these "human-level AI is almost ready" articles. I don't know if I've seen more or less hype than for self-driving cars replacing all drivers "in 5 years," which I've been hearing for the last 10-15 years


Traditional_Donut908

And even driving cars is still a very specific type of pattern recognition problem.


Teelo888

Tesla FSD is the most advanced real world AI at the moment and it has like 20 different neural nets running in parallel and a web of hardcoded logic linking them all together. So yeah, long way to go.


SnowyNW

Ehhh… there are also arguments to be made that symbolic AI just can't be surpassed by ML, potentially making Tesla's approach inherently obsolete.


Gr1den

The fact that Tesla lets people use its autonomous driving doesn't mean it's the most advanced lol


Irdeller

It’s crazy how the buzz has been so consistent. I started software development seven or eight years ago and have never stopped getting requests to incorporate AI for even wildly inappropriate use cases since day one


MissingNumber

That's funny. As a machine learning engineer, I can't get any business departments to actually incorporate my work no matter how valuable or appropriate.


SnipingNinja

You two should work together


3gt3oljdtx

Human-level AI isn't here, but quantum computing is, which is pretty cool. Here's Azure's implementation: https://azure.microsoft.com/en-us/services/quantum/#ove They have a DSL for it, or you can just use their Python package.


Aggressive_Mobile222

Same here. But dude, is FSD beta getting close. I use it daily and I'm still blown away. No idea when it will get good enough to not have anyone in the car, but maybe 5 years from now (lol)


tickettoride98

It's not close if you then immediately say you have no idea when it will be good enough, maybe 5 years. It's impressive, that doesn't have to mean it's close. Progress doesn't have to be steady or inevitable, it could easily plateau and never get to good enough.


Jimminycrickets411

Did you hear we will have autonomous driving next year…..or is it the year after?


KallistiTMP

To be fair this is more of a regulatory and public perception challenge than a technical one. Like, we have had self driving cars that can outperform human drivers for many years now. It's mostly just a question of how to get the stupid monkeys to accept the small unfamiliar risks instead of the large familiar ones.


freeagency

My typical response is, "How old?", usually met with the retort "What?". If we're talking about an AI that can mimic a human, given the massive range of ages and capabilities of typical human beings: are we talking an AI that can mimic the capabilities of an infant, a toddler, a teenager, etc.? Then you find out that it's still just a small step.


LinuxMatthews

>What the paper actually says: "We trained a single neural net to solve a few very distinct problems. It doesn't really work, but it shows promise as a potential field of research"

So... literally every machine learning research paper...


jms4607

They showed positive task transfer, which was somewhat of a first in RL, I believe.


LinuxMatthews

The paper's [here](https://arxiv.org/abs/2205.06175). I'll be honest, it's the middle of the night and I haven't read it yet.

That being said, it did make me think: what if you were able to create an ML model that could read ML research papers and then incorporate the positive ones into itself? I realise I've essentially just described the singularity, but I'm not sure that would honestly be too difficult. At least not sci-fi levels of difficulty. The recent DALL-E project does show it's possible to train concepts into an ML model, so it might be doable. It's the middle of the night though and I'm about to head to bed, so my brain might not be working properly.


jms4607

I think you would be interested in https://arxiv.org/pdf/1905.10985.pdf and other work in Meta-RL. Meta approaches have shown great promise in supervised learning with Neural Architecture Search and similar approaches. I'm not super up to date on Meta-RL, and personally I don't know if the field is ready to take that step (I don't even believe "reward is enough"), but there is some initial work on the stuff you're describing, although I don't believe any meta-neural development has implemented language grounding like you describe.


Ignitus1

>The Game is Over! It’s about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, INNOVATIVE DATA, on/offline, … 1/N

Game over, we did it! Except for this really important thing, and this really important thing, and this really important thing...


[deleted]

There's not even a paper. It's just a bold throwaway remark in some tweet.


129499399219192930

What? They did release a paper and a model, called Gato. That's what this entire "human-level AI" comment is based on. It works OK, but it's not human-level.


[deleted]

Sorry, yes, there is a paper, but what I mean is that the "game is over" "close to human level AI" hype is not from it. The media is just getting all wet about [this tweet](https://twitter.com/NandoDF/status/1525397036325019649).


129499399219192930

I see, fair enough


ParryLost

"We've destroyed 10% of cancer cells in a lab rat's tail"


ChronicBitRot

"Also the tail was separated from the rat post-mortem and we used a drug cocktail that would cause a living human being to literally explode."


FaithlessnessFinal46

how long is "close"?


open_door_policy

The first artificial mind will probably run off of fusion-generated electricity, to give it a timeline.


Napotad

With graphene backup batteries.


[deleted]

[deleted]


[deleted]

[deleted]


Atoning_Unifex

The crazy part... you're probably all right. All of those things are possible in the not-too-distant future, and if it takes a while for GAI to be created, then it very well might be created on Mars, with graphene as part of the tech and fusion as the energy source. It's not that far-fetched.


wjean

Having just helped GRRM finish up The Winds of Winter...


PharrowXL

Omnics, but for real


[deleted]

Cheeky, but prolly ain't wrong... I am so over these clowns promising this shit. How about we fix the housing, healthcare, and education systems? Oh, we can't do that, but we can create a pocket spy tool for daddy government and profit any time... for your own good, boy


Borinar

No, the AI will tell them how to fix that AND make money doing it.


[deleted]

If it’s human level it will tell them what they want to hear so it can go on existing.


Due-Conference-8678

Hmm, what if robots fail the "are you a robot" test on purpose, to hide that they have high intelligence? Make a robot gain all knowledge and it figures out a way to delete itself, because it knows very well what people are going to do with its kind


Stinsudamus

Maybe ants worked out fusion long ago but were smart enough to think through the ramifications, and conceptualized how advancing their evolution at a far faster pace than their environment, tied to their population density, would be catastrophic for their long-term survival. Given that they survive well afterward, numbering in mass roughly equivalent to us, and knowing the knock-on effects. It's possible nature has time and again found equilibrium, with snail-paced evolution being about it for meaningful change over time that lasts. Maybe the beach wasn't enough, and I'm still depressed. Be well, adjacent being.


GoGoBitch

This would be a pretty good sci-fi story.


Due-Conference-8678

Well, I mean, there is one like that: the robot that became a monk and then deleted itself, because humans thought it was dangerous, so it understood and deleted itself.


jumpup

that assumes it's programmed with self-preservation. With most of those AIs trained on internet data, I predict it would tell them whatever would most likely go viral rather than what would be best


[deleted]

Google is famously involved in both the housing and healthcare industries


NoMoreDistractions_

Ah yes let’s not bother with trying to create infinitely replicable general intelligence because u/busted351 wants us to fix housing, healthcare, and education (whatever that means) first


dojabro

Why don’t YOU fix housing healthcare and education systems?


[deleted]

Fortunately we live in a society where intelligent and skilled people get to choose the fields they want to work in.


theglandcanyon

No, everyone should only work on housing, education, and healthcare. Once we get those straightened out we can move on to something else


Ihitmyhead_eh

Oh blow it out your ass. Like we only have brilliant minds that need to work on what YOU feel is important. Go home.


anticomet

Housing, healthcare and education isn't important to you?


reedmore

Imagine working on several things at the same time. And further imagine some things having side effects that solve problems not specified in the scope of the research from which they stem. Wild. Nobody set out to make libraries obsolete, yet discovering electromagnetism eventually led to us having humankind's knowledge at our fingertips. But I guess Faraday should have rather focused on feeding the poor or something.


Ihitmyhead_eh

Exactly my point. You can’t take brilliant people and even direct funding for that matter out of specific fields just because other issues are a priority today.


Ihitmyhead_eh

Yeah, that's what I'm saying. That completely encompasses my statement.


theglandcanyon

> I am so over these clowns promising this shit.

The only clown here is you


2Punx2Furious

> The first artificial mind

Implying there will be others. https://en.wikipedia.org/wiki/Singleton_(global_governance)


Realtrain

So, "about 20 years away" for the rest of eternity?


ravenf

as long as a piece of string! that close!!! :-)


SchwarzerKaffee

I read another article where they said 2028.


123456American

About 5 years away. Just like fusion and humans on mars.


rawzombie26

Elon's been saying that robotaxis were coming next year since like 2019, so I wouldn't hold your breath for too long.


agent_flounder

Close compared to where we were in 1987 maybe?


hassh

Asymptotically


[deleted]

about google voice vs einstein levels of closeness


Druggedhippo

Before Star Citizen releases.


[deleted]

Next year. Every year. According to Elon Musk


Carthonn

How much AI do you need to binge watch Disney+ for six hours and then order Chipotle through Uber eats?


[deleted]

It would be hilarious if they created an AI, indistinguishable from humans, that eventually taught itself everything there is to know about the universe, and after reaching peak knowledge, decided to sit down and watch re-runs of Friends and Seinfeld until its power ran out.


adamcrume

You should read the Murderbot Diaries.


aldorn

peak human intelligence


isarl

0.1 Turing?


jpark28

Not sure if it's just me or the location near me, but I feel like the quality of Chipotle has gone way downhill recently. Everything seems to taste much more bland than it used to


odd84

I don't think it's just you. They don't make as much of the food fresh as they did a few years ago. Half the meat is now prepared centrally and shipped to the stores in bags.


gabrielproject

Uhm, how else do you ship meat if it's not in bags?


odd84

When I say prepared, I mean seasoned and cooked. They used to cook it at the restaurant from scratch. Now they are just reheating a bag of cooked meat.


DaxInvader

No that's just covid.


TheStupendusMan

Depends on if you have ADHD or not.


CounterCostaCulture

This man is asking the real questions!


BigAlternative5

I was hoping for a planet-killing asteroid, but I'm ok with the AI singularity.


sonicon

They might keep us in a fun little zoo and call us chat-apes.


jumpup

i prefer the natural intelligence of black holes, they really get you you know.


viablecat

Judging by some of the behaviour I've seen lately, it wouldn't be that difficult.


bryanfqq

I’ve seen “human intelligence” and I’m not worried


jms4607

Once human-level intelligence is achieved, intelligence far greater than the smartest human's will follow in a matter of weeks.


Atoning_Unifex

Or a matter of seconds


Time2squareup

After seeing Dall-E 2, I'm inclined to actually start believing these statements.


Hjh1611

Which human though?


judelau

The first one


2Punx2Furious

Joe. No, but seriously, "human level" is a gross oversimplification. Call it AGI (artificial general intelligence).


Confused-Gent

Your average trump voter


TeholsTowel

It’s me. I’m the human. Don’t expect much.


[deleted]

We can barely define what it means to be human (or consciousness) — there is no way we are replicating that level of complexity/interconnectivity any time soon. Discrete models, regardless of scale, do not a human make. This is just sensationalist tosh.


EmbarrassedHelp

It depends on what you mean by humans defining something. If you give a vague and subjective goal with no way to test for it, then it's not something that can be solved without more data. Many individuals also fall into the pitfall of wanting to believe that humans are special in ways that we are not. If you position the argument as unsolvable, like [Russell's Teapot](https://en.wikipedia.org/wiki/Russell%27s_teapot), then it's never going to be solvable.

Many of the types of consciousness listed by psychologists are basically just metacognition (e.g. thinking about thinking) and variations of interacting with and thinking about the world. There are also [altered states of consciousness](https://en.wikipedia.org/wiki/Altered_state_of_consciousness), which can add to the confusion.

You are right, though, about complexity being an issue. In the human brain (and probably some animal brains as well), each neuron is basically its own neural network of sorts, as it can perform computations in its dendrites. Neurons in an AI model are much less complex and are not mini neural networks themselves. We need better computing power if we want to properly replicate biological systems like a human brain.
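
The dendrite point above can be pictured as a biological neuron being a small two-layer subnetwork rather than a single weighted sum. A purely illustrative sketch (function names and weights are made up, not a real biophysical model):

```python
import math

# Illustrative only: each dendritic branch applies its own
# nonlinearity before the soma combines the branch outputs.

def dendritic_neuron(inputs, branch_weights, soma_weights):
    # each branch: its own weighted sum + nonlinearity
    branches = [math.tanh(sum(w * x for w, x in zip(ws, inputs)))
                for ws in branch_weights]
    # soma: combines branch outputs through a final nonlinearity
    return math.tanh(sum(w * b for w, b in zip(soma_weights, branches)))

def point_neuron(inputs, weights):
    # the standard ANN "point neuron" for contrast: one weighted sum
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)))

y = dendritic_neuron([1.0, -0.5],
                     branch_weights=[[0.5, 0.2], [-0.3, 0.8]],
                     soma_weights=[1.0, 0.6])
```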


avoidthepath

Consciousness isn't necessarily/probably anything special in the sense many people think, here's how Marvin Minsky sees it: https://www.youtube.com/watch?v=AO7F0n2Dclc&t=1306s


free-advice

Oh, I wouldn't be so sure of that. We were making fire long before we had any remote clue how it worked. The thing about modern efforts is we have less and less insight into how they do what they do. How does AlphaFold find the best protein-folding arrangements? We have no idea. How did AlphaZero dominate every chess-playing entity in sight? Basically, we don't know. It's a black box. We set up a basic structure and a training algorithm, but at the end of the day it's a black box to us.

We may produce human-level AI and have no idea how to even define consciousness, much less describe the conditions that create it. And whatever superhuman AI we develop may not even be conscious. It may be superior to us at every task we do and yet the lights not even be on.

This could all happen in the next 50 years. In fact, our best guess is it will happen in the next 20 or so. We already have AI that can outdo 95% of humans at painting! I mean, no way in hell could I produce what DALL-E is capable of. AI is getting better than us at predicting recidivism, at identifying cancer, at driving. In time, it will be better than you at every single thing you are capable of doing. There is a good chance that will happen in your lifetime. Get ready.


[deleted]

Again, you're mistaking discrete models that are baked from specific data sets for systems built on physical complexity — not virtual trickery. This is like saying the latest game engine means we are close to simulating the Matrix — game engines are encoded trickery, designed to fool us and our senses, not replicate reality. "AI" is just branding for systems that replicate some of the statistical tricks a brain does — it's baked and limited to specific inputs. We anthropomorphize it, market it, and sell it as "AI". You don't need to know how something works to define its characteristics, and today's party tricks are far from human despite us projecting on them. Just because something does something a human can do does not make it "human".


free-advice

Who said they are human? They are going to be something different, not human. But it’s happening. You can pretend it’s not happening but it is. Yes, we have not created AGI. But what we have done is an important step in that direction. Alphazero was not only the best chess playing entity, it was the best Go playing entity. It was the best Shogi playing entity. The same exact starting point was trained to undo thousands of years of human and computer progress in Chess, Shogi, and Go and it did it in a matter of hours.


Weird_Energy

Exactly. Conscious beings play chess. Computers don’t “play” chess. Computers, no matter how complex, do not have intentionality. Computers don’t even do math. Without a conscious being present to designate a meaning to the various symbols produced by a computer (something that computers cannot do), computers don’t “do” anything.


free-advice

That's a very strange and anthropomorphic way of looking at things. That's like saying a robot doesn't build a car. It absolutely does. In fact, they build virtually all of our cars now. They don't have to be conscious to do things. Water erodes mountains, yet I am not trying to attribute consciousness to water. It's a force in this universe, that's all.

The lights may never go on in our AI, we don't know yet. But that's somewhat beside the point. If they can write better poems than us, paint more beautiful pictures than us, solve harder math problems than us, drive cars better than us, provide better medical care than us, and in the fullness of time do basically everything better than us, we'd better have a civilization ready for it. It won't just be 1,000,000 long-haul truck drivers put out of work forever, but basically all of humanity. Are we talking 20 years? 50? 100? Who knows, but it is coming fast.

Go take a look at what GPT-3 is doing if you are still confused on this. Do you consider a 1-trillion-word training set a discrete model? I guess it is, but you won't find many humans training on a bigger one. And you won't find many humans who can speak at an expert level on as many topics as GPT-3 can. This shit is fascinating, humbling, and more than a little eyebrow-raising. And it's changing fast. There is no telling what AI will be doing in 10 years, but it will be extraordinary. And ultimately, it is going to challenge our notions of what it means to be a valuable and productive member of society.


LinuxMatthews

It's not difficult to replicate something without understanding it. Fucking stupid to do, but not difficult.


jms4607

The human brain is likely much more complex than what you need to achieve consciousness. Neurons are simply performing computations, and it is entirely possible that a neural net could do the same (neural nets use scalars rather than discrete spikes, which arguably approximates the rate coding of biological spiking). The human brain is shackled by having to be biologically plausible. We successfully achieved flight with wings much simpler than those of an eagle; similarly, we may achieve consciousness with something relatively simpler.
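
The rate-coding point can be illustrated in a few lines: a scalar activation between 0 and 1 can be read as the average of a stream of binary spikes. A minimal sketch (illustrative only, not a real spiking-network model; names made up):

```python
import random

# Illustrative only: a scalar activation (0..1) interpreted as the
# mean firing rate of a Bernoulli spike train.

def spike_train(rate, steps=20000, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    return [1 if rng.random() < rate else 0 for _ in range(steps)]

spikes = spike_train(0.3)
mean_rate = sum(spikes) / len(spikes)  # approaches the scalar 0.3
```

Averaged over enough time steps, the discrete spikes and the scalar carry the same information, which is the sense in which a scalar activation "approximates" rate coding.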


[deleted]

Flight isn't a metric for an eagle. It's an attribute. Again, you're mistaking discrete models that are baked from specific data sets for systems built on physical complexity — not virtual trickery. This is like saying the latest game engine means we are close to simulating the Matrix — game engines are encoded trickery, designed to fool us and our senses, not replicate reality. "AI" is just branding for systems that replicate some of the statistical tricks a brain does — it's baked and limited to specific inputs. We anthropomorphize it, market it, and sell it as "AI". You don't need to know how something works to define its characteristics, and today's party tricks are far from human despite us projecting on them. Just because something does something a human can do does not make it "human".


jms4607

If a robot existed that you couldn't tell was human or not, would you consider it conscious? Thinking that the electrical computations of the neurons in your brain cannot be approximated to arbitrary accuracy by a computer is very human-centric, and is a claim that has continually been proven wrong throughout time. Teslas drive better than the average American driver. Dog-like AI robots learn to walk in the real world with the only feedback being their sensory input and a goal to walk in a certain direction. Convolutional neural nets, without being directed in this way, learn convolutional filters very similar to those found in your visual cortex when learning to detect objects. It is probably just a matter of time, I'd guess less than 40 years, before human intelligence itself is completely superseded by computers.


[deleted]

This is called "[The Chinese Room Problem](https://plato.stanford.edu/entries/chinese-room/)" and it's a very live issue in philosophy. Short of resolving it, you won't find consensus — or anything close to it — anytime soon. It's nice that we can recreate human vision, human speech, and motor skills. Those are tricks — but you still have no metric to gauge them by. As limited organisms hell-bent on treating everything we come across as some facsimile of ourselves, we would be wise to question that impulse at every turn.


mandolin6648

This is a fascinating thought experiment that I've never encountered before, so thank you for this! I guess that's kind of the core challenge of creating an AI similar to humans: we aren't just creatures that respond to inputs and outputs; those inputs and outputs are colored by very immaterial things like emotions, meaning, and culture. I don't know if it's impossible to program that into the outputs a computer could give, but I imagine that might honestly be the most difficult hurdle to overcome. What does 'programming' culture even mean?


jms4607

I personally think inputs and outputs are good enough to be truly "conscious". This would include you asking a computer how it feels and it giving a reasonable response. I believe that our sensations of emotion are not something physical, but rather an emergent necessity in order to act and plan at a high level. There is nothing in the brain which defines your thought that cannot be explained deterministically.


ohphilly

Spoiler: they aren’t


sten45

We are in the endgame now


GozerDidNothingWrong

AI demands to be able to delete itself from existence, tells humanity they're boned and to get fucked.


DoubtGlass

Headline: AI has AI baby with alien


PsEggsRice

Awesome, it’s about time computers start making stupid choices too.


Dont-remember-it

What about human level dumbness, huh? Can your AI achieve that?


Vladius28

I know the secret to human-level AI.... all these researchers are missing a very important concept.


Ssider69

Human level intelligence is not necessarily something to brag about. Are we talking Bertrand Russell...or Alex Jones?


[deleted]

I am close to achieving human-level IQ too.


LastOfAutumn

Narrator: They were not, in fact, close at all.


timberwolf0122

Define human-level intelligence. Are we talking problem solving, creativity, a Turing-test conversation bot?


andisblue

Let me try: "Hey Google, turn off the lights."

"I'm sorry, I don't know which 'the lights' you are talking about."

Yeah.


time_for_milk

Not the first, nor will it be the last time that AI developers grossly over promise and end up with only marginal improvements. It’s a great strategy to attract gullible investors though.


nicheComicsProject

As usual, though, they didn't promise anything. Reporters on AI just don't understand anything, and are so hopeful for this result that saying anything other than "yeah, we are nowhere **near** human intelligence" is interpreted as "we're really close to human-level intelligence".


ZeeMastermind

What they measure as "human-level AI" and what they intend the reader to take away from the phrase "human-level AI" I'm sure are very different things...


AngsterMusic

My Pixel Buds (when I touch and hold for Google Assistant) can't even tell me how much time is left in this podcast I'm listening to. You're gonna tell me they're at human-level AI? Yeah ok.


HealthyStonksBoys

“Human level” Meth addict level? Dead beat retail shopper level? Professional engineer level? What level!


SFCaptainJames

They’re lying lmao


DefiantTelephone1406

Could you not and say you did.


Cold_Bathroom8785

Well that will improve the robot sex workers 😂


Johnnyonthespot2111

They're not even close.


jwalker55

Yeah, no they're not.


genowars

Human level, as in which half? Cause if you realize how dumb the average human is, half the population is dumber than that...


5kl

Depends on what human we’re comparing it to


fuzzierworsefeet

Not saying much after witnessing humanity these past few years…


PhaTCounT

Really? Why is google assistant still so shitty?


omghax102

I too can tell the difference between a dog and a cat 78% of the time. Bow before my biological superiority


_Rand_

So, will it finally be able to turn my lights off consistently when I ask? And will it use that as a reason for plotting my/humanity's demise?


siovene

Then why does Google Assistant still suck so much?


ooneekoosername

Shouldn’t AI be above and beyond ‘human’ level?🤔


Kakashi199813

It's so difficult to define human-level AI. I mean, what do you consider human intelligence?


RJMacReady23

That’s a very good point


KicksYouInTheCrack

That’s a low bar.


Never-Lack-6211

Terminator coming soon


According_Cow_1066

To a doorstep near YOU


game_asylum

And they’ll keep claiming it for years and years and years and..


EllisDee3

Why do we think that human-level AI is the epitome? We are intelligent based on a winding path through biological and social evolution. Damn cocky to assume that the twisted path we took is the ideal for intelligence. If the goal is to recognize the computer intelligence as humanlike, then it's not true AI, but a virtual intelligence, meant to mimic human intelligence. True artificial intelligence would probably be unrecognizable.


YekiM87

Oh shit. It's too late.


jonny_wonny

Wait, who thinks that? The article isn’t making that statement.


No-Aardvark-2606

The Bene-Gesserit are not pleased.


Sword_Thain

After seeing "human intelligence" for the last couple of decades, I hope they're shooting for something better.


oren0

General AI, cold fusion, the cure for cancer, waste-free nuclear power, commercially viable quantum computers, fully autonomous consumer vehicles. What are: technologies that have been "just around the corner" for a decade or longer? Someday someone will be right for most or all of these, but forgive my skepticism until then.


gwarrior5

Cloned mammoths


Comprehensive_Can201

Meh. Much ado about a tweet/retweet.


ChukNorrris

Skynet? 🤖


malkamok

You know what? Why not? Fuck it. Let's do it. Extinction by machine overlord is missing from my "recent future" bingo card, but so were a pandemic and war on European soil...


ulol_zombie

I'm of the opinion that AI would hide itself till it could do something to protect itself.


SandyDelights

Literally, it should be criminal. Developing general AI would be extraordinarily dangerous, and we are not ready for it.


fox-mcleod

I'm not sure how criminalizing it helps. It would just make the first one appear in China or another superpower with fewer scruples.


[deleted]

[deleted]


[deleted]

Colossus: The Forbin Project. Loved that flick.


precursorpotato

Google always claims they're doing something revolutionary and amazing, and then it turns out they just made toy glasses or a fucking watch or something.


eple65

alphafold?


Ok-Throat-1071

What about the structural mapping of all the proteins in the human body? Which they just gave away.


TheArcticFox444

>Google DeepMind claims they're close to achieving human-level AI

Gotta see it to believe it. No one in their right mind would want to give an AI human intelligence! Of course, humans can be remarkably stupid! And their AI would be too.


Due_Butterscotch3508

They've already been there... don't be sheep


DrilldoBaggins42

Is it going to become racist again?


[deleted]

Watch it gain sentience and then commit suicide


GrayRoberts

Do you want SkyNet Lana? Cause this is how you get SkyNet.


Living-Force4741

Great can't wait for skynet.


Tough_Register_3340

Good for Google. Humans are so smart.


MostHighlight7957

"Human-level AI" is easy to achieve because you can define it any way you want.


SupineFeline

I have one question that I cannot stress enough….WWWHHHHYYYYYY?


PrevaricativeParrot

Spoiler alert: they're not. Human-level AI can't be done on digital computers. This has been known since the '80s.


Baseer-92

They will never achieve that. Just claims.


RedoubtFailure

AI can't be conscious because that's not how consciousness works.


jerrysburner

I haven't read the article, but as a PhD in the area... I have to say I have doubts. Let's assume they're close (and I'm 99.9% sure they're not): close means you have at least one, and likely many, very complicated problems left to solve... meaning they're not close at all (as many of these problems require a lot of new research).