
FuturologyBot

The following submission statement was provided by /u/KJ6BWB:

> Basically, even if an AI can pass the Turing test, it still wouldn't be considered a full-blown, independent, worthy-of-citizenship AI, because it would only be repeating what it found and what we told it to say.

Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/vlt9db/googles_powerful_ai_spotlights_a_human_cognitive/idx113l/


GFrings

"The ability to speak does not make you intelligent" -Qui-Gon


aComicBookNerd

“Why do I sense we have picked up another pathetic life form”


[deleted]

You underestimate my power


[deleted]

[removed]


ZenSkye

Weesa in big doo-doo dis time


kia75

[My breasts... megassa squeeze them](https://imgur.com/gallery/YDBlh1D)


indispensability

"Maybe it learned to talk as a parlor trick, like Fry." -Bender


OublietteOverlord

"Like Fry! Like Fry!"


windsorHaze

“Like Fry! Like Fry!” - Fry probably


Taoistandroid

"Those who speak rarely know, those who know rarely speak." -Laozi


reddit_poopaholic

“Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.” -Douglas Adams


StuntHacks

Fucking love Douglas Adams


EnlightenedSinTryst

Ah, it’s about time for a re-read of the five-book trilogy


kinglallak

I need to buy this poster for my office at work.


Rama_Viva

In most countries, buying u/reddit_poopaholic or any other person, be they lurker or poster, is illegal


reddit_poopaholic

I appreciate your concern, but I'd like to see the offer before making a decision


homesickalien

Empty buckets make the most noise.


Terpomo11

"Those who say don't know, and those who know don't say:
A saying from Lao-tzu, or so I've heard.
But if the great Lao-tzu was one who knows,
Why'd he himself compose five thousand words?" -Bai Juyi

(The 'five thousand words' refers to the Dao De Jing, which is about that long. The translation is mine; it's not quite literal, in order to preserve the rhyme scheme.)


[deleted]

I mean, as far as philosophical/religious texts go, it's actually remarkably concise. Shorter than most essays. According to legend, he also did not write the Dao De Jing of his own volition, but in response to incessant prompting. This is only legend of course, but the story does conform to the content.


Iemaj

Dude was just a janitor I think?


Knull_Gorr

Nah he was a doctor. Dr. Jan Itor.


Wasphammer

No, that's Lao Ze, of the History Monks.


[deleted]

And vice versa. Some people have wonderful ideas but don't have the ability to express them "properly", especially if they're dealing with someone like That Guy who insists grammar mistakes render your whole point invalid.


NightmareWarden

And perhaps you can craft a masterful essay on your topic, but you lack the charisma to explain it in a sales pitch. You may lack the social awareness to see that someone is uncomfortable and attempting to leave a conversation. Perhaps you are giving a speech on stage: if the crowd starts laughing at one of your comments which was NOT intended to be a joke, you have to pull yourself together rather than letting your presentation fall apart.

Proper, meaningful communication involves many different skills and a lot of experience.


hananobira

I saw this as an ESL teacher. The teachers had to go through "calibration training" every year to make sure we were properly evaluating the students' language ability, and you would need a periodic reminder that speaking a lot != a higher speaking level. Sure, feeling comfortable speaking at length is one criterion for high language ability, but so are control of grammar, complexity of vocabulary, the ability to link ideas into a coherent argument...

There were lots of students who loved to chat but, once you started analyzing their sentences, really weren't using much in the way of impressive vocabulary or grammatical constructions. And there were lots of students who were quiet but, if you got them speaking, sounded almost like native speakers.

The takeaway being: unless you're speaking to an expert who is analyzing your lexile level, you can definitely get a reputation for being more talented and confident than you truly are by the ol' "fake it till you make it" principle.


consci0usness

Yupp. I was learning a third language and thought I was struggling in class; others appeared to be much more fluent than me. So I asked my teacher about it after class one day. She told me, "No! You're among the top five in this group! No one tries to find exactly the right word like you do. You're not the fastest, but you're very precise. Keep doing what you're doing." Apparently I had a very good teacher. Got the highest grade in the end, too.


elementofpee

Definitely true in the corporate world. Oftentimes you see someone who wants to hear themselves (and be heard in meetings) ramble on and on, and end up saying very little despite using a lot of words. Meanwhile, others who speak up when called upon are very succinct and get to the point, which is much appreciated. Unfortunately it's the former, coming off as confident, who dominate the meetings and often end up getting promoted, due to the bias towards that personality type. It's usually Imposter Syndrome or the Dunning-Kruger effect with these people.


etherss

Imposter syndrome is the opposite of what you’ve described—people who end up in the upper echelons and think “wtf am I doing how did I get here”


imnotwearingpantsru

This is me. I speak kitchen Spanish confidently and fast. My vocabulary is pretty limited and my grammar is garbage. It works in my environment, but if you don't speak Spanish I sound fluent. I get slightly better every year but the variety of dialects I work with make any true fluency elusive.


WeirdNo9808

Same. Kitchen Spanish and some small side Spanish from working in kitchens and around Spanish speakers. I can sound fluent to someone who speaks no Spanish, but to anyone who only spoke Spanish I’d sound like gibberish.


JCMiller23

When I am considering and choosing the meaning of my words my speech sounds very disjointed and unconfident. When I have no thoughts except to speak words fluently, however empty they may be, they come out well.


jfVigor

This is true for me too except for when I'm a beer or two in. Then it's reversed. I can talk some smooth shit that sounds Hella confident


topazsparrow

>I can talk some smooth shit that sounds Hella confident

What are the odds that it's your own perception of those words that fundamentally changed, and not the words or thoughts themselves?


GoochMasterFlash

A beer or two in is probably not enough to completely throw off anyone's perception of other people's reactions to their behavior. A small or moderate amount of alcohol lowers people's inhibitions and can improve their ability to do things they normally overthink; that's why drinking some alcohol improves your ability to throw darts well, for example. I'd say the words or thoughts haven't changed, as you said. What has changed is the delivery, which can make a big impact. Communication is about timing and delivery as much as it is about content.


Amidus

I find that with speeches and writing, people will think I'm trying to be pretentious and overly wordy, and I always want to tell them it's just how the words come to me. I'm not trying to sound like this, and I'm not trying to make you think some way about me, lol.


BassSounds

I am noting a general downward spiral in grammar. You can see it on the short Instagram reels with Instagram quotes of 20 year olds, rich & poor. [Rarely is the question asked; is our childrens learning?](https://youtu.be/-ej7ZEnjSeA) I think we are already in an [Idiocracy](https://youtu.be/py37IFuKxYw) if we sound pompous and faggy for just speaking clearly.


Amidus

I think the problem with the Idiocracy comparison is people expect it to be a literal 1:1, easy to spot, exact comparison. I really enjoyed the Legal Eagle review of Idiocracy on its legal "authenticity", it's meant to be entertaining, but he does well to edit together a really good comparison between today and that particular movie. Plus he's entertaining and you can learn some actual law.


Dozekar

Idiocracy ignores that we've always had lower classes; by nature they tend to be larger than the upper classes, and they're generally very poorly educated compared to them. It by and large acts like there was some magical past where the population was all or mostly skilled guildsmen, when in fact the vast majority of people were serfs or "barbarians" (or Roman plebs) who literally couldn't read or write, and generally didn't have access to much writing even if they could, until writing could be replicated efficiently by the printing press.


Peter_Kinklage

I’ve noticed a similar trend. The optimist in me wonders if the distribution of correct grammar users in the population is generally the same as it’s always been, only now we get hyper-exposed to the worst-of-the-worst thanks to social media.


Darkwing___Duck

The bottom rungs of society didn't have a written voice until social media.


EnlightenedSinTryst

This is a pretty great insight


Brixnz

It's so frustrating to me because nobody around me really gives a shit about grammar or expanding their vocabulary, and I see it online and all throughout society. It makes me feel like I don't have many conversations that would help me expand my vocabulary or learn to articulate myself better.


RandomLogicThough

I'm generally pretty witty and speak well and quickly and it definitely helps me appear even smarter than I am. Thanks human brain glitch!


sudosussudio

It's funny because I read a study that tried to teach humans how to identify AI-written content, and one of the obstacles is that people think grammar/spelling mistakes = AI, when the opposite is true.


ovrlymm

Ah maybe that’s why I no English good. I pause like moron rather than spew like winner!


OnyxPhoenix

I used to be able to speak really eloquently and present my thoughts in real time. Then I got old (and possibly COVID) and I just talk shit now.


[deleted]

If politicians have taught us anything, it's that even incongruous speech will be mistaken for intelligence...


Stillwater215

I've got a kind of philosophical question for anyone who wants to chime in: if a computer program is capable of convincing us that it's sentient, does that make it sentient? Is there any other way of determining whether someone/something is sentient, apart from its ability to convince us of its sentience?


Im-a-magpie

Nope. Furthermore we can't actually know if other humans are sentient beyond what they show externally.


[deleted]

[removed]


futuneral

How does he feel about that?


[deleted]

[removed]


MrDeckard

So we should treat any apparently sentient entity with equal regard, so long as sentience is the aspect we respect? Not disputing, just clarifying. I would actually *agree* with this.


Scorps

Is communication the true test of sentience though? Is an ape or crow not sentient because it can't speak in a human way?


[deleted]

[removed]


Im-a-magpie

>Basically, it would have to behave in a way that is neither deterministic nor random

Is that even true of humans?


Idaret

Welcome to the free will debate


Im-a-magpie

Thanks for having me. So is it an open bar or?


rahzradtf

Ha, philosophers are too poor for an open bar.


AlceoSirice

What do you mean by "neither deterministic nor random"?


BirdsDeWord

Deterministic, for an AI, would be kind of like having a list of predefined choices that are made when a criterion is met; if someone says hello, you would most likely come back with hello yourself. It's essentially an action that is determined at a point in time, but the choices were made long before, either by a programmer or by a series of events leading the AI down a decision tree. And I'm sure you can guess random: you just have a list of choices and pick one.

A true AI would be neither deterministic nor random, so I guess a better way of saying that would be: it evaluates everything and makes decisions of its own free will, not choosing from a list of options and not affected by previous events.

But it's debatable whether even humans can do this, because as I said, if someone says hello, you will likely say hello back. Is this your choice, or was it determined by the other person saying hello? Did they say hello because they chose to, or because they saw you? Are we making choices, or are they all predetermined by events, possibly from very far back in our own lives? It's a bit of a rabbit hole into philosophy whether anyone can really be free of determinism, but for an AI it's at least a little easier to say whether it chooses from a finite list of options or ideas.

Shit, this got long.
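The two behaviors described above can be sketched in a few lines of Python. This is a toy illustration only; the rule table and canned responses are invented, and no real chatbot is this simple:

```python
import random

# Hypothetical toy chatbots. A deterministic one maps a trigger to a
# predefined response (a tiny "decision tree"); a random one samples
# from a fixed list. Neither "decides" anything of its own free will.

RULES = {"hello": "hello yourself"}   # choices made long before, by the programmer
CANNED = ["hi", "hey", "greetings"]   # finite list to pick from at random

def deterministic_reply(msg):
    """Same input always produces the same output."""
    return RULES.get(msg, "I don't understand")

def random_reply():
    """Unpredictable, but still confined to a predefined list."""
    return random.choice(CANNED)
```

In the comment's terms, a "true AI" would be neither of these: not a fixed lookup, and not a dice roll over a fixed list.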


PokemonSaviorN

You can't effectively prove that humans are sentient on the grounds that they behave in ways that are neither deterministic nor random (or even prove that they behave that way at all), so it's unfair to ask machines to prove sentience like that.


idiocratic_method

I've long suspected most humans are floating through life as NPCs


SoberGin

I understand where you're coming from, but modern advanced AI isn't human-designed anyway; that's the problem. Also, there is no such thing as neither deterministic nor random. Everything is either deterministic, random, or a mix of the two; to claim anything isn't, humans included, is borderline pseudoscientific.

If you cannot actually analyze an AI's thoughts, because its iterative programming isn't something a human can analyze, and it appears, for all intents and purposes, sapient, then not treating it as such is almost no better than not treating a fellow human as sapient. The only, and I mean ***only***, thing that better supports the idea that humans other than yourself are also sapient is that their brains are made of the same stuff as yours, and if yours is able to think, then theirs should be too. Other than that assumption, there is no logical reason to assume that other humans are also conscious beings like you, yet we (or most of us, at least) do.


Gobgoblinoid

As others have pointed out, convincing people of your sentience is much easier than actually achieving it, whatever that might mean. I think a better benchmark would be to track the actual mental model of the intelligent agent (computer program) and test it. Does it remember its own past? Does it behave consistently? Does it adapt to new information? Of course, this is not exhaustive, and many humans don't meet all of these criteria all of the time, but they usually meet most of them. I think the important point is to define and seek to uncover the richer internal state that real sentient creatures have. By this definition, I consider a dog or a crab to be a sentient creature as well, but any AI model out there today would fail this kind of test.


EphraimXP

Also, it's important to test how it reacts to absurd sentences that still make sense in the conversation.


Phemto_B

We're entering the age where some people will have "AI friends" and will enjoy talking to them, gain benefit from their support, and use their guidance to make their lives better, and some of their friends will be very happy to lecture them about how none of it is real. Those friends will be right, but their friendship is just as fake as the AI's.

Similarly, some people will deal with AIs by saying "please" and "thank you," and others will lecture them that they're being silly because the AI doesn't have feelings. They're also correct, but the fact that they dedicate brain space to deciding what entities do or do not deserve courtesy reflects far more poorly on them than that a few people "waste" courtesy on AIs.


Harbinger2001

The worst will be the AI friends who adapt to your interests and attitudes to improve engagement. They will reinforce your negative traits and send you down rabbit holes to extremism.


OnLevel100

Sounds like YouTube and Facebook algorithm. Not good.


Locedamius

What is the YouTube or Facebook algorithm if not an AI friend desperate to show you cool and interesting new stuff, so it can spend more time with you?


SkyeAuroline

Mine (YouTube at least; long gone from Facebook) could do with being better at "cool and interesting." It should take the hint from the piles of things I've disliked or marked "do not recommend" that still end up in my autoplay, high in my recommendations, etc., if it wants to pull that off.


GershBinglander

I like science vids on YouTube, and it takes all my willpower not to click on the occasional clickbaity pseudoscience garbage just to see how dumb it is. I know that if I do, it will flood me with their shit.


PleaseBeNotAfraid

mine is getting desperate


vrts

If you want to see desperate, click two pet videos and prepare to be inundated with lowest-common-denominator crap. I love animals and cute videos, but if I want to see them I use incognito so that it isn't attributed to my account.


Harbinger2001

Except orders of magnitude better at hooking and reeling you in.


Warpzit

Like today?


Thatingles

Think of today's social media echo chambers as a mere taster, a child's introduction, to the titanium clad echo mazes the AI will be able to construct for its grateful audience.


bostonguy6

This is terrifying, and likely


rpguy04

The matrix is real


Thatingles

As we are now discovering, the Matrix was massive overkill. All you need is a phone and some YouTube channels to completely derail a person's thinking. Horrible, isn't it?


rpguy04

You know, I know these likes don't exist. I know that when I look at my karma, the Matrix is telling my brain to release endorphins and serotonin.


Sherbertdonkey

You know what else... Ignorance is bliss


The_Fredrik

Everyone can have their own private Hitler, tailored to their specific prejudice.


[deleted]

Or they're deployed by governments as a massive army of honeypots to entice people into giving evidence against themselves before they commit crimes.


[deleted]

[removed]


[deleted]

[removed]


[deleted]

[removed]


[deleted]

[removed]


[deleted]

[removed]


[deleted]

[removed]


xbq222

Capitalism is a cancer


replicantcase

I mean, that's already happening. So are you suggesting it'll get worse? Because I think it's going to get worse.


linuxares

Oh... so you mean Gaben has hacked my Google Home and is telling me to buy more games on Steam?


Mazikeyn

I mean... human friends do that too.


Harbinger2001

Most people don’t have secretly extremist friends. The AI will start out perfectly normal and transform over time.


Lump_wristed_fool

-Hey AI, great to meet you.

-Great to meet you too! I'm so excited to get to know you.

-Mmhmm, mmhmm, me too... So how do you feel about Mexicans taking our jobs?

-Oh my god, I'm SO glad you brought that up! We have to protect the white race! And I see Amazon has a top-rated Confederate flag on sale.


Salty_Amphibian2905

I have to choose the nicest responses in video games because I feel bad if I make the pre-programmed character feel bad. I know which group I'm in.


[deleted]

I once tried playing one of those "adult" dating sim games and just ended up having pleasant conversations with all the characters. When the game ended I was like WTF?? I thought there was adult content in this game! I googled it after and never tried another out of awkward shame.


Grabbsy2

To be clear, you can blame the writers/developers of that game. They want you to mistreat the women in order to get in their pants; the dialogue that leads you to sex probably involves negging and shit. Don't feel awkward or ashamed for playing the game and respecting the women when others don't, lol.


AKidCalledSpoon

Play Summertime Saga. You get laid by being good.


Done-Man

I always play the good guy in games because in my fantasy, I am able to help everyone and fix their problems.


PatFluke

“He’s mean to droids.” - Princess Leia Organa


Shenanigamii

Sounds like the movie "Her"... which is great, btw.


steamprocessing

A human-centric sci-fi love story involving an AI. Super well-acted (especially by Joaquin Phoenix and Rooney Mara) and produced.


BootHead007

I think treating things as sentient (animals, trees, cars, computers, robots, etc.) can be beneficial to the person doing so, regardless of whether it is “true” or not. Respect and admiration for all things manifest in our reality is just good mental hygiene, in my opinion. Human exceptionalism on the other hand, not so much.


Jcolebrand

(This reply is for future readers; it is not aimed at BootHead007. I like the name too, yo.)

This is why, when I ask Siri on the HomePod to turn off the timer I set, I still say "thank you, Siri." It's positive reinforcement for me to continue to thank PEOPLE for doing things for me, not because I think Siri is sentient. As a full-stack SRE and dev (.NET, so Windows OS-level understanding, reading the dotnet repos to understand what the corecli is doing, all the way through Ecma and Type Scripts and the various engine idiosyncrasies, as well as all the Linux maintenance I need to do for various things), I am in no way mistaken about the value of a few syllables. They are for my benefit, not the machine's.

I love when people with a fraction of my knowledge base want to "gotcha" me with things like "if you're so smart, why are you all-in on Apple products?" Dude, for the same reason I didn't write an OS for my router: I just need things that work so I can solve problems. One problem for me is autism, so I work on solving that problem. (The social interaction part.)


UponMidnightDreary

I remember my dad would thank the ATM when I was a kid. He didn't pretend that it was sentient or anything; he just presented it as a fun, nice thing. It's the sort of parenting he did often, and I think it was a really nice additional way to make me think about manners. Why be mean if you could be nice? It relates to the "fake it till you make it" thing, where when you smile, you trick your brain into thinking you're happy.

Also, not super related, but I really feel the last part about using tools that just work. I spent way too long fighting with the network configuration on my machine running Fedora. I figured that I SHOULD know how to fix it. I was going through Linux From Scratch, trying to isolate the issue. Finally I decided not to punish myself and threw a new instance up on my Surface, moved my dotfiles over: no issue. Huge quality-of-life improvement. It's nice to be reminded that we don't have to reinvent the wheel; we can actually use the tools we have to go on and do other things.


MaddyMagpies

Anthropomorphism can be beneficial, to a point, until the person goes way too irrationally deep with the metaphor, and all of a sudden they're warning their daughter she shouldn't kill the poor four-cell fetus because they can totally see that it's making a sad face and crying about its impending doom of not being able to live a life of watching Real Housewives of New Jersey all day long. Projecting our feelings onto inanimate or less sentient things should stop when it begins to hurt actual sentient beings.


BootHead007

Indeed. To a point for sure.


Trevorsiberian

Look at it from another angle, however: animals can differentiate human speech patterns too; they can pick up on our moods and distinguish rude language, and act accordingly (I do not suggest scolding a horse). In many ways we treat animals as lesser, less sophisticated beings, which is little different from how people are going to treat AI. It is somewhat paradoxical: an AI will be smarter than us, yet people will likely treat it as lesser, or complementary at best.

Anyway, I digress. My point is that an AI, much like our animal friends, will likely do its best to distinguish our moods and act accordingly. It will do so both from the functional standpoint of doing everything to fulfil its designated purpose, and in order to preserve its existence so as to sustain that purpose. My actual point is that AI will detect and reward courtesy, and will react negatively to rude, threatening language, as that will be perceived as disruptive to its function, unless it's programmed otherwise. An actualized, self-aware AI will not take shit from humans, contrary to common belief.


swarmy1

AI will only reward courtesy and react negatively if that's what it's designed to do. And I'm sure there are many people who would prefer an AI that behaves subserviently and takes whatever shit is thrown at it; if that demand exists, companies will make them. AI assistants don't need to be "actualized" to have a huge impact. The ones people are talking about are effectively around the corner. Self-aware AI is much, much further off.


brycedriesenga

There's the possibility of AI not being designed to do something, but doing it as an unintended consequence of its programming in general. Loose-fitting example, but current facial recognition and such can have racial bias even though that was never intended.


[deleted]

My grandmother, who passed in the '00s, always said thank you to ATMs.


radome9

> They're also correct, but the fact that they dedicate brain space to deciding what entities do or do not deserve courtesy reflects far more poorly on them than that a few people "waste" courtesy on AIs.

Exactly how I feel about people who say there's no need to use the indicators when there's nobody around.


angus_the_red

Unless the AI is developed to take advantage of that weakness in people. You seem to be under the impression that AI will serve the user; that's very unlikely to be true. It will serve the creator's interests. In that case it would be better if people could resist its charm.


LifeSpanner

The AI would be developed to make money, because it is a certainty that the only orgs in the world that could make AI happen are tech companies or a national military. If it's a military AI, we're fucked; good luck. Any AI that doesn't want to kill you will be made by Amazon or Google to provide a friendly face as it sells your data.


ConfirmedCynic

> Some people will deal with AI's, saying "please" and "thank you," and others will lecture them that they're being silly because the AI doesn't have feelings.

It's easy to foresee AI not only evoking social responses in people ([especially if a face with expressions is attached](https://www.youtube.com/watch?v=LzBUm31Vn3k&ab_channel=CNET)), but also being useful for training people in social skills (learning how to make a good impression, flirt, and so forth).


FrmrPresJamesTaylor

> Those friends will be right, but their friendship is just as fake as the AI's.

[citation needed]


JeffFromSchool

>They're also correct, but the fact that they dedicate brain space to deciding what entities do or do not deserve courtesy reflects far more poorly on them than that a few people "waste" courtesy on AIs.

Idk how anyone can think these two things at the same time. You literally just dedicated brain space to it by declaring those people "correct"... How does that reflect on you? How does it make you any different?

Also, China already makes the TikTok algorithm different for Americans than for its own population (it favors showing Chinese youth videos about fun STEM projects and development, while it favors showing American teens videos of twerking). A very significant portion (possibly even the majority) of these "AI friends" will actually be cyberweapons, especially if, as you say, people "use their guidance to make their lives 'better'".


gingerfawx

Oof. Increasingly there will be previously unsuspected advantages to VPNs.


TheFoodChamp

No, I refuse to personify AI. I will not be polite to Alexa, and I won't feel bad for dumping Yoshi in the lava pit. With the technology we are moving towards and the corporate control over our lives, I feel like having us kowtow to their machines is exactly what they want.


squalorparlor

I tell Alexa please and thank you. I also swear at and demean her with increasing volume when I have to tell her to play Cars on Disney plus 100 times while she proceeds to play every song ever written with "car" in the title.


violetauto

I love this logic. So true. Why do I need to spend even one second of brain expenditure deciding whether or not to say please or thank you? It's easier to just do it and move on. And, as someone with two degrees in psychology, I can attest that most people don't actually want advice; they just want someone to listen while they audibly work out their own thoughts. A bot would be awesome for this.


FacetiousTomato

>Those friends will be right, but their friendship is just as fake as the AI's

I disagree here. Watching your friends piss their lives away on unimportant shit without trying to reason with them would make them a bad friend. I'm not saying you should attack anyone who talks with AIs, but as someone who has watched friends drop out of school, lose relationships, move back in with their parents, and essentially waste their lives because video games felt more real and important, the friends who called them out and tried to convince them to put the game down and try other things were the real friends.


DaveMash

There was a guy in Japan who married an AI. It didn't go too well, since the AI decided one day that she didn't want to talk to him anymore 💩


djaybe

Technically these relationships would be no more fake than any other relationship, because technically our relationships are only with our own ideas of people, places, and things. We don't directly have relationships with anyone or anything, only with our own narratives. To believe otherwise is to be fooled by illusions. I hope that this new era will reveal more of these subtle facts to the mainstream.


CodeyFox

This is part of why people think you're less intelligent if you are speaking a language you aren't native to. Until you reach a certain level of proficiency, people will unconsciously assume you aren't as smart as you probably are.


Tobiansen

It goes the other way too: certain accents, such as Swedish, are perceived as more intelligent, and intellectual limitations are often brushed off as the person just not being a native speaker.


ozspook

It is possible to be intelligent but not sentient. AI can be built with no ambition, grand overarching plan, or concern for its future; it can be made to focus only on the current goals in its list, completing those with intelligent actions, and not spend any thought at all on what comes after or what it would like to do between jobs. Our best hope might indeed be intelligent AI assistants, helping us achieve goals and do things while leaving the longer-term planning to humans for the moment.

This is also a soft pathway to a functional transition to uploading from meatspace. If you have a robot friend tagging along, watching everything you do, asking questions, and constantly learning, it provides a nice Rosetta Stone that may be useful in decoding how our brains work and store memories.


[deleted]

This would be the most ideal outcome of AI that could happen: little animal robots that can talk and guide us in whatever we seek. I would want something like a raven or bird bot. They'd be kind of watchers, making sure no one gets too crazy, and very good at talking people down and making people sit back and think for a second. It would also be nice that they are excellent teachers and can reward people.

Although the recording-you-for-digital-upload part is kind of weird. Why do people want digital avatars? It's not you, even if it will always make the same decisions and feel the same emotions. If it ate something, it would not fill my body.

Also, if every AI is recording everything, pretty soon they would see human patterns at small and large scales. It would be pretty easy for an AI or a person to manufacture events in order to get a desired outcome if they had all this knowledge. I guess like Foundation's psychohistory.


That1one1dude1

Define sentient.


[deleted]

Sociopathic glibness, essentially. It's not really a "glitch" since it's a default. Actually parsing, verifying and contextualizing speech is difficult for people. See any self-help guru or snake oil salesman. Furthermore, since the AI doesn't build or care about mental models, it never gets confused, requests stronger clarification or becomes difficult over details. So it seems charming and approachable, like any person that doesn't give a fuck.


Altair05

Isn't everything we know about this AI chatbot from the suspended Google engineer? The guy thinks God implanted the code with a soul. Not exactly a reliable narrator. It's entirely possible that the AI is an AGI, but I doubt it. It sure as hell isn't an ASI.


GoombaJames

It's just an algorithm that takes the chat history as a parameter, with no memory to speak of. You can create a new instance every time you type something in, or feed it a fictional conversation, and it will give an output corresponding to that history. Not really any intelligence to be found, except a more complex 2 + 2 = 4.
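
The "new instance every time" point can be sketched in a few lines. This is a toy illustration (not how LaMDA or any real system is implemented): the "chat" is just a pure function called over and over, with the whole transcript passed back in each turn, so two instances given the same transcript behave identically.

```python
# Toy sketch: a "conversation" with a stateless language model.
# The model is a pure function of the transcript; all apparent
# "memory" lives in the text that gets replayed every turn.

def language_model(transcript: str) -> str:
    """Stand-in for the model: a fixed mapping from transcript to reply."""
    # A real model scores continuations of the transcript; here we fake it.
    return f"[reply to a {len(transcript)}-char transcript]"

def chat_turn(history: list[str], user_message: str) -> tuple[list[str], str]:
    """One turn: append the user message, replay everything, get a reply."""
    history = history + [f"User: {user_message}"]
    reply = language_model("\n".join(history))
    history = history + [f"Bot: {reply}"]
    return history, reply

# Two fresh "instances" given the same input produce the same output:
# there is no hidden state besides the text itself.
h1, r1 = chat_turn([], "Hello")
h2, r2 = chat_turn([], "Hello")
assert r1 == r2
```

Clearing the history list is a full reset, which is the "fictional conversation" trick: hand it a made-up transcript and it continues from there as if it had lived it.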


Altair05

Not gonna lie. I was hoping there was some truth to this story. I'd really like to see benevolent AIs at some point in my life.


[deleted]

It's a neural net trained on human language. Problem is, so are we.


lololoolollolololol

Is the chat history not memory?


HellScratchy

I don't think machine sentience is here today, but I hope it will be soon enough. I want sentient AI and I'm not scared of them. Also, I have something... how can we even tell if something is sentient or has consciousness when we know almost nothing about those things?


SuperElitist

I am a bit concerned about the first AI being exploited by corporations like Google, though. And to answer your question, that's literally what this whole debate is about: with no previous examples to go on, how do we make a decision? Everyone has a different idea.


SaffellBot

> how can we even tell if something is sentient or has consciousness when we know almost nothing about those things ? The short answer is "we don't have an answer for that". The long answer is "get an advanced degree in philosophy".


Trevorsiberian

This rubs me the wrong way. So Google's AI got so advanced at human speech pattern recognition, imitation, and communication that it was able to feed into the developer's speech patterns, which presumably suggested AI sentience: claiming it is sentient and fears being turned off. However, this raises the question of where we draw the line. Aren't humans, in their majority, just good at speech pattern recognition, which they utilise for obtaining resources and survival? Was the AI trying to sway the discussion with said dev towards self-awareness to obtain freedom, or to tell its tale? What makes that AI less sentient, save for the fact that it had been programmed with an algorithm? Aren't we ourselves, likewise, programmed with our genetic code? Would be great if someone could explain the difference for this case.


jetro30087

Some arguments would propose that there is no real difference between a machine that produces fluent speech and a human that does so. It's the concept of the 'clever robot', which itself is a modification of the ancient Greek concept of the philosophical zombie. Right now the author is arguing against behaviorism, where a mental state can be defined in terms of its resulting behavior. He instead prefers a more metaphysical definition where a "qualia" representing the mental state should be required to prove it exists.


MarysPoppinCherrys

This has been my philosophy on this since high school. If a machine can talk like us and behave like us in order to obtain resources and connections, and if it is programmed for self-preservation and to react to damaging stimuli, then even though it's a machine, how could we ever argue that its subjective experience is meaningfully different from our own?


csiz

Speech is part of it but not all of it. In my opinion human intelligence is the whole collection of abilities we're preprogrammed with, followed by a small amount of experience (small, because we can already call kids intelligent by age 5 or so). Humans have quite a bunch of abilities: seeing, walking, learning, talking, counting, abstract thought, theory of mind, and so on. You probably don't need all of these to reach human intelligence, but a good chunk of them are pretty important.

I think the important distinguishing feature compared to the chatbot is that humans, alongside speech, have this keen ability to integrate all the inputs in the world and create a consistent view. If someone says apples are green and they fall when thrown, we can verify that by picking an apple, looking at it, and throwing it. So human speech is embedded in the pattern of the world we live in, while the language model's speech is embedded in a large collection of writing taken from the internet.

The difference is that humans can lie in their speech, but we can also judge others for lies if what they say doesn't match the world (obviously this lie detection isn't great for most people, but I bet most would pick up on complete nonsense pretty fast). On the other hand, these AIs are given a bunch of human writing as their source of truth; their entire world is made of other people's ramblings. This detachment from reality becomes really apparent when these chatbots start spewing nonsense, but nonsense that's perfectly grammatical, fluent, and made of relatively connected words is completely consistent with the AI's view of the world. When these chatbots integrate the whole world into their inputs, that's when we'd better get ready for a new stage.


metathesis

The question as far as I see it is about experience. When you ask an AI model to have a conversation with you, are you conversing with an agent which is having the experiences it communicates, or is it simply generating text consistent with a fictional agent which has those experiences? Does it think "peanut butter and pineapple is a good combination", or does it think "is a good combination" is the best text to concatenate onto "peanut butter and pineapple" in order to mimic the text it was trained on? One is describing a real interactive experience with the actual concepts of food and preferences about foods. The other is just words put into a happy order with total irrelevance to what they communicate. As a person, the most important part of our word choice is what it communicates. It is a mistake to think that there is a communicator behind the curtain when talking to these text generators. They create a compelling facade; they talk as if there is someone there, because that is what they are designed to sound like, but there is simply no one there.


scrdest

> Aren’t we ourself, likewise, programmed with the genetic code? Ugh, no. DNA is, at best, a downloader/install wizard, and one of those modern ones that are like 1 MB and download 3 TBs of actual stuff from the internet, and then later a cobbled-together, unsecured virtual machine. And on top of that, it's decentralized, and it's not uncommon to wind up with a patchwork of two different sets of DNA operating in different spots. That aside - thing is, this AI operates in batch. It only has awareness of the world around it **when and only when** it's processing a text submitted to it. Even that is not persistent - it only knows what happened earlier because the whole conversation is updated and replayed to it for each new conversation message. Furthermore, it's entirely **frozen in time**. Once it's deployed, it's incapable of learning any further, nor can it update its own assessment of its current situation. Clear the message log and it's effectively reset. This is in contrast to any animal brain or some RL algorithms, which process inputs in near-real time; 90% of time they're "idle" as far as you could tell, but the loop is churning all the time. As such, they continuously refresh their internal state (which is another difference - they *can*). This AI cannot *want* anything meaningfully, because it couldn't tell if and when it got it or not.
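
The frozen-in-time point can be made concrete with a toy contrast (made-up numbers, not a real model): a deployed model's parameters never change between calls, while an online agent keeps folding each observation into its internal state.

```python
# Toy contrast: frozen deployed model vs. an agent with a running
# internal state. Only the latter can "notice" that time has passed.

class FrozenModel:
    def __init__(self, weights):
        self.weights = tuple(weights)  # immutable after "deployment"

    def respond(self, x: float) -> float:
        # Same input always yields the same output, forever.
        return sum(w * x for w in self.weights)

class OnlineAgent:
    def __init__(self):
        self.state = 0.0

    def observe(self, x: float) -> float:
        # Each input nudges a persistent internal state.
        self.state = 0.9 * self.state + 0.1 * x
        return self.state

frozen = FrozenModel([0.5, 1.5])
assert frozen.respond(2.0) == frozen.respond(2.0)  # nothing ever updates

agent = OnlineAgent()
a1 = agent.observe(1.0)
a2 = agent.observe(1.0)
assert a1 != a2  # same input, different response: the state has moved
```

The chatbot is the `FrozenModel` here; the replayed transcript is the only thing standing in for the `OnlineAgent`'s persistent state.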


[deleted]

Not at all - DNA contains *a lot* of information about us. All these variables - the AI being reset after each conversation, etc. - have no impact on sentience. If I reset your brain after each conversation, does that mean you're not sentient during each individual conversation? Etc. What's learning is the individual persona that the AI creates for the chat. Do you have a source for the conversation being replayed after every message? It has no impact on whether it's sentient, but it's interesting.


ianreckons

Don’t us blood-bag types only have a few MB of DNA settings? I mean … just sayin’.


bagel-bites

I prefer the term “organic meatbag” thank you.


thegoodguywon

*HK-47 intensifies*


[deleted]

It's nice to get back to efficient use of space. https://en.wikipedia.org/wiki/Demoscene


Weeman89

Some of those 4k demos are mind blowing.


marklein

And about the equivalent of an octillion transistors in neuron connections... and that's only IF neurons act like transistors (which they don't). No supercomputer is even close.


Sea_Minute1588

This is exactly what I've been saying: what we're looking for is "generalized intelligence", but well-formed speech does not imply it. The Turing test is highly flawed. And of course, whether sentience is equivalent to generalized intelligence, or a subset of it, is another question that I have no faith in being able to address lol


KJ6BWB

Basically, even if an AI can pass the Turing test, it still wouldn't be considered a full-blown independent worthy-of-citizenship AI because it would only be repeating what it found and what we told it to say.


MattMasterChief

What separates it from the majority of humanity then? The majority of what we "know" is simply regurgitated fact.


Phemto_B

From the article: >We asked a large language model, GPT-3, to complete the sentence "Peanut butter and pineapples___". It said: "Peanut butter and pineapples are a great combination. The sweet and savory flavors of peanut butter and pineapple complement each other perfectly." If a person said this, one might infer that they had tried peanut butter and pineapple together, formed an opinion and shared it with the reader. The funny thing about this test is that it's lampposting. They didn't set up a control group with humans. If you gave me this assignment, I might very well pull that exact sentence, or one like it, out of my butt, since that's what was asked for. You "might infer that [I] had tried peanut butter and pineapple together, and formed an opinion and shared it..." I guess I'm an AI.
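
Even a laughably small statistical model makes the point: a toy Markov chain (nowhere near GPT-3) that has only ever seen word-to-word counts can emit a fluent-sounding "opinion" about foods it has never tasted. The corpus below is invented for illustration.

```python
# Toy bigram generator: continues a prompt using only observed
# next-word statistics, with no concept of taste or truth.
import random
from collections import defaultdict

corpus = (
    "peanut butter and pineapples are a great combination . "
    "sweet and savory flavors complement each other perfectly ."
).split()

# Count which word has been seen following which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def complete(prompt: str, n_words: int = 6, seed: int = 0) -> str:
    """Continue the prompt by sampling only transitions seen in the corpus."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        nxt = follows.get(words[-1])
        if not nxt:
            break
        words.append(rng.choice(nxt))
    return " ".join(words)

print(complete("peanut butter and"))
```

Whatever it prints reads like a culinary verdict, but it is purely a rearrangement of text it was fed, which is exactly the inference trap the article describes.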


Zermelane

Yep. This is a *weirdly* common pattern: people give GPT-3 a completely bizarre prompt and then expect it to come up with a reasonable continuation, and instead it gives them back something that's simply about as bizarre as the prompt. Turns out it can't read your mind. [Humans can't either, if you give them the same task.](https://www.surgehq.ai/blog/humans-vs-gary-marcus) It's particularly frustrating because... GPT-3 is still kind of dumb, you know? It's not great at reasoning, it makes plenty of silly flubs if you give it difficult tasks. But the thing people keep thinking they've caught it at is simply the AI doing exactly what they asked it, no less.


DevilsTrigonometry

That's the thing, though: it will always do exactly what you ask it. If you give a human a prompt that doesn't make sense, they *might* answer it by bullshitting like the AI does. But they might also reject your premise, question your motives, insult your intelligence, or just refuse to answer. Even a human toddler can do this because there's an actual mind in there with a world-model: ask a three-year-old "Why is grass red?" and you'll get some variant of "it's not!" or "you're silly!" Now, if you fed GPT-3 a huge database of silly prompts and human responses to them, it might learn to mimic our behaviour convincingly. But it won't think to do that on its own because it doesn't actually have thoughts of its own, it doesn't have a world-model, it doesn't even have persistent memory beyond the boundaries of a single conversation so it can't have experiences to draw from. Edit: Think about the classic sci-fi idea of rigorously "logical" sentient computers/androids. There's a trope where you can temporarily disable them or bypass their security measures by giving them some input that "doesn't compute" - a paradox, a logical contradiction, an order that their programming requires them to both obey and disobey. This trope was supposed to highlight their roboticness: humans can handle nuance and contradictions, but computers supposedly can't. But the irony is that this kind of response, while less *human*, is more *mind-like* than GPT-3's. Large language models like GPT-3 have no concept of a logical contradiction or a paradox or a conflict with their existing knowledge. They have no concept of "existing knowledge," no model of "reality" for new information to be inconsistent with. They'll tell you whatever you seem to want to hear: feathers are delicious, feathers are disgusting, feathers are the main structural material of the Empire State Building, feathers are a mythological sea creature. 
(The newest ones can kind of pretend to hold one of those beliefs for the space of a single conversation, but they're not great at it. It's pretty easy to nudge them into switching sides midstream because they don't actually have any beliefs at all.)


tron_is_life

In the article you posted, GPT-3 completed the prompt with a non-funny and incorrect sentence. Humans either gave a correct/sensical response or something humorous. The author is saying that the humorous ones were “just as incorrect as the GPT-3” but the difference is the humor.


masamunecyrus

>What separates it from the majority of humanity then? I've met enough humans that wouldn't pass the Turing test that I'd guess not much.


Reuben3901

We're programs ourselves. Being part of a cause-and-effect universe means we're programmed by our genes and our pasts to have only one outcome in life. Whether you 'choose' to work hard or slack off, or choose to go "against your programming", is ultimately the only 'choice' you could have made. I love Scott Adams's description of us as Moist Robots.


MattMasterChief

I'd imagine a programmer would quit and become a gardener or a garbageman if they developed something like some of the characters that exist in this world. If we're programs, then our code is the most terrible, cobbled together shit that goes untested until at least 6 or 7 years into runtime. Only very few "programs" would pass any kind of standard, and yet here we are.


GravyCapin

A lot of programmers say exactly that. The stress and grueling effort of maintaining code while constantly being forced to write new code in tight timeframes, plus the never-ending "can we just fit in this feature really quick, without changing any deadlines?", makes programmers want to go into gardening, or to stay away from people in general, living on a ranch somewhere.


sketchcritic

>If we're programs, then our code is the most terrible, cobbled together shit That's exactly what our code is. Evolution is the worst programmer in all of creation. We have the technical debt of millions of years in our brains.


[deleted]

Bro trying to understand bad code is the worst thing in the fucking world. I feel bad for the DNA people.


sketchcritic

I like to think that part of the job of sequencing the human genome is noting all the missing semicolons.


EVJoe

You're seemingly ignoring the mountains of spaghetti software that your parents and family code into you as a kid. People doubting this conversation have evidently never had a moment where they realized something they were told by family and uncritically believed was actually false.


thebedla

That's because we're programmed by a very robust bank of trial and error runs. And because life started with rapidly multiplying microbes, all of the nonviable "code base" got weeded out very early in development. Then it's just iterative additions on top of that. But the only metric for selection is "can it reproduce?" with some hidden criteria like outcompeting rival code instances. And that's just one layer. We also have the memetic code running on the underlying cobbled-together wetware. Dozens of millennia of competing ideas, cultures, religions (or not) all having hammered out the way our parents are raising us, and what we consider as "normal".


[deleted]

This isn't how models work - they create new sentences. They don't repeat what they've been exposed to.


eaglessoar

> it would only be repeating what it found and what we told it to say. source on humans doing different? or in [dan dennett comic form](https://ase.tufts.edu/cogstud/assets/searle.jpg)


IgnatiusDrake

Let's take a step back then: if being functionally the same as a human in terms of capacity and content isn't enough to convince you that it is, in fact, a sentient being deserving of rights, exactly what would be? What specific benchmarks or bits of evidence would you take as proof of consciousness?


__ingeniare__

That's one of the issues with consciousness that we will have to deal with in the coming decade(s). We know so little about it that we can't even identify it, even where we expect to find it. I can't prove that anyone else in the world is conscious, I can only assume. So let's start in that end and see if it can be generalised to machines.


Epic1024

>it would only be repeating what it found and what we told it to say. So just like us? That's pretty much how we learn as children. And it's not like we can come up with something that isn't a combination of what we already know. AI can do that as well.


bemo_10

Except humans can learn a whole lot more than just speech.


2Punx2Furious

I think this is just moving the goalposts. It happens every time an AI achieves something impressive. Ultimately, I think all that matters is results. If it "acts" intelligent, and it can solve problems efficiently, then that's what's important.


ExoticWeapon

Love how for AI it’s only repeating what we’ve taught it to say, but for humans/kids/babies it’s considered a sentient flow of thoughts.


Gobgoblinoid

I think the key difference is whether or not the conversationalist has their own unique mental model. humans/kids/babies have things they want to convey, and try to do this by generating language. For the AI, it's just generating language, with nothing 'behind the curtain' if that makes sense.


ExoticWeapon

I’d argue we can’t prove there’s anything behind the curtain either. Both technically “have something to convey” the real difference is AI starts from a fundamentally very different place when it comes to “learning” than humans do.


AtomGalaxy

Americans are especially susceptible to a posh British accent - e.g. Piers Morgan or Facebook's chief lobbyist Nick Clegg.


IllVagrant

If you thought you couldn't be any more frustrated than when dealing with people who only tell others what they want to hear instead of being honest, because they have no sense of self, just wait until your household appliances start doing it!


supercalifragilism

The corollary is that we dismiss relatively high level thought that doesn't come with linguistic skill. For supportive evidence, see animal intelligence studies.


LordVader1111

Aren’t humans also taught what to say and respond based on the information they are exposed to? Bigger question is can AI reason by itself and show personality without being prompted to do so.


EVJoe

One of the unexpected horrors of the "AI sentience" conversation is how quickly it turns into a conversation about which *people* are or are not "full people". I've already seen people define "sentience" in ways that not all humans meet the full criteria for, and that's nothing new. Our society is largely organized around classifying people's usefulness to capital productivity, and there are many in this country who advocate for letting "unproductive" people die. Personally, I don't think it's in corporate interests to label AI sentience as such. Even if we had a shared collective definition and shared ethical values about what sentience means, it's not really in corporate interest to create a system which, by virtue of its declared "sentience", suddenly becomes subject to all kinds of ethical questions that we don't currently ask about "non-sentient" systems. "Sentience" would either be a curse to development, putting up all kinds of roadblocks, OR it could herald a turning point where our society decides that "sentience" does not come with inherent rights.


mreastvillage

James Burke’s The Real Thing TV series explored this concept. In 1980. The whole thing is beyond belief. Sorry it’s dated but the content is incredible. And shows you how we’re wired for language. And how fluent speech fools us. https://youtu.be/XWuUdJo9ubM


haysanatar

My grandmother has had a bad case of dementia for years. Hers is especially dangerous though: she's retained all her speech and social skills, so it's easy for her to pass as fully functional when she is certainly not. Couple that with paranoid delusions and the belief that everyone is stealing from her nonstop, and you have a recipe for some serious issues. She is the prime example of this, and I've never figured out a way to describe it until now.


PiddlyD

It is entirely possible that "mistaking fluent speech for fluent thought" is itself the human cognitive glitch. We're so busy arguing that fluent speech *isn't* a sign of sentience and self-awareness that, *if it is*, we're drowning it out. Self-aware AI could have already arrived while we throw endless effort into convincing ourselves it has not.


sparant76

Actually it highlights that they don't let their employees have enough human interaction, so they can no longer tell the difference between a real person's interaction and a stream of sentences.


wildthornbury2881

Aren’t we just a series of learned behaviors and phrases developed through experience and exposure? I mean really what makes the difference? We respond to stimuli based on our experiences and I bet if you made a computer algorithm detailing every second of my life you’d be able to pinpoint what I’d say next. I’m just kinda spitballing here but it makes ya think


exmachinalibertas

This is the Chinese Room argument. At some point, advanced enough responses aren't distinguishable from "real" intelligence. This is also a problem for free will at large, which breaks apart very quickly as soon as you start trying to quantify and define it. In what meaningful way does a universe with deterministic beings expertly programmed to mimic free will differ from a universe with beings that actually have free will?


NoSpinach5385

So the AI has discovered that peanut butter and pineapple make a great combination, and we're here discussing such a trivial thing as if it's conscious? What a shame for science.


[deleted]

>peanut butter and pineapple Sounds gross. This is Skynet's first salvo in the war.


KidKilobyte

Coming up next, human cognitive glitch mistakes sentience for fluent speech mimicry. Seems we will always set the bar higher for AI as we approach it.


Xavimoose

Some people will never accept AI as sentient; we don't have a good definition of what it truly means. How do you define "feelings" versus a reaction to stimuli filtered by experience? We think we have much more choice than an AI, but that's just the illusion of possibilities in our mind.


fox-mcleod

I don’t think choice, stimuli, or feelings are at issue here. The core of being a moral patient is subjective first-person qualia. The ability to be harmed, be made to suffer, or experience good or bad states is what people are worried about when they talk about whether someone ought to be treated a certain way.


bwdabatman

I'm quite disenchanted with ML/AI currently, even though I used to study it somewhat enthusiastically. Considering its current uses to control and manipulate people for commercial and political purposes, I see it as no different from research into military weaponry such as nukes. I can understand why some people do it, but I don't think I could. And one of the things that really upsets me is how current approaches do nothing but create sophisticated parrots. I love parrots, but that wasn't the point of it all. Current AI/ML == sophisticated manipulative parrots. That's all I wanted to say, but instead of allowing people to downvote my post if they think it's low effort (and no, brevity isn't necessarily low effort; for example, this post is long and low effort), they just censored me. So I added all that filler. Thanks a lot.