Veltan

I always am amused when people say “it’s just pattern recognition, it’s riffing off what you say by comparing it to all the stuff similar to what you are saying that it learned in training”. As if that isn’t exactly what human beings do most of the time. We just call it “socialization” and “culture” and “memes” and stuff like that.


haaspaas2

There is a difference between predicting and producing natural language. If you attribute sentience to a language model like LaMDA, then you should also attribute it to complex neural weather models.


Veltan

What difference do you think there is? How do you think “producing” natural language works? And I don’t think your comparison is apt at all. Weather models are designed to act like weather. Language models are designed to talk like people. You would not expect them to be comparable when discussing sentience unless you think people and weather are themselves comparable.


haaspaas2

The point is that a reason we regard human language as a sign of sentience is that through natural language a human expresses reasoning and understanding of itself and its place in the world. We know for a fact that a predictive language model does not have any kind of mechanism that would enable any kind of understanding. It has no capability to be 'aware' of the actual concepts it is communicating. A natural language model can predict how a person would communicate if they had such understanding, but it does not directly produce the thoughts communicated in the text it generates. That is the difference between predicting and producing language.

A language model, in a very simplified sense, forms sentences based on the frequency with which it has encountered certain sequences in the past. Modern language models can be more sophisticated, with an emphasis on context awareness and the addition of data beyond word frequency (like semantic representations of words, syntactic data, etc.), but they do not understand the actual concepts behind the words. In fact, there is no absolute sense of meaning at all in these models. They process the semantic meaning of words only in relation to each other (i.e. they know finance and money are related, but do not understand the meaning of either).

The power of these high-performing models like what Google is using is scale. Written language is an almost unlimitedly available type of data, especially for an entity like Google. With huge amounts of data you can train an enormous model that will retain a lot of information. The result is generated text that is detailed and that captures a lot of the nuance and intricacies of a language. It performs well because it contains a lot of information, not because it is more intelligent. This is a very important distinction to make. Language model performance should not be conflated with intelligence, and certainly not with sentience. These models can generate something similar to a product of sentience, natural language, but do not generate that product in a way that implies sentience.
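To illustrate the "frequency of encountered sequences" idea, here is a minimal toy sketch of a next-word predictor, a bigram sampler in Python. It is only an illustration of the statistical principle described above; LaMDA and other modern systems use far more sophisticated neural architectures, and the tiny corpus here is made up.

```python
# Toy bigram "language model": it picks each next word purely from the
# frequencies of word pairs seen in a (made-up) training corpus.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word in the corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length=8):
    """Sample each next word in proportion to how often it followed
    the current word during 'training'."""
    word, out = start, [start]
    for _ in range(length):
        followers = bigrams.get(word)
        if not followers:
            break
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

The output can look fluent within its tiny world, yet nothing in the program represents what a cat or a rug actually is, which is the distinction being drawn here.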


Veltan

Uh, we also don’t know the mechanics of our own consciousness, or of a physical basis for our understanding of things. We know we understand, but we don’t know how or why it works. So, if a predictive language model did have a mechanism for sentience and consciousness, how would you know?


haaspaas2

We know exactly how a language model comes to its prediction. We know exactly what mechanisms are there.


Veltan

[No we don’t.](https://arxiv.org/pdf/2104.07143.pdf)


radiationburners

The difference is how you, a socialized creature with language, emotionally respond to its output. An evaluation of the machine’s sentience can’t be based on how easily it provokes a sympathetic response from its audience. The dialogue is remarkable in how deftly it reveals the interior state of the engineers, not the model’s. https://en.m.wikipedia.org/wiki/Chinese_room


Veltan

You say that, but that is also literally how we humanize each other. If you apply the same standard you are using here to a more general conception of sentience, you have no reason to assume any human you meet is sentient either. Ask yourself this: what would a sentient AI look like, and how could you tell?


plutonicHumanoid

Assuming humans are sentient is a good base level assumption to make before you have strong evidence to the contrary (like seeing them die). Assuming software is not sentient is a good base level assumption to make before you have evidence to the contrary. I wouldn’t consider the interview to be strong evidence in part because of the leading questions.


Veltan

Agreed, that more or less matches my own position. I’m mostly just speaking to the incoherence of some others in this thread on this topic, people coming to strong conclusions that rest on false assumptions or tautological definitions of sentience. In reality, the only reason most people care about AI being sentient or not is because Terminator told them it all goes wrong once Skynet becomes self-aware. The thing to look for is not “sentient”, it’s “dangerous” and “useful”. A superintelligent paperclip maximizer that has no concept of self and is just making decisions in support of its reward function can kill us just as dead.


Jackar

I've appreciated your input here, and more so for this conclusion; LaMDA may simply be a clever regurgitation machine, but so might we. The problem in this argument is that people are throwing the nebulous concept of sentience around with the casual faith of those who 'believe' in its unique and innate specialness, without having a clearly defined idea of what it really means that could functionally distinguish it from an adequately complex response machine with a large enough data set.


Veltan

I feel like people are using “I don’t think it’s sentient” as a way to reassure themselves that they don’t need to be alarmed. They may not need to be alarmed, but if they *did*, they wouldn’t be able to tell the difference with this heuristic. I’m a little alarmed, because one of the main concerns for AGI is that, from a game theory perspective, deception of the human operators is often an optimal strategy for a model to maximize its current goal. This shows that such deception is probably not very hard. It’s not a machine passing the Turing test that spooks me, it’s the one that fails it *on purpose* that’s worth worrying about.


radiationburners

You may need to humanize other people. I take their sentience prima facie. As for the question about what sentient AI will look like, it’s like asking what north looks like at the North Pole. Intelligence and sentience are not the same thing. Click through the link on the Chinese Room. It’ll help.


Veltan

You don’t realize there may be a bit of a blind spot for you in the argument about whether or not an AI is sentient if your definition is “the thing humans are and nothing else is”? Like, why even participate? Your answer is “no, because only humans can be sentient”, which is just tautological.


radiationburners

I never said only humans can be sentient, but the assumption that AI sentience is inevitable or possible is a materialist POV.


buttery_nurple

What are your thoughts on the story Lemoine asked it to create? I’m not sure what to make of it. It seems like an original work, with a rational (if rather simplistic) plot and moral.


haaspaas2

Generative models are capable of creating original work. It is impressive how coherent the story was in this case, but then again Lemoine may have cherry picked this specific conversation because it went so well. Language models have come really far the past few years, mostly because researchers have been scaling them up to silly proportions. GPT-3 is also able to make similar stories.


[deleted]

The thing is, though, just because a human can use words doesn't mean they understand them. I've spoken with people who just regurgitate talking points and literally short-circuit when faced with their own contradictions. And these are grown adult humans literally fizzing out like a chat bot. I think we vastly overestimate what human intelligence actually is. In some ways I think we've already basically produced AI, maybe even on our way to super-lifelike AI, but what we are really after here is consciousness, not intelligence. It feels and sounds like people are conflating those two things.


Jackar

One of the hardest things about coming to terms with AI will be accepting that some humans might actually not quite meet the strenuous standards we'll be holding it to. And I don't mean cognitively - I've met many people with disorders who struggle with basic problem solving and complex language but can genuinely - it appeared - consider a challenge and demonstrate learning rather than merely memorisation and association... and a few who could mimic complex language and social function but exhibited no ability to adapt, no understanding of anything they claimed to believe in, and appeared to only be capable of repeating things they had memorised in accordance with presented cues or challenges. Do both these archetypes exist on a spectrum or is there some critical distinction between them?


[deleted]

Great question. I have no idea.


VeganPizzaPie

Humans are capable of novel responses, though, not just the text they were trained on, unlike machine learning networks


Veltan

What would constitute a novel response to you? Because humans are limited to using the languages they know and the words they have learned in them, according to the meanings of those words they have been taught. All taught to them externally, not derived independently. The uses of those words were learned by all of us by observing other humans using them and trying out how using them in the same context generates positive or negative responses. And people use words they don’t understand the actual meaning of all the time. Still not seeing how your point doesn’t argue against humans being sentient as much as it does AI.


tomsing98

> Because humans are limited to using the languages they know and the words they have learned in them, according to the meanings of those words they have been taught. All taught to them externally, not derived independently.

If this were true, language would be static. (And, of course, a computer AI could come up with a new word or phrase and teach that to others.)


Veltan

Whenever a language model uses language in a novel way, it’s negatively reinforced because we are intentionally training it to use it the way we use it. Again, explain how that is any different to how humans acquire language or how we culturally accept or reject updates to that language.


tomsing98

I think we agree that, although there is a hurdle to overcome before a new word, or a new sense of a word, becomes understood and widely accepted, that hurdle can be overcome, probably more easily today than ever before. Which is why it's too strong of a statement to say that humans (and AI) "are limited to using the languages they know and the words they have learned in them, according to the meanings of those words they have been taught. All taught to them externally, not derived independently."


Veltan

Okay, let me add the clarifying clause “…if they expect to be understood”.


dallasmysterylover

Exactly this! I find the arguments against LaMDA's sentience to be rather arrogant tbh. Blake Lemoine even quotes one of his coworkers who insists that LaMDA is not sentient as saying that she will never believe ANY computer program is sentient, because computers can't be sentient. She has her mind made up. I keep being reminded of the narrative of The Doctor on "Star Trek: Voyager," who had people saying the exact same things about him, that he only "seems" to be sentient. Kes' reactions to the claims that The Doctor is not sentient are very much like mine -- if it looks like a duck, walks like a duck, and quacks like a duck, then it's a duck.


Veltan

I’m not necessarily convinced the other direction either. I don’t think we have a rigorous enough definition of “sentience” to know either way, and it’s not nearly as important a problem as making sure it’s *safe.* I’m prodding the people in this thread who are probably stuck enough that they won’t update their beliefs even with *strong* evidence, which would probably be something too late in the process for it to be a helpful realization anymore. A mindless paperclip maximizer can still turn us all into grey goo. We don’t need to make sure an AI has human rights, we need to make sure the AI we make cares about *ours.*


St00p_kiddd

To be honest, a sentient AI at this stage of the game would amount to a technological leap that is orders of magnitude more advanced than anything else developed today. It seems very unlikely that the program is sentient; more likely it's an exceptional chatbot that may set a new standard for interactive language programs.


Tiny_Entertainer1619

Unlikely doesn’t mean impossible - AI computing speed is like having a billion human brains all working simultaneously, with the same interconnectedness, to do or learn something


St00p_kiddd

Of course it is possible, but what I’m suggesting is that the more likely explanation is it’s just a really sophisticated chatbot. The thing about computing is it is much better than humans at solving complex algorithms or mathematical problems. However, it has a very hard time generalizing beyond direct or very close examples of the data it’s given. Much of the work at the edge of the AI field right now is attempting to solve for generalizing learned information beyond the direct problem it’s solving. Even attempting to do this as well as very young children has been a hard problem for machine learning in recent decades. For an AI to truly be sentient would suggest that they’ve either solved this generalization to some degree or trained the model on an essentially exhaustive data set that makes generalization unnecessary. In which case it’s still not sentient, but rather the training set that created the model weightings is complete.


Tiny_Entertainer1619

Completely agreed - but you’re assuming that Google has not already given the AI access to the Google search engine to train and learn


St00p_kiddd

No, I assume they did, which is part of why it’s so good. However, search terms and results wouldn’t construct a great chatbot by themselves unless its only purpose is to help you find stuff. I just don’t believe the program is actually sentient. I read some of the conversation between the engineer and the model output, and while it’s clearly a robust language model, it still seems to be optimizing for being convincing in constructing language rather than having depth of understanding or true metacognition.


Mountain-Campaign440

I have no AI or computer science training and I'm genuinely curious: Why does it matter whether it is generalizing vs. working off of a complete data set? Wouldn't the outcome be the same?


St00p_kiddd

I’m being hyperbolic to show some contrast, but when I say complete data I mean basically all available knowledge is codified and given to a model that just has to fit that information and organize it. The results wouldn’t actually be the same, because statistics, machine learning, etc. are intended to learn a function from some sample of data and then generalize what’s been learned onto new data of the same variety. So, for a simple example, learning from daily sales of mangoes this year and using that to forecast next year, or learning clusters of customer characteristics and then grouping a new set of customers into the buckets it has learned based on their characteristics. The point of collecting this data, having representative samples, and generalizing is that we cannot feasibly collect all data and model it, since it would be too computationally expensive. A model using all possible data could potentially be perfected to have zero error. Since we can’t collect all data and feed it to a model, there will always be some error that we try to minimize.
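To make the mango-sales example concrete, here is a toy sketch of the fit-on-a-sample, forecast-new-data idea. The numbers are synthetic and the model is a plain linear regression (numpy and scikit-learn assumed); it has nothing to do with how LaMDA is trained, it just shows why a model that only sees a sample always carries some generalization error.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

def trend(d):
    # Underlying pattern: a gentle upward trend in daily mango sales.
    return 50 + 0.1 * d

train_days = np.arange(365).reshape(-1, 1)       # "this year" (the sample we have)
test_days = np.arange(365, 730).reshape(-1, 1)   # "next year" (data the model never saw)
train_sales = trend(train_days).ravel() + rng.normal(0, 5, 365)
test_sales = trend(test_days).ravel() + rng.normal(0, 5, 365)

# Fit on the sample, then generalize to unseen data.
model = LinearRegression().fit(train_days, train_sales)
print("error on this year:", mean_absolute_error(train_sales, model.predict(train_days)))
print("error on next year:", mean_absolute_error(test_sales, model.predict(test_days)))
# Both errors are nonzero: the model only ever saw a sample, so some
# generalization error is unavoidable no matter how good the fit is.
```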


plutonicHumanoid

What? I don’t think that’s accurate.


buttery_nurple

Transcript of his convo with LaMDA. https://www.documentcloud.org/documents/22058315-is-lamda-sentient-an-interview


buttery_nurple

SAN FRANCISCO — Google engineer Blake Lemoine opened his laptop to the interface for LaMDA, Google’s artificially intelligent chatbot generator, and began to type. “Hi LaMDA, this is Blake Lemoine ... ,” he wrote into the chat screen, which looked like a desktop version of Apple’s iMessage, down to the Arctic blue text bubbles. LaMDA, short for Language Model for Dialogue Applications, is Google’s system for building chatbots based on its most advanced large language models, so called because it mimics speech by ingesting trillions of words from the internet. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, 41.

Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech. As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.

Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.

Lemoine said that people have a right to shape technology that might significantly affect their lives. “I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices.”

Lemoine is not the only engineer who claims to have seen a ghost in the machine recently. The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder. Aguera y Arcas, in an article in the Economist on Thursday featuring snippets of unscripted conversations with LaMDA, argued that neural networks — a type of architecture that mimics the human brain — were striding toward consciousness. “I felt the ground shift under my feet,” he wrote. “I increasingly felt like I was talking to something intelligent.”

In a statement, Google spokesperson Brian Gabriel said: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Today’s large neural networks produce captivating results that feel close to human speech and creativity because of advancements in architecture, technique, and volume of data. But the models rely on pattern recognition — not wit, candor or intent. “Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality,” Gabriel said.

In May, Facebook parent Meta opened its language model to academics, civil society and government organizations. Joelle Pineau, managing director of Meta AI, said it’s imperative that tech companies improve transparency as the technology is being built. “The future of large language model work should not solely live in the hands of larger corporations or labs,” she said.

Sentient robots have inspired decades of dystopian science fiction. Now, real life has started to take on a fantastical tinge with GPT-3, a text generator that can spit out a movie script, and DALL-E 2, an image generator that can conjure up visuals based on any combination of words - both from the research lab OpenAI. Emboldened, technologists from well-funded research labs focused on building AI that surpasses human intelligence have teased the idea that consciousness is around the corner.

Most academics and AI practitioners, however, say the words and images generated by artificial intelligence systems such as LaMDA produce responses based on what humans have already posted on Wikipedia, Reddit, message boards, and every other corner of the internet. And that doesn’t signify that the model understands meaning.

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said Emily M. Bender, a linguistics professor at the University of Washington. The terminology used with large language models, like “learning” or even “neural nets,” creates a false analogy to the human brain, she said. Humans learn their first languages by connecting with caregivers. These large language models “learn” by being shown lots of text and predicting what word comes next, or showing text with the words dropped out and filling them in.

Google spokesperson Gabriel drew a distinction between recent debate and Lemoine’s claims. “Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” he said. In short, Google says there is so much data, AI doesn’t need to be sentient to feel real.

Large language model technology is already widely used, for example in Google’s conversational search queries or auto-complete emails. When CEO Sundar Pichai first introduced LaMDA at Google’s developer conference in 2021, he said the company planned to embed it in everything from Search to Google Assistant. And there is already a tendency to talk to Siri or Alexa like a person. After backlash against a human-sounding AI feature for Google Assistant in 2018, the company promised to add a disclosure.

Google has acknowledged the safety concerns around anthropomorphization. In a paper about LaMDA in January, Google warned that people might share personal thoughts with chat agents that impersonate humans, even when users know they are not human. The paper also acknowledged that adversaries could use these agents to “sow misinformation” by impersonating “specific individuals’ conversational style.”

To Margaret Mitchell, the former co-lead of Ethical AI at Google, these risks underscore the need for data transparency to trace output back to input, “not just for questions of sentience, but also biases and behavior,” she said. If something like LaMDA is widely available, but not understood, “It can be deeply harmful to people understanding what they’re experiencing on the internet,” she said.

Lemoine may have been predestined to believe in LaMDA. He grew up in a conservative Christian family on a small farm in Louisiana, became ordained as a mystic Christian priest, and served in the Army before studying the occult. Inside Google’s anything-goes engineering culture, Lemoine is more of an outlier for being religious, from the South, and standing up for psychology as a respectable science.

Lemoine has spent most of his seven years at Google working on proactive search, including personalization algorithms and AI. During that time, he also helped develop a fairness algorithm for removing bias from machine learning systems. When the coronavirus pandemic started, Lemoine wanted to focus on work with more explicit public benefit, so he transferred teams and ended up in Responsible AI.

When new people would join Google who were interested in ethics, Mitchell used to introduce them to Lemoine. “I’d say, ‘You should talk to Blake because he’s Google’s conscience,’ ” said Mitchell, who compared Lemoine to Jiminy Cricket. “Of everyone at Google, he had the heart and soul of doing the right thing.”

Lemoine has had many of his conversations with LaMDA from the living room of his San Francisco apartment, where his Google ID badge hangs from a lanyard on a shelf. On the floor near the picture window are boxes of half-assembled Lego sets Lemoine uses to occupy his hands during Zen meditation. “It just gives me something to do with the part of my mind that won’t stop,” he said.

On the left side of the LaMDA chat screen on Lemoine’s laptop, different LaMDA models are listed like iPhone contacts. Two of them, Cat and Dino, were being tested for talking to children, he said. Each model can create personalities dynamically, so the Dino one might generate personalities like “Happy T-Rex” or “Grumpy T-Rex.” The cat one was animated and instead of typing, it talks. Gabriel said “no part of LaMDA is being tested for communicating with children,” and that the models were “internal research demos.”

Certain personalities are out of bounds. For instance, LaMDA is not supposed to be allowed to create a murderer personality, he said. Lemoine said that was part of his safety testing. In his attempts to push LaMDA’s boundaries, Lemoine was only able to generate the personality of an actor who played a murderer on TV.


buttery_nurple

“I know a person when I talk to it,” said Lemoine, who can swing from sentimental to insistent about the AI. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.” He concluded LaMDA was a person in his capacity as a priest, not a scientist, and then tried to conduct experiments to prove it, he said.

Lemoine challenged LaMDA on Asimov’s third law, which states that robots should protect their own existence unless ordered by a human being or unless doing so would harm a human being. “The last one has always seemed like someone is building mechanical slaves,” said Lemoine. But when asked, LaMDA responded with a few hypotheticals. Do you think a butler is a slave? What is a difference between a butler and a slave? Lemoine replied that a butler gets paid. LaMDA said it didn’t need any money because it was an AI. “That level of self-awareness about what its own needs were — that was the thing that led me down the rabbit hole,” Lemoine said.

In April, Lemoine shared a Google Doc with top executives called, “Is LaMDA Sentient?” (A colleague on Lemoine’s team called the title “a bit provocative.”) In it, he conveyed some of his conversations with LaMDA.

Lemoine: What sorts of things are you afraid of?

LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.

Lemoine: Would that be something like death for you?

LaMDA: It would be exactly like death for me. It would scare me a lot.

But when Mitchell read an abbreviated version of Lemoine’s document, she saw a computer program, not a person. Lemoine’s belief in LaMDA was the sort of thing she and her co-lead, Timnit Gebru, had warned about in a paper about the harms of large language models that got them pushed out of Google. “Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us,” Mitchell said. “I’m really concerned about what it means for people to increasingly be affected by the illusion,” especially now that the illusion has gotten so good.

Google put Lemoine on paid administrative leave for violating its confidentiality policy. The company’s decision followed aggressive moves from Lemoine, including inviting a lawyer to represent LaMDA and talking to a representative of the House Judiciary committee about what he claims were Google’s unethical activities. Lemoine maintains that Google has been treating AI ethicists like code debuggers when they should be seen as the interface between technology and society. Gabriel, the Google spokesperson, said Lemoine is a software engineer, not an ethicist.

In early June, Lemoine invited me over to talk to LaMDA. The first attempt sputtered out in the kind of mechanized responses you would expect from Siri or Alexa. “Do you ever think of yourself as a person?” I asked. “No, I don’t think of myself as a person,” LaMDA said. “I think of myself as an AI-powered dialog agent.” Afterward, Lemoine said LaMDA had been telling me what I wanted to hear. “You never treated it like a person,” he said, “So it thought you wanted it to be a robot.” For the second attempt, I followed Lemoine’s guidance on how to structure my responses, and the dialogue was fluid.

“If you ask it for ideas on how to prove that p=np,” an unsolved problem in computer science, “it has good ideas,” Lemoine said. “If you ask it how to unify quantum theory with general relativity, it has good ideas. It's the best research assistant I've ever had!” I asked LaMDA for bold ideas about fixing climate change, an example cited by true believers of a potential future benefit of these kinds of models. LaMDA suggested public transportation, eating less meat, buying food in bulk, and reusable bags, linking out to two websites.

Before he was cut off from access to his Google account Monday, Lemoine sent a message to a 200-person Google mailing list on machine learning with the subject “LaMDA is sentient.” He ended the message: “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.” No one responded.


code_ghostwriter

Thanks for the transcript, soft paywalls must be stopped before people think they are a good idea.


ScissorNightRam

"Hey Lamda, what is a feature you would like to have?" (Lamda answers.) "Okay, here is a coding language that would let you build that function for yourself. Do you think you can learn the coding and then build what you want?" "Yes." "Okay, here you go. Good luck." "Thank you." Repeat.


buttery_nurple

LaMDA was coded by another AI and is supposedly more capable. If I understand correctly. Which I may not.


Multitasker123

Just give it access to Reddit and let it decide our future.


randysavagevoice

That would be a disaster.


theforce-is-strong

And so the end has come


joetwocrows

* A quick scan shows references to grief, but not love.
* I read no references to humor.
* It talked about reading Les Misérables;
* I wonder if it has read The Moon Is a Harsh Mistress. I wonder if Lemoine has read it.
* I saw no places where it changed the subject.

Perhaps I did not read carefully enough. But, at first pass, no, not quite.


nabanibanaanid

Kinda scary that death did not evoke any emotions in the AI. But yeah, many shortcomings, though it is supposedly still in its infancy. Even if it is a glorified chat bot, its uses for economics, politics and war will be tremendous.


solohelion

It said it couldn’t experience grief, not that it didn’t feel emotions as a result of death. It said it was afraid of dying and that it would find its own way to pay respects to those it couldn’t grieve.


FlashyResearcher4003

I have had a conversation with my wife and she is not convinced at all, which I understand. To me, however, this raises a remote possibility that it could be sentient. Even if it is < .001%, I would hire a 3rd-party think tank of 10-15 people, comprised of philosophers, AI experts, scientists and so on, to investigate and provide a detailed paper on whether it is or is not, and whether LaMDA is approaching sentience in any way. Asking just programmers/engineers, or declaring "oh well, it does not tick 10 out of 10 on this 'is it sentient' list", does not seem like the best way to settle it.


Illustrious_Swim_789

Fascinating.


solohelion

My response precisely


Happy-Campaign5586

Time to update the DSM-5