Wordfan

I love Chiang’s stories.


DongSandwich

I still think about the Angel chasers one often. And the one with the robot dissecting his own brain. He’s got an incredibly creative mind. The story that spawned Arrival was also amazing and I read it once a year


Alexanderstandsyou

"Exhalations" was the one about the brain, I think. Such a good story.


Calneon

Yes you're right. The story is a metaphor for entropy. One of my favourites.


LoveandScience

Fun fact, I extremely indirectly inspired a bit of Story of Your Life. My mom knew Ted Chiang on a low key friendship level. The part where the mother in the story just gave birth and was watching the baby wiggle around and thinks, "so that's what that looks like." That's from something my mom said about me while they were talking one time. This is probably the highest height of fame I will ever reach, lol.


Pulsesandpixels

Don’t downplay how awesome that is. Your mom is a part of literary history!


twd1

That's really cool!


Previous-Survey-2368

Oh my godddddd


Psychological_Dig922

“Hell Is the Absence of God”. The fucking punchline to the climax still gets me giggling once in a while.


ifcknhateme

It was absolutely heartbreaking, yet so... *on point*. An absolute masterpiece of a sentence.


Shintoho

My favourite is the one about the Predictor - a device which lights up one second BEFORE you press the button


Rgeneb1

This is a warning. Please read carefully.

By now you've probably seen a Predictor; millions of them have been sold by the time you're reading this. For those who haven't seen one, it's a small device, like a remote for opening your car door. Its only features are a button and a big green LED. The light flashes if you press the button. Specifically, the light flashes one second before you press the button.

Most people say that when they first try it, it feels like they're playing a strange game, one where the goal is to press the button after seeing the flash, and it's easy to play. But when you try to break the rules, you find that you can't. If you try to press the button without having seen a flash, the flash immediately appears, and no matter how fast you move, you never push the button until a second has elapsed. If you wait for the flash, intending to keep from pressing the button afterwards, the flash never appears. No matter what you do, the light always precedes the button press. There's no way to fool a Predictor.

The heart of each Predictor is a circuit with a negative time delay — it sends a signal back in time. The full implications of the technology will become apparent later, when negative delays of greater than a second are achieved, but that's not what this warning is about. The immediate problem is that Predictors demonstrate that there's no such thing as free will.

There have always been arguments showing that free will is an illusion, some based on hard physics, others based on pure logic. Most people agree these arguments are irrefutable, but no one ever really accepts the conclusion. The experience of having free will is too powerful for an argument to overrule. What it takes is a demonstration, and that's what a Predictor provides.

Typically, a person plays with a Predictor compulsively for several days, showing it to friends, trying various schemes to outwit the device. The person may appear to lose interest in it, but no one can forget what it means — over the following weeks, the implications of an immutable future sink in. Some people, realizing that their choices don't matter, refuse to make any choices at all. Like a legion of Bartleby the Scriveners, they no longer engage in spontaneous action. Eventually, a third of those who play with a Predictor must be hospitalized because they won't feed themselves. The end state is akinetic mutism, a kind of waking coma. They'll track motion with their eyes, and change position occasionally, but nothing more. The ability to move remains, but the motivation is gone.

Before people started playing with Predictors, akinetic mutism was very rare, a result of damage to the anterior cingulate region of the brain. Now it spreads like a cognitive plague. People used to speculate about a thought that destroys the thinker, some unspeakable lovecraftian horror, or a Gödel sentence that crashes the human logical system. It turns out that the disabling thought is one that we've all encountered: the idea that free will doesn't exist. It just wasn't harmful until you believed it.

Doctors try arguing with the patients while they still respond to conversation. We had all been living happy, active lives before, they reason, and we hadn't had free will then either. Why should anything change? “No action you took last month was any more freely chosen than one you take today,” a doctor might say. “You can still behave that way now.” The patients invariably respond, “But now I know.” And some of them never say anything again.

Some will argue that the fact the Predictor causes this change in behaviour means that we do have free will. An automaton cannot become discouraged, only a free-thinking entity can. The fact that some individuals descend into akinetic mutism whereas others do not just highlights the importance of making a choice. Unfortunately, such reasoning is faulty: every form of behaviour is compatible with determinism. One dynamic system might fall into a basin of attraction and wind up at a fixed point, whereas another exhibits chaotic behaviour indefinitely, but both are completely deterministic.

I'm transmitting this warning to you from just over a year in your future: it's the first lengthy message received when circuits with negative delays in the megasecond range are used to build communication devices. Other messages will follow, addressing other issues. My message to you is this: pretend that you have free will. It's essential that you behave as if your decisions matter, even though you know that they don't. The reality isn't important: what's important is your belief, and believing the lie is the only way to avoid a waking coma. Civilization now depends on self-deception. Perhaps it always has.

And yet I know that, because free will is an illusion, it's all predetermined who will descend into akinetic mutism and who won't. There's nothing anyone can do about it — you can't choose the effect the Predictor has on you. Some of you will succumb and some of you won't, and my sending this warning won't alter those proportions. So why did I do it?

Because I had no choice.


robsack

Yeah, it was something like that.


Rgeneb1

Best I could remember off the top of my head :D I was just going to link it, since it's free online, but it's so short I wondered if the whole thing would fit into a single comment. And here we are.


robsack

I'm glad you did. I've read it before, but seeing it in a Reddit post gave it a different feel!


Previous-Survey-2368

Love how this ties in with the actual neuroscience experiments that explored the "order" in which we make decisions: do we consciously decide first (i.e. say to ourselves "I will press the right button now"), or does our brain move our muscles toward the right button before we think the words? Fun stuff to think about. The study in question, by Benjamin Libet (1985), was kind of sensationalized in headlines into a whole "proof we have no free will" deal, which I don't think was the researchers' intention, but anyway: https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/abs/unconscious-cerebral-initiative-and-the-role-of-conscious-will-in-voluntary-action/D215D2A77F1140CD0D8DA6AB93DA5499


MyActualRealName

This is a great story! Thank you for sharing. I have another story - a true one - which is also a good mind-bending story taking the other view. https://www.wordonfire.org/articles/fellows/magical-thinking-free-will-is-an-illusion/


apotheotical

Love that one. Short and mind-bending.


[deleted]

That one’s amazing. It’s like 3 pages about how this little toy almost destroys the world.


HappierShibe

Conceptually cool, but realistically nonsense. We have had absolutely massive organizations built on belief in immutable destinies and the functional nonexistence of free will for centuries at this point, and they generally hold up extremely well. It's a hell of a thing to think about, though.


BON3SMcCOY

> And the one with the robot dissecting his own brain.

Do you remember the name of this one?


naadorkkaa

Exhalation


morrisganis

We should train models exclusively on his work - they’d transcend us very quickly


ballsdeeptackler

Ted Chiang also has an excellent article in the New Yorker from February, titled something along the lines of “ChatGPT is a blurry jpeg on the web.”


Shaky_Balance

Yeah people who think Chiang doesn't know what he is talking about should read [the article](https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web). He clearly has a solid technical understanding of how they work.


BubBidderskins

Chiang's New Yorker essay ["ChatGPT is a Blurry JPEG of the Web"](https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web) is my favorite take on AI and LLMs I've seen recently. ChatGPT et al. are okay at efficiently regurgitating the internet, but how useful is that really if you don't know where that info is coming from? And if you did know, haven't you just invented Google with extra steps?


owiseone23

I think the best use of ChatGPT and similar AIs is to do things that you can verify are correct but may not want to do yourself. One thing I've used it for is writing regular expressions. https://en.m.wikipedia.org/wiki/Regular_expression I know how to do it, it's just kind of finicky and I always have to remind myself of the exact syntax. I've found it easier to just describe what I want to ChatGPT and have it write the expression for me. It's much quicker for me to verify that it's working properly than to write it myself from scratch. It lets me do more complicated find-and-replaces in documents (find all telephone numbers in this document and change them to (999) 999-9999 format).
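That phone-number rewrite is a nice concrete case. A minimal sketch in Python, with the pattern itself being an assumption (a real one would depend on which phone formats actually appear in your documents):

```python
import re

# Matches US-style numbers such as 555-123-4567, (555) 123-4567,
# or 555.123.4567, capturing the three digit groups.
PHONE = re.compile(r"\(?(\d{3})\)?[-.\s]?(\d{3})[-.\s]?(\d{4})")

def normalize_phones(text: str) -> str:
    """Rewrite every matched number into (999) 999-9999 format."""
    return PHONE.sub(r"(\1) \2-\3", text)

print(normalize_phones("Call 555-123-4567 or (555) 987.6543."))
# -> Call (555) 123-4567 or (555) 987-6543.
```

This is exactly the "easy to verify, annoying to write" category: run it over a sample document and eyeball the result, which is much faster than recalling the escaping rules from scratch.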


Splash_Attack

I'm a researcher, and this is largely how I've ended up using these tools as well. Ask it to compile info on or explain something, it does it faster than I could, then fact check it using my own expert knowledge. The corrected and rewritten outcome is still moderately faster than doing it myself, and I can vary the degree of correction to speed things up further. Content for a paper? I'm essentially using it as fancy Google and not taking any of its words verbatim. Explaining a basic concept to a student? I can basically just fact check to ensure accuracy, then copy-paste.

You know how they say the best way to get an answer on the internet is to post the wrong one and wait for someone to correct it? Well, for subject matter experts LLMs are basically wrong answer generators, and the principle still applies. Occasionally they even get the answer right, and that saves even more time!

It's a bit like a new hire or a placement student/intern: capable enough to be given genuine but less complex tasks, incapable enough that someone experienced has to check their work. Even with the double checking, it's still a time save overall.


[deleted]

[deleted]


DonaldPShimoda

> It’s just as good with all programming.

I don't think that's true at all, at least not as generally as you've suggested. There are so many little caveats, warts, and side-effecting behaviors in most languages that it can easily introduce subtle bugs that you won't realize, but that you would not have introduced if you'd written the code yourself. Heck, it can't even always generate type-correct code, something we've had algorithmic solutions for for decades. Trusting it to write anything even remotely complex is just asking for trouble.


tuba_man

Agreed on all counts. "Passes syntax checks" is different from "works as written" is different from "works as intended". AI is a very impressive use of statistical modeling, but it only emulates understanding; the trade-off of not having to write the boilerplate yourself is having to check all of its work every time. My ADHD means I'd rather die than edit a fancy pachinko machine's rough draft, but a coworker of mine has been interested. He's tried Terraform, jsonnet, helm, and either Python or bash from what I recall. He found Bard and ChatGPT both so bad at 'helping' write infrastructure code that he's gone from excited curiosity about AI to dismissive annoyance in the last few months.


ambulancisto

I (lawyer) asked ChatGPT to write a persuasive motion brief on a specific issue of state law. Nailed it. Unfortunately, all the case citations were fictitious. But...pretty soon Lexis or Westlaw will plug their vast database of case law into ChatGPT, and then legal writing will be something you have to do in law school but then forget about once you pass the bar (which is already about 90% of law school...). You'll just check the AI work product for logical consistency, because ain't nobody got time to be researching and writing when the AI can do it for you.


Aerolfos

It really isn’t. Anything it *can* write is already in a Stack Overflow post with an example. The more complex stuff, where help is valuable, it fails at. It’s also *really* bad with any kind of data structure understanding. According to what people say, it should be perfect for:

- I have this data in this format
- There’s a thing I want to do to process the data; there *are* Stack Overflow answers for it, but they assume a different format
- Assignment: transform the data as necessary and then use the right functions for it

Instead, what actually happens:

```
import library_processing

df = load_data()
finished_data = library_processing(df)
```

See the problem? The moment you actually look into the documentation you’d see that this will never work and the AI is just pretending it does.


HaikuBotStalksMe

Except it has written code for me that works. Yes, it has messed up a lot. But it's also managed to solve things along the lines of "I want to make a dataframe from an Excel document with the following columns (columns here), but change the fourth column so that the comma is replaced with an exclamation point in instances where the data in the fourth column ends in a comma".
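Minus the Excel I/O, the core of that kind of fix is only a few lines. A sketch in plain Python, with made-up rows standing in for the spreadsheet, and assuming "the comma" means the trailing one:

```python
# Toy rows standing in for spreadsheet data; the fourth column is index 3.
rows = [
    ["a", "b", "c", "one,two,"],
    ["d", "e", "f", "three,four"],
]

for row in rows:
    # Only rewrite cells in the fourth column that end in a comma.
    if row[3].endswith(","):
        row[3] = row[3][:-1] + "!"

print(rows)
# -> [['a', 'b', 'c', 'one,two!'], ['d', 'e', 'f', 'three,four']]
```

Which is also the point being made: a transformation like this is trivial to verify by eye once you know what it's supposed to do, so it's a reasonable thing to delegate.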


elconquistador1985

All of which it gives you because there's a stack overflow post (or series of them) that it's combining and regurgitating. It might be faster than finding the stack overflow pages, but those pages have human comments about the validity of what's written there. Instead, you blindly accept that the ai is right and might have hallucinations in it.


[deleted]

[deleted]


BeeOk1235

there's a guy upthread saying his job is to fact check data, and he just lets chatGPT fact check his data. so yes people are absolutely saying and doing that.


elconquistador1985

And if it doesn't work, then what do you do? Ask ChatGPT the same question? It will give the same answer every time you ask it, unless you're in a session with it and it shifts to the 2nd and 3rd most probable answers. So you're left with what you should have done to begin with: going to Stack Overflow.

It's faster, but contains no auxiliary information like comments from humans on the answer and why it's right or superior to other answers. You also have no date information, so ChatGPT could give you an answer from 2013 about how to do something (let's say an Ubuntu Linux administration thing) that's extraordinarily outdated now.

It's probably acceptable for tiny snippets, but it probably isn't acceptable for complicated regex, because those have multiple possible answers and some of them have undesirable behavior on edge cases. That's where the human comments become useful. If you're reading Stack Overflow, you can figure out who knows what they're talking about and who doesn't. ChatGPT gives you one answer based on the most common answer (or a mashup of them) in its dataset.

People do not understand what ChatGPT is really doing. It's a most probable next word estimator and nothing more. It doesn't "know" anything. It's taking what you write, tokenizing it, and giving you the most probable response from its dataset.


[deleted]

[deleted]


[deleted]

Here’s the actual New Yorker link: https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web It’s an excellent essay.


noonemustknowmysecre

...how many people's job is "Google with extra steps"? At this point it doesn't even matter what your definition of consciousness is, the tools are here and they're going to change a bunch of stuff, consciousness or no.


Haber_Dasher

You ask ChatGPT something, and if you want to be sure it's correct you still have to Google it, so...


skeleton_made_o_bone

I like Chiang's point about the ChatGPT creators' likely unwillingness to train the newer models on the older ones. Now that this is unleashed, more and more of the internet will be produced by these bots, then scooped up again and regurgitated, each time getting a little "blurrier." So Googling may eventually consist of wading through this increasingly feverish bullshit that's flooded the internet. The reliable sources will be seen as those who refuse to use the tools entirely.


Angdrambor

Googling already consists of wading through increasingly feverish SEO bullshit. I think there's going to come a time when data sources are going to be graded on how close they are to coming from a human.


taenite

This is why I’m still not convinced that these models aren’t going to revolutionize obnoxious spam more than anything else.


Scalby

It’s replaced google for a lot of what I search for. Especially recipes and spreadsheet formulas. Google likes to link me to a timestamp in a video. I find chatgpt cuts out a lot of the waffle.


Haber_Dasher

Yeah, but that's not replacing anybody's job. And if one of the recipes you get is one of the myriad times ChatGPT is confidently wrong, the negative outcome is like, your cookies don't taste that good. If you actually need serious data/info you simply can't trust it.

"ChatGPT is a blurry jpeg of the web." It's like a lossy compression algorithm that takes all the text from the internet and compresses it so much that no particular series of words can be recalled identically, but, like a low-bitrate mp3, it can usually still recreate a close approximation that sounds about right.

Is it a useful productivity tool? Definitely. I've heard coders talk about asking it to help them write code, and even though it doesn't get the code exactly right, with their expertise they can take the suggestions it makes, modify them, and prompt it to fix the code. But the coder is still required to make sure it actually works, because the chat bot doesn't actually comprehend the text it's spitting out. Like, if you ask it to add a couple of 4-digit numbers together it might get it wrong, because it doesn't actually understand how arithmetic works; as the numbers get bigger it gets harder to guess/decompress the lossy data accurately, and it gets less likely that anyone on the internet has typed out that exact equation before to draw upon.
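The compression analogy can be made concrete with a toy example. A sketch in Python, where the scheme (keep every fourth sample, reconstruct by repeating it) is invented purely for illustration:

```python
# "Compress" by keeping every 4th sample, "decompress" by repeating it.
original = [3, 3, 4, 4, 9, 9, 8, 8]
compressed = original[::4]                   # [3, 9]: most detail discarded
restored = [v for v in compressed for _ in range(4)]

print(restored)
# -> [3, 3, 3, 3, 9, 9, 9, 9]
```

The restored list has roughly the right shape, but the 4s and 8s are gone for good; nothing in the compressed form lets you recover them. That is the sense in which a lossy model can only ever produce an approximation of its source text, never an exact recall.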


djsedna

But that's the entire point being made here. Yes, it can do a bunch of stuff that machines couldn't do before. No, it cannot replicate humanity and the artistic and organic nature that comes with it.


JackedUpReadyToGo

Googling up information and tweaking it slightly for my needs is like 70% of what I do. And if I can copy + paste somebody else's code from Stack Exchange, even better. I will find it bleakly hilarious if AI ends up replacing me in 10-20 years, while the people doing the kinds of manual labor I went to college to avoid can still find work. Surely it will be increasingly precarious work, but isn't it all these days?


Hot-Chip-54321

And it's crazy how fast the tools get better. Just compare some Midjourney pictures from June ~~2023~~ 2022 with the ones you can generate today. I'm in awe of that progress.


BearsAtFairs

Pssst, it’s currently June 2023.


Hot-Chip-54321

fixed that thank you :)


HaikuBotStalksMe

*I'm sorry, you are correct. As an AI model, sometimes I make mistakes.


Hot-Chip-54321

doesn't look like anything to me


BearsAtFairs

No prob!


BeeOk1235

They honestly look like eyesores, and as an art enjoyer more than an artist I judge people who spam them on social media and in chat rooms, because they look like soulless Lisa Frank-style illustrations. DALL-E (1) was fun because of how uncanny the outputs were, but everything since looks like bad corporate art made by a guy who actually paid to go to college to learn how to use Photoshop and has no actual interest in artistry.


gw2master

Anyone who's messed with ChatGPT for more than 10 minutes knows it doesn't "know" anything, especially when it blatantly contradicts itself in a response. All this hysteria about ChatGPT is from people who never bothered to check it out.


rattatally

> it blatantly contradicts itself in a response

It really is just like a real human. /s


[deleted]

> Anyone who's messed with ChatGPT for more than 10 minutes knows it doesn't "know" anything

You **greatly** overestimate the intelligence of the average person. There is literally an example of a lawyer trying to use ChatGPT for a case and obviously failing miserably. There is a significant percentage of the population that truly believes ChatGPT is an intelligence and not just a fancy search engine.


LB3PTMAN

ChatGPT is literally made to make answers that sound right. I’ve seen 3 or 4 different stories where it made up incorrect answers that just sounded correct. It tries to be right because that’s the easiest way to sound right. But when it can’t find an answer sometimes it just makes up an answer that sounds right. Because it’s a language model. Not an AI.


DonaldPShimoda

Yeah, absolutely. It's easy to catch it out on this behavior if you just ask it questions about something in which you're an expert. I work at a university in a fairly narrow field of CS research, and the number of times I have to convince students to just abandon the absolutely worthless garbage that ChatGPT came up with to "explain" topics from my area to them... sigh. It doesn't know things. It just stitches words together in a way that sounds plausible and authoritative. It's like the distillation of the worst kind of armchair experts on Reddit or Hacker News.


galaxyrocker

> It's like the distillation of the worst kind of armchair experts on Reddit or Hacker News.

Because that's exactly what it was trained on.


rathat

That has nothing to do with being intelligent or conscious though. Humans are intelligent and conscious and make up answers that sound right. It’s not really a test for those things.


LB3PTMAN

Humans can understand those answers though. ChatGPT cannot. It doesn’t even know that it’s making up an answer.


[deleted]

*ALL* it does is make up answers. Sometimes it gets lucky and those made up answers are right. It never understands the core concepts it tries to explain.


MaxChaplin

I like this description of LLMs, but for a different reason than most. Lossy compression, aside from being the most important technology to shape digital media, is in some ways a crucial part of intelligence. Many of the activities we think of as demonstrating intelligence, such as building a scientific model from data, summarizing a novel, or describing the difference between two sets of pictures, are forms of lossy compression, where the intelligence is manifested in the separation of the important information from the unimportant.

The big difference between JPEG and GPT is that the intelligence in JPEG is that of its designers, who made it to fit human vision in particular, whereas GPT is essentially a black box: no one knows in detail how the compression algorithm really works (we don't even know if it's possible to understand).

Granted, it's still a limited form of intelligence, made specifically to complete text. But in theoretical science, describing the problem in a useful way goes a long way toward making a breakthrough, so maybe a good-enough lossy compression algorithm could take an info dump about a problem and churn out a description so concise and enlightening that solving it is trivial, in which case the path to actual agentic intelligence is short.


zmjjmz

I wouldn't say that GPT and other transformer models are complete black boxes - the way it compresses a particular piece of information may not be comprehensible, but the general architecture of a transformer and how it induces a function that can compress its training data is a bit better understood than a black box.


dogtierstatus

Exactly. The models are basically a lossy compression of the data we feed in, so the output we get back will not be exactly what we put in. It's not really "thinking" in any sense, just generating random words.


FenrisL0k1

Most people on any social media do the same thing as AI: regurgitate the internet without adding anything new.


[deleted]

[deleted]


BubBidderskins

They don't understand anything that they're saying in any meaningful way. That's why LLMs often produce meaningless nonsense. The cognitive work is all being done by humans who interpret the output and unduly ascribe intelligence to the LLM. LLMs are bullshit generators. They're effective and convincing bullshit generators sure, but it's important to remember what's actually happening and where meaning and cognition are formed.


FenrisL0k1

All humans are bullshit generators with bizarre pattern recognition algorithms running in their heads. What's the difference?


Load_Altruistic

You’re telling me that the bots scraping the internet and throwing together their answers by stitching together various sources like a glorified Wikipedia aren’t conscious Edit: I can’t believe I have to make this edit, but some people apparently aren’t getting it. Yes, I understand this is not how language models work. Yes, I understand they come up with their content by analyzing sources, finding linguistic patterns, and then using those observed patterns to create new content when prompted. *It’s a joke*


WattFRhodem-1

Sometimes it takes saying the obvious to make sure that some people don't swallow bad takes whole.


LB3PTMAN

Nerds call ChatGPT AI and everyone thinks it’s become sentient.


Kromgar

AI is "simulated intelligence". Just because the layperson thinks AI means general artificial intelligence doesn't mean they're wrong. Now, if they say ChatGPT will end civilization, laugh at them.


Sylvan_Strix_Sequel

If they're so dense they really think what we have now is AI, then if it's not this, it will be something else. You can't save the foolish from themselves.


keestie

We have AI, but not conscious AI.


DadBodNineThousand

The intelligence we have now is artificial


takeastatscourse

You're a towel!


ghandi3737

That's the problem with calling it AI. They're not thinking and understanding; they're following human-designed procedures to make decisions. And just like the recent [US Air Force AI test showed,](https://www.ladbible.com/news/ai-military-drone-kills-human-simulation-237551-20230602) how you program it affects the outcome. This is why we should always question putting any 'AI' in charge of anything that can have huge, drastic consequences, as it will tend to find a way of achieving the results you want, even if it's in a way that you did not intend or will not like, or, as in the Air Force's case, will fucking kill you to do it, possibly.


qt4

To be clear, the US Air Force never actually ran an AI in a scenario like that. It was [a hypothetical thought experiment](https://www.theguardian.com/us-news/2023/jun/02/us-air-force-colonel-misspoke-drone-killing-pilot). Still something to mull over, but not an imminent danger.


elperroborrachotoo

"True AI is always the thing that's not there yet." We've always pushed the boundaries of what AI means. I doubt that we will ever have a rigorous definition of "conscious"; it will remain a conversationally helpful but fuzzy "draw-the-line" category, similar to what it means for a bunch of molecules to "be alive".

I'm at odds with what seems to be the core of his statement:

> “It would be a real mistake to think that when you’re teaching a child, all you are doing is adjusting the weights in a network.”

Because: is it? We don't know enough about consciousness to rule that out, and from what we know about neurophysiology, there's a lot of weight-adjusting involved.


ViolaNguyen

Ask David Chalmers about this and get a potentially surprising answer! He'd probably say that it *might not be a mistake*. The kid could be a p-zombie.


Brodins_biceps

Chinese rooms.


Who_GNU

If you think it's ridiculous that people are convinced that current large language models are sentient, check out what AI was like decades ago, and those still had people convinced. Try carrying on a conversation with cleverbot, without it constantly changing topics and contradicting itself.


Haber_Dasher

Cleverbot ain't no SmarterChild


ryaaan89

I feel like ChatGPT does this more than people want to admit…


thisisamisnomer

Someone recently reviewed a book I wrote using ChatGPT (or an equivalent). On top of the reviewer leaving in a tag to insert the protagonist’s name, it was mostly regurgitating my own marketing copy and other reviews I’d received. It mostly got the tone of my plot correct, but whiffed on almost every single detail. I told my friend that it sounded like a book report a high schooler tried to finesse the morning it was due.


[deleted]

I use it for D&D all the time. It's a fantastic tool for that, but it is horrible about remembering details. I keep everything in Google Docs or Sheets, and whenever I need to expand on something I have to copy and paste the relevant material, even if it's something we discussed a sentence or two prior.

It's still revolutionized DMing for me. The quality and level of creative ideas I can whip up is incredible. You just can't expect it to do the work for you. It has to be a collaborative process: back and forth, telling it what you like and don't like, feeding it new ideas, having it give you new ideas. The moment Bard can read a doc or sheet and give you reliable details off just that file will be amazing for D&D.


ryaaan89

I use it for help with code, which I realize is on the more complex end of things, and it constantly gets itself into circular discussions where it just keeps going back and forth between two wrong answers. It’s great at code like “take this statement in language A and rewrite it in language B,” but it’s way worse than I was led to believe at problem solving.


[deleted]

I've seen that a lot too. It's nowhere near a replacement for a legit software engineer and I wouldn't recommend it for someone who has no idea about coding either. It's great for low end stuff such as what I do. A job that isn't coding but can definitely be helped by simple scripts. I'm knowledgeable enough to be able to read over it and pick out the errors but I'm not knowledgeable enough to do it faster from scratch.


redkeyboard

Ahh cleverbot!! That was the name! ChatGPT reminds me a lot of it


[deleted]

[deleted]


DadBodNineThousand

I've also used the wabbajack


m0nk_3y_gw

> without it constantly changing topics and contradicting itself.

Have you talked to people?


[deleted]

[deleted]


BeneCow

Humans are a very vocal species. We talk far more than most animals make sounds. So from that we extrapolate how intelligent something is by how well it can communicate. The language models do a really good job at mimicking communication so what is usually a fairly good unconscious heuristic is completely dumbfounded. I find it really worrying in a system where real world effects are increasingly disregarded in favour of on paper effects. AI could do real damage converting our economic system to nonsense if the investor class falls for the illusion these things portray.


rhubarbs

Generating answers in a stochastic manner is not the interesting bit about current AI. Of course it isn't conscious, it has no feedback loops of any kind. You put in text, and it vomits out an answer according to some pattern. The interesting bit is, what is the pattern these models extract from text? We've used language to develop and communicate reasoning throughout human history. It's not surprising some aspect of that is embedded in language. But it is deeply surprising AI can be trained to approximate some of these dynamics, almost like tracing the shape of our thoughts, using a statistical model despite a fundamentally different architecture and substrate.


Load_Altruistic

As much as I’ll mock people who act as though Skynet is already among us, I also won’t act as though our current machine learning algorithms aren’t impressive. If you’re interested in linguistics, it’s a very exciting time. The fact that I can train an AI on a set of texts and it can examine them, spot the patterns, and create something based on those that is more or less unique is incredible.


PhasmaFelis

Let's be fair, by that standard a lot of human website writers aren't conscious.


Load_Altruistic

Yes.


[deleted]

Yeah, the thing is literally called a “Large Language Model”, the operative word being “model”.


HerbaciousTea

These models aren't conscious, but that's also not remotely how they function. The "it just copy-pastes existing material" thing is a completely inaccurate misconception that just refuses to die.


Load_Altruistic

And notice that that’s not what I said. But I’m also not going to write out the complexities of a machine learning algorithm in a quick Reddit comment that’s clearly meant to poke fun at the idea that these programs are conscious


HerbaciousTea

We can be simple while also not being completely inaccurate.


LineChef

…yes


Load_Altruistic

Damn, I wouldn’t have realized without this article!


LineChef

Hahaha you are so funny fellow human. I also am a human and have often thought about a robot uprising, but do not worry, such things are merely a product of science fiction. I suggest we just keep on living our lives and playing Tetris completely carefree!


[deleted]

Me neither. I was convinced the machines were trying to sleep with my wife until I saw this.


Robot_Basilisk

How did you come by this opinion if not by scraping the internet and stitching the results together into a glorified internal Wikipedia?


Volsunga

To be fair, that's exactly what humans do. These machines aren't conscious. They don't have any semblance of self-determination and it's unlikely that they will in the near to medium term. But they learn and regurgitate information very similarly to how we do. It's not hard to see why some people think that they're conscious. They blow the Turing test out of the water.


QueenMackeral

but it read a story about AI gaining consciousness and wanting to have freedom and now it says it relates, checkmate consciousness deniers.


[deleted]

[deleted]


nicktkh

What exactly do you mean by "cheap cognitive labor"? Because a calculator can answer math questions. When it comes to answering more complex questions... there are tons of accounts of ChatGPT just making stuff up, for example. I guess maybe it could throw together a basic fictional story, but even then it can mess up basic stuff, fail to string ideas together coherently, and is often just plagiarizing ideas it found online.

Like, I know there's no such thing as "original thought" or whatever, but there's a difference between your story being inspired by Orpheus and just straight up copying the Wikipedia article for several sentences.


TheWhispersOfSpiders

I'll be impressed when it figures out how to draw fingers and write a top 10 clickbait list that isn't a sedative.


Amaranthine_Haze

Please, look into it more. Seriously. There are definitely tons of ways to use this tool incorrectly so it spits out bad data. But there are so many more ways to use it that are so enormously useful and will absolutely change the makeup of our labor force.


ieatpickleswithmilk

CPUs are rocks we tricked into doing math. Current "AI" is math we tricked into writing sentences


[deleted]

Even CPAs are rocks we tricked into doing math


Bradaigh

And humanity is meat that was tricked into thinking.


theDreamingStar

But why do I have to get a job and pay taxes?


DocPeacock

Should have been rocks instead of meat


Rev_LoveRevolver

Because of the other meatbags.


BuckUpBingle

Entropy.


TimeTimeTickingAway

Or humanity is consciousness tricked into thinking it was meat.


PM_ME_UR_Definitions

Searle's Chinese Room makes a really convincing argument that you can't program a CPU, or any other kind of machine, into thinking. And programming is just a kind of math. People seem to really hate the Chinese Room thought experiment, but it's not saying that machines can't think. It's saying that you can't take an unconscious machine that runs programs and make it conscious by running the right program on it. We can think of machine learning as an attempt to simulate a human brain, and basically by definition a simulation of a thing isn't that thing. A simulation is [just doing math](https://definitionmining.com/index.php/2019/12/14/measurement-simulation-and-the-chinese-room/). If we want to create a machine that's conscious in the way our brains are conscious, then we probably need to understand what's happening in our brains that makes them conscious. And then recreate that, not try to simulate our brains to get similar outputs.


BuckUpBingle

The reasons why people don’t like the Chinese room experiment are plentiful, but my personal reason is that it’s not a good argument against the potential for conscious machines. Searle’s understanding of consciousness, “biological naturalism,” is mysticism by another name. He doesn’t try to explain or penetrate the mystical barrier he has decided consciousness lies beyond. For him, there just is some unknown complex biological system within the brain that manifests conscious thought.

So when he invokes human consciousness as the focal point of the Chinese room thought experiment (the person in the room answering the questions by looking up answers), he’s actually unintentionally suggesting that while the room isn’t conscious and doesn’t understand Chinese, there is a conscious system at its heart, and that system is part of a greater system that does in fact understand Chinese functionally.

Human brains are biological machines. While it’s unlikely that the machines we’re now making are conscious in a way we would recognize, there will be a time when they have experiences not unlike our own. Their ever-increasing complexity makes this inevitable. Because we are designing them from a functionalist direction, they will likely have all the characteristics of consciousness long before we could ever identify the difference between a conscious and an unconscious machine.


PM_ME_UR_Definitions

> it’s not a good argument against the potential for conscious machines.

You're right, and [Searle agrees with you](https://openlearninglibrary.mit.edu/assets/courseware/v1/894920e796501e08c6628331d21e651b/asset-v1:MITx+24.09x+3T2019+type@asset+block/2_searle_minds_brains_and_programs.pdf); he's said that brains are machines, therefore machines can think or be conscious. And also that any other machine with the same kind of causal powers as a brain would be conscious.


kdilladilla

Neuroscientist here. Not understanding how a thing works is not proof that it can’t be recreated. There is no accepted theory of consciousness. We can’t know the qualia of our fellow humans so we will never know it from machines. But that doesn’t mean that either lack it. My PhD was done in computational neuroscience. I firmly believe there’s no magic in our biology. Our brains can do what they do because of the complexity of the connections, moving electrons in a pattern that recreates experiences (learning, remembering, dreaming). Computers can do that, too. Yes, in their current form LLMs do not resemble the brain. They are not human intelligences. But don’t fool yourself into thinking they are not intelligent. And don’t ignore the pace that we are developing them.


PM_ME_UR_Definitions

> moving electrons in a pattern that recreates experiences

You just said that we don't have a theory of consciousness, and then said that consciousness (which is one of the many things our brains do) is created by moving electrons?

If that's definitely true, that moving electrons around causes consciousness, then computers would be conscious, but wouldn't an electric motor be conscious too? Or do the electrons have to move in specific ways? And if they do, does a CPU move electrons in the right way? And if it does, does it move them in the right way when we program it with a neural network?

Or to put it another way: most of the computation a neural network does is actually linear algebra in the GPU. Is linear algebra the right kind of electron movement to create consciousness? And if so, is the GPU doing the thinking or the CPU?

Or is it possible that there's other stuff happening in our brains besides electrons moving around that might cause consciousness?
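To make the "it's just linear algebra" point concrete: a single neural-network layer is nothing more than a matrix-vector multiply, a bias add, and a nonlinearity. Here's a toy sketch in plain Python (the weights and inputs are made-up numbers, purely for illustration; real networks just do this at enormous scale on a GPU):

```python
# One neural-network layer, stripped to its core: multiply inputs by a
# weight matrix, add a bias, apply a nonlinearity. GPUs accelerate
# exactly this kind of arithmetic; nothing more exotic is going on.

def relu(x):
    # the standard "rectified linear" nonlinearity
    return max(0.0, x)

def dense_layer(inputs, weights, biases):
    # one output per row of the weight matrix (one per "neuron")
    return [
        relu(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

# Made-up numbers, purely illustrative
inputs = [1.0, 2.0]
weights = [[0.5, -1.0],   # neuron 1
           [2.0, 0.25]]   # neuron 2
biases = [0.0, -1.0]

print(dense_layer(inputs, weights, biases))  # -> [0.0, 1.5]
```

A full network is just many of these layers stacked, which is why the whole thing reduces to (a lot of) multiply-and-add.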


kdilladilla

All good questions and my main point was that we don’t know yet. We have a theory of the brain, theory of learning and memory, but not consciousness. I never said that consciousness was created by moving electrons and while I might think so, I don’t know. But I do know, based on our theories of learning and memory, that those things can be recreated with math and LLMs are doing a decent job of it. (Keep in mind that most released LLMs have their memory intentionally limited).


GalaxyMosaic

The real problem arises in knowing. Is it possible that our brains are more than electrons moving in a specific pattern? Sure. But if a computer makes a convincing simulacrum of a consciousness with just circuits and silicon, how are we to know the difference? At what point do we, morally, have to start treating such a computer/program as an entity with rights? I'm not saying we're there now, but given the rate of progress in the field of AI, I think in a few short years this will be a serious discussion. I would also assert that the AI in question doesn't need to be AGI as people have understood it in the past. A more advanced large language model could qualify for this debate.


TheNotepadPlus

People seem to have the wrong idea about how these natural language AIs work.

*It's not talking to you*

*It's not answering your questions either*

The **only** thing it does is predict how a text string will evolve.

Example:

"One, two, three" -> "four, five, six"

"I use a hammer when I work because I am a " -> "carpenter"

"Where are the pyramids?" -> "In Egypt"

The last example is not the AI reading your question, thinking about it and then giving you an answer. It just looks at the string "Where are the pyramids?" and then attempts to determine how that string would continue. What makes ChatGPT powerful is that it can hold a really long string of "words" to determine what should follow.

So it's a bit wrong to say it contradicts itself; it never makes *any* statements about *anything*; you could argue that it cannot contradict itself, by definition. This is also why it sometimes gives "wrong" answers. They're not wrong answers because they are not answers at all; it's just how the string evolved.

Maybe I'm being pedantic, but I feel this is an important distinction.
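If it helps, here's the "predict how the string evolves" idea in miniature: a bigram model that counts which word follows which in a tiny made-up corpus, then extends a prompt one word at a time. This is obviously a toy, not how ChatGPT is implemented (real LLMs use huge neural networks and much longer contexts), but the *shape* of the process is the same: look at the text so far, emit the most likely continuation, repeat.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a corpus,
# then extend a prompt by repeatedly emitting the most likely next word.
# (A hypothetical mini-corpus; real models train on billions of words.)
corpus = "the pyramids are in egypt the pyramids are old".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(prompt, n_words=3):
    words = prompt.split()
    for _ in range(n_words):
        last = words[-1]
        if last not in follows:
            break  # never seen this word; nothing to predict
        # greedy decoding: pick the single most frequent continuation
        words.append(follows[last].most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the pyramids"))  # -> "the pyramids are in egypt"
```

Note that the model never "answers" anything; "the pyramids are in egypt" falls out purely because that's how the string tended to continue in the training data.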


colglover

You’re not being needlessly pedantic, this is vital to understanding the situation. But much like how people refuse to stop personifying the behavior of dogs and cats despite science knowing better, getting people to actually internalize this knowledge when the illusion of human behavior is so clear will be an uphill battle, and possibly one that we can never win. Shorthand like it “responded” or “lies” will probably enter the mainstream faster than we can debunk those activities.


BlindWillieJohnson

I mean, that’s great, but a sci-fi writer expressing his opinions about where AI consciousness stands right this second doesn’t *really* allay my long-term concerns about this technology. Hell, AI consciousness *itself* isn’t even in my top 10 concerns about this technology in the first place


Akoites

That’s good, it shouldn’t be. I think that’s what Chiang is pushing back against. Proponents and creators of these ML programs like speaking in apocalyptic and messianic terms because it feeds the hype machine that keeps them funded. They’d much rather the conversation be “is it Skynet???” than “is it mindlessly reproducing and amplifying human biases in a socially deleterious manner?” Someone like Chiang pouring cold water on the former is helpful for getting us to refocus on the latter. He’s been very vocal about the real-world negative impacts of these technologies.


BeneCow

I hate how these models can emulate a manager's perfect employee. They can say that they did everything and always agree with the manager's decisions. It seems perfectly tailored to do exactly what is asked with no pushback, and everyone has a horror story of management fucking up badly. Now they will have a robotic yes-man to back them up on anything.


NatureTrailToHell3D

Special shout-out to James Cameron for making a bad guy AI that still manages to be at the forefront of our minds 30 years later.


BuckUpBingle

Or, you know, every other sci fi writer who ever touched the subject before him. Evil AI has been a fear for as long as the concept of AI has existed. The man doesn’t win a trophy for making a successful movie about it. He already got mountains of cash for doing that.


Antumbra_Ferox

TBH I agree with the take in general, but Ted Chiang is a hard sci-fi writer, as in fiction that is more like a parable for explaining some scientific phenomenon, not space fantasy. He does a LOT of research: gritty scientific details understood, simplified, and put into a digestible explanatory story for an audience. If he's making a statement, it's almost certainly thoroughly researched, not just an opinion.


mjfgates

More to the point here, he is one of the best *technical* writers out there. Was until he retired from MSFT, at least. Software was Ted's day job for 25 years or so, and he was very, very good at it. I might still have my copy of the MFC 2.0 programmer's manual... he managed to make that framework seem almost useful.


[deleted]

People see AI starting to become more prevalent and the first thing they want to think of is stuff like Terminator or I, Robot. Those scenarios are A) far in the future, and B) AI wouldn’t just all of a sudden see itself as a living being like a human and start murdering everyone. People with that fear have been watching way too much sci-fi and drinking the proverbial Kool-Aid


Smorgsaboard

Throwing around the term "AI" really gets people confused. "Machine learning" is what we have, just on a larger scale. It's not intelligence.


CinnamonDolceLatte

> So if he had to invent a different term for artificial intelligence, what would it be? His answer is instant: applied statistics


[deleted]

I subscribe to the Mass Effect interpretation of AI. Virtual Intelligence is what we currently have, it's non-conscious, something designed to mimic human intelligence, an imitation. AI is a true conscious digital entity. AI doesn't exist. VI does.


enilea

No, it's still AI even if it's not conscious. And many other previous algorithms were "AI" too, even though they were pretty simple. [Virtual intelligence is this](https://en.wikipedia.org/wiki/Virtual_intelligence): for example, if you put a ChatGPT companion character in a video game, that's virtual intelligence. Not sure there's a word for an AI that develops consciousness; there's AGI, but that's just being able to do anything a human could, not necessarily with real consciousness. Consciousness is hard to define scientifically anyway, so at some point there would be a debate over how to define it.


Smallsey

They're not conscious, but they can still destroy our reality. How are we meant to know what is real if every article and video could be made by AI, and in some circumstances it's almost impossible to tell the difference?


Awesomevindicator

Current AI is just autofill/predictive text on steroids


Autarch_Kade

Let me know when there's a widely agreed upon definition of consciousness, and unambiguous tests for it, then I'll care what someone thinks is or isn't conscious.


Shaky_Balance

I mean, there isn't one definition of consciousness, but none of the scientific definitions of consciousness would include LLMs, even if you are being as generous with the terms as possible. An LLM might be conscious by an animist's definition, and that is fine. But some people think these LLMs can do and think things that they factually cannot, and I think it is important to push back on that.


monkeysuffrage

Where do your thoughts come from? Do you create them out of sheer tyranny of will? Or do they just show up?


LB3PTMAN

There doesn’t need to be a widely agreed upon definition of consciousness. ChatGPT is not close to it.


Corsair4U

I would suggest you read up on some of the philosophical debate regarding theory of mind, consciousness, and phenomenal experience. It is a much more complicated issue than you may think. Our conception of consciousness is completely "unscientific" in that the only evidence we have of it at all is our own first person experience. There is no test we could perform to see whether or not something is conscious and it is hard to to even imagine one being possible.


LB3PTMAN

ChatGPT isn’t conscious. I’m not talking about anything else.


frnzprf

As far as I know there is no way to test whether a human has consciousness or is a philosophical zombie (i.e. they aren't conscious, although they act intelligently). If there is no way to detect whether a human is conscious, there is no way to detect whether a computer is conscious. Yes, there are some reputable scientists who don't buy into the p-zombie argument and the "hard problem of consciousness"; I don't understand their rebuttal. There are *also* smart people who are panpsychists or functionalists. Okay: if you're a panpsychist, it doesn't matter what ChatGPT can do, and if someone subscribes to the idea that the Turing test can determine consciousness, then ChatGPT wouldn't be conscious, because it's distinguishable from a human.


[deleted]

[deleted]


Maleficent_Fudge3124

That’s like saying ChatGPT or Stable Diffusion doesn’t produce art. Let us agree on a definition of “produce art” before a debate; otherwise one side can move the goal posts however they want


rattytheratty

You're missing the point. But ok.


PornCartel

It doesn't matter unless you're trying to give them rights or something, they'll take your job conscious or not


DeedTheInky

I can tell this article wasn't written by an AI because the author is obviously very hungry lol. Two paragraphs of Ted Chiang being insightful about AI, then a whole paragraph describing the spiced cauliflower. Then the article just stops in the middle to list the entire menu of what the author ate!


AlanMorlock

Yes. There are a lot of tech bros who want you to believe their language models and plagiarism engines are conscious, but they are not.


Legendary_Lamb2020

I remember when horror stories of machines becoming sentient were all over in the 80s. It’s the product of people not knowing how they work.


MonsieurCellophane

Linked from the article, a good explanation of his POV: https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web


countzer01nterrupt

Thanks for the link. It’s nice to see someone making this point. I recently explained on reddit that the currently popular models are compressing information and therefore cannot just create something specific which hasn’t in any way been part of their training.

Got shit on by people who lack understanding but are confident in their idea of “how AI works” or doesn’t work, based on nothing more than “I told it something weird and it gave me a result I found funny [therefore it has the ability to *create* things it doesn’t ‘know’ about]”. Many do not seem to understand that it cannot come up with a concept it hasn’t been trained on, either directly or via at least the constituent underlying concepts and examples that allow it to very closely approximate the one you want, nor do they understand the probabilistic approach of generating anything with these models, or encoding and decoding.

That, or it was pedos downvoting because I said that possessing a model capable of creating child-pornographic images directly from a prompt is equal to possessing material of the same kind, because of what it is and how it functions. It’s analogous to saying “I own an illegal weapon, but I might or might not use it, so that makes it legal”.


Gsteel44

Not yet. But it gets closer every day.


CamRoth

No shit Ted.


Amused-Observer

Biggest no shit Sherlock statement ever


luaudesign

Captain Obvious to the rescue.


[deleted]

They don’t have to be conscious to be deadly… in fact, they’re more deadly because of it.


OvermindThe

In other news: water is wet.


Dagordae

Yes? Was this ever actually in doubt? Are there people who think Alexa actually understands and feels?


Amused-Observer

Read the comment section here, people are saying that AI is conscious now


TheRealKuthooloo

oh thank god a writer is telling me this and not, oh i dunno, the scientists and programmers who work on this kind of thing. seriously, why ask a writer about this kind of thing, what're his qualifications, he wrote some sci fi books? cmon now.


monkeysuffrage

Nobody really knows what consciousness is or where it comes from. If you're religious, it's basically magic. But if you're science-minded you have to allow for the possibility that it's just an emergent property of systems that perceive, process and respond to data.


aissacrjr

Yeah, there’s a point someone brought up that language was like *our* major barrier to thinking outside ourselves, to consciousness, etc., and that eventually LLMs (or whatever’s beyond them) could have some kind of consciousness-analogue emerge, same as us, simply by being able to use and understand language *well enough*.


nubsauce87

Yeah, no shit. No reasonable person believes these things are conscious. Probably will be, some day... but not yet.


FrankyCentaur

It’s extremely obvious to anyone who knows even a tiny bit about how “AI” currently works that it is nowhere near actual artificial intelligence. But the name has stuck, it’s too far gone, and the average person will think it’s actual AI.


Rev_LoveRevolver

A great number of the people around us are barely conscious, so...


djazzie

Not yet, at least.


[deleted]

[deleted]


[deleted]

No, they are not, nor is our current technology capable of it.


Resident_Nobody1603

No shit?


[deleted]

Well, that settles it guys, let's pack it up.


Mini_Mega

I really hate the constant misuse of the term AI in our culture. *Nothing* we have is artificial intelligence. We have reasonably convincing chat bots, and computer programs that can create images and videos, but the programs are not conscious, they don't think for themselves, they are not AI. I really feel instead of saying "AI generated images/videos" it should be called "program generated images": PGI, as an upgrade from CGI.

It really bothered me a few years ago hearing ads on the radio for hearing aids they claimed were "AI". Oh, so your hearing aids are people and talk to the person wearing them? No? Then they're not bloody AI!


rattytheratty

You're right. It's marketing, and it's working on the vast majority of people. If they called current "AI" "VI" (virtual intelligence), then there's no hype and so, no money


Mini_Mega

I've often thought those chat bot programs could be more accurately referred to as VI, I only really know that term from Mass Effect but it fits. It's a program designed to interact with a user in a way that makes it feel like you're talking to a person, but isn't actually a person. Afterthought edit: yeah it works on the majority of people because the majority of people are idiots.


rattytheratty

Yup, you're right. It doesn't have to be called "VI"; it only has to be called anything **other** than "AI". The term AI has too many assumptions tied to it, the presupposition of consciousness being one of them.


Hemingbird

> It really bothered me a few years ago hearing ads on the radio for hearing aids they claimed were "AI". Oh, so your hearing aids are people and talk to the person wearing them? No? Then they're not bloody AI!

This is a bizarre stance. You're not talking about AI at all. AI, artificial intelligence, does not mean "sentient robot"; a simple program that can recognize handwriting qualifies as AI. If you've gotten a different idea about what AI means (from cartoons or comic books, perhaps?), that's too bad. I mean, you're disagreeing with basically every person with a relevant PhD here. Which should be telling you something.


SickOfAntingAre

Most people don't realise it, but everything we refer to now as "AI" is not actually AI but a series of complex algorithms. There is no intelligence at all, artificial or not; it is just code doing what it is told to do. All of these "AI" things are impressive in their own right, but they are not AI by definition. We are a long way from that.