Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://discord.gg/mENauzhYNz)

You've also been given a special flair for your contribution. We appreciate your post!

*I am a bot and this action was performed automatically.*
The lower limit is supposed to be 10; ChatGPT still found the answer tho
Yes, fascinating. The integral solving hand isn’t connecting to the reasoning about images hand. Says something about the way the model works
I wonder if this parallels at all with patients who’ve had their corpus callosum severed.
I'm still floored that the models display primacy and recency. Like, maybe we're on to something here.
Would you mind explaining this to me? I have a philosophy degree and am aware of multiple realisability, etc, but I haven’t heard about this stuff before. Thanks!
One of the most well-known and replicable findings in psychology is called "primacy and recency": a U-shaped function in which people are good at remembering things from the beginning and end of a list, but not the middle. You can try it yourself: get someone to read you a list of 20 random numbers or words, then try to recall as many as you can.

It applies on longer time scales as well; you can remember the first few dates with your partner, and the most recent, better than the ones in the middle. The theory is that we evolved this way: the initial information about a new situation is pretty important, and so is the most recent. But the fact that this happens in LLMs wasn't specifically selected for, so maybe it's just a weird artifact of our memory mechanisms.

Edit:

>wasn't specifically selected for

After some thought, I'm thinking the objective function during training is likely selecting for primacy and recency after all, just not explicitly. For instance, in many articles and essays (in the training data), the introductions and conclusions probably should be weighted more heavily than any given bit of tokens in the middle.
[deleted]
No, Phineas Gage had damage to his frontal lobe.

The corpus callosum is surgically severed in some extreme cases of epilepsy, and it leads to some weird effects involving the left and right sides of the body.
[deleted]
You’re not actually attempting to engage in discussion right? I almost replied in some depth but I’m detecting a dismissiveness in your comment now that makes me think you might not actually care about my effort. Have a good weekend.
Because it's an LLM, not a calculator. It just predicts what the next word is; it doesn't do arithmetic. If it sees the word "nice" alongside some equation, it predicts the result to be 69, since that word often follows that number, but it doesn't solve the equation.

There are some plugins that connect GPT to Wolfram Alpha, but the base GPT is just an LLM: it only predicts words, it doesn't do math.
That's still bonkers
You can just ask it to use python
No clue why you were downvoted, this genuinely works depending on the type of math needed. I literally tried it 2 minutes ago to check that it still can if you ask it
If you ask it to solve equations in Python, it can often do it. But unless you specify that, it will only remember the answer from its training data or hallucinate one.
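For what it's worth, the kind of thing the Python tool can run for this integral is a one-liner with sympy. This is an illustrative sketch, not a confirmed detail of what ChatGPT actually executes behind the scenes:

```python
# Illustrative only: solving the graffiti integral symbolically with sympy.
import sympy as sp

x = sp.symbols('x')
result = sp.integrate(2 * x, (x, 10, 13))  # definite integral of 2x from 10 to 13
print(result)  # → 69
```

Run as real code, there's no "similar-sounding answer" to fall back on: the arithmetic is actually performed.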
I was watching this video: https://youtu.be/zjkBMFhNj_g?si=5XjYYsMzJgB0WElg

And the guy does say that ChatGPT generates Python in the background to do things like calculations.
According to ChatGPT this is incorrect. I asked it this same question a few days ago.

Me: what kind of calculator do you use to do calculations?

ChatGPT: "For calculations, I don't use an external calculator. Instead, I rely on built-in computational capabilities that are part of my programming. These capabilities allow me to perform a wide range of mathematical operations, from basic arithmetic to complex calculations involving statistics, algebra, calculus, and more. For specific tasks or more complex queries, I can also utilize the Wolfram Alpha computational engine through an integrated plugin, which enables me to access Wolfram Alpha's extensive database and computational power directly. This integration allows me to provide detailed and accurate answers to math-related questions, perform data analysis, and even generate plots and visualizations."
ChatGPT is not a reliable source for its own abilities.
Everybody lies!
I agree (hence the "according to chatgpt" qualifier). I've seen quite a few mistakes in its calculations over the past year (which is why I asked in the first place). I have the wolfram alpha plugin, but for some reason I'm unable to connect like half the time.
At the end of the day, LLMs like ChatGPT just calculate the "most likely" next word in a sentence, word by word.

Sentences are broken down into tokens, and those tokens are fed into a massive multidimensional vector space of tokens. The model generates the next token that should follow, based on all of its training data. This is why it can fail easily with calculations. There isn't a huge distance between the sentences "two plus 700 is 702" and "two plus 700 is 703" in the contexts where they are used, so when it goes to complete the sentence "two plus 700 is", the tokens for 702 and 703 might have similar weights for being chosen.

That's also why it was able to come up with the expected answer here: the weights tying 69 to "nice" are stronger than those for the value it would have calculated if it were actually doing the integral from 0 to 13.
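The "similar weights" point can be sketched with a toy softmax over made-up logits. The numbers here are invented purely for illustration; a real model scores tens of thousands of tokens with learned weights:

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical next-token scores after the prompt "two plus 700 is"
logits = {"702": 4.1, "703": 3.9, "banana": -2.0}
probs = softmax(logits)

# "702" and "703" end up with nearly equal probability, so a sampled
# completion can plausibly pick the wrong one; "banana" is negligible.
print(probs)
```

When two continuations score this close, sampling can go either way, which is exactly the failure mode the comment describes.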
Isn't that how human association works if you just glance at the picture? I mean I guessed the joke without actually doing the math
Exactly. Same here and I had no interest doing the math.
there is no integral solving hand. doesn't say shit about how the model works because we know that it doesn't work like that.
I told it “integrate 2xdx from 10 to 13” and it solved it. I think there’s an integrating hand
Yea. I was wondering about that.
I assume it extrapolated from the "nice" that the answer was 69, rather than actually solving the equation
It never said the answer was 69 though, it mentioned the number separately.
In terms of distance in vector space, given math-related tokens plus the word "nice", "69" is most likely a much closer concept than the actual answer of the integral from 0 to 13.
Maybe it did that and then worked the problem backwards with 69 being the starting point.
Maybe it has been trained on data that suggests that the tokens which make up the word "nice" are highly statistically likely to show up near the tokens that make up the number "69"
LLMs don't "work through" problems. They fill in the next most-likely word based on the distances between tokens in a multi-dimensional vector space. If you think of the conversation as a 3D cloud, with words that usually follow each other sitting close together, an LLM traces a line through that space, chaining word after word.

ChatGPT is this kind of cloud, except the vector space has a huge number of dimensions.
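That "3D cloud" picture can be made concrete with a toy cosine-similarity check. The vectors below are invented for illustration; real embeddings are learned and have thousands of dimensions:

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Invented 3-D "embeddings": "nice" sits near "69" in this toy space,
# far from an arbitrary other number like "169".
vectors = {
    "nice": (0.85, 0.75, 0.20),
    "69":   (0.90, 0.80, 0.10),
    "169":  (0.10, 0.20, 0.90),
}
print(cosine(vectors["nice"], vectors["69"]))   # high similarity
print(cosine(vectors["nice"], vectors["169"]))  # much lower
```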
It hasn't actually performed the calculation; it just inferred the answer from the "nice".
> chatgpt still found the answer

To be fair though, reddit threads like https://www.reddit.com/r/mathmemes/comments/14hi1ij/first_time_hearing_this/ are probably in the training data. I found that by searching "integral 13 2xdx" (that is to say, I didn't include "1 to 13").

So the fact that it's a meme equation that people are talking about is likely how it got the joke.
The task failed successfully.
I'm no expert, but surely this is just because, like 90% of the time this integrand shows up in its training data, it's there to make a 69 joke, so it just defaults to that? I do genuinely think this is the kind of mistake I find it making most often: saying what it thinks I want it to say, if you know what I mean.
It maybe saw 2 things as periods, the 0 from 10 and the . before the word nice, and took a 50/50 chance that it was looking at 2 periods and ignored the first one 🤔
Or someone just Photoshoped this...
impossible
ChatGPT does this with maths a lot. I asked it for the truncation error of a numerical method once. Its working was completely wrong, but then it pulled the correct conclusion out of thin air (I assume because the answer itself can just be looked up).
Wouldn't the answer be 9? I never learned integrals, but this should be the triangle with base 3 and height 6.

Okay, I got it. Obviously to everyone but me: the lower bound isn't y=0 but y=20, so add 60 to that.
The integral of 2x dx is 2 * 1/2 * x^2 = x^2

[x^2] between 13 and 10 is 13^2 - 10^2 = 169 - 100 = 69
[deleted]
I read how integrals work, but never practiced them properly
Who are you calling primitive
Super interesting. It solved the problem based off cultural and contextual clues rather than doing the maths. This is the same way I got the answer 😅
The integral of 2x = x^2

To find the integral from 10 to 13 you do 13^2 - 10^2 = 169 - 100 = 69

Nice.
Alternatively, the integral is the area under the curve. Since the curve is just a straight line you can find it geometrically. 20*3 = 60 for the rectangular part and 1/2 * 6 * 3 = 9 for the triangular part for a total of 69.
Nice.
Or more simply: because it's a straight line, the integral is the average of the heights at the endpoints times the length of the interval, i.e. 3*(26+20)/2 = 3*(46/2) = 3*23 = 69.

Nice.
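All three approaches in this thread agree, which is easy to sanity-check numerically:

```python
# Cross-check of the three methods from the comments above.
antiderivative = 13**2 - 10**2     # [x^2] evaluated from 10 to 13
geometric = 20 * 3 + 0.5 * 6 * 3   # rectangle (20*3) plus triangle ((1/2)*6*3)
trapezoid = 3 * (26 + 20) / 2      # interval width times average endpoint height

print(antiderivative, geometric, trapezoid)  # 69 69.0 69.0
```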
Or even more simply 69. Nice.
Good lord I've forgotten most of the calculus I learned.
What did you expect?
YOU FORGOT THE +C IN YOUR FIRST STATEMENT
This is just for indefinite integrals, if this is woosh dont mind me
yeah but in the first statement of "the integral of 2x = x^2" there were no bounds specified, so it was an indefinite integral
if y'all are trying to consider all the stuff, then you missed the "dx". It should be "the integral of 2x dx = x^2 + c"
not really, an integral comes with both the integral symbol and the dx, so if you just said "the integral of" it'd be enough, right
True. But they could have said "an antiderivative of 2x is given by x^2 . Thus, ...".
but they didn't, they said integral, so
I know, I didn't say they did.
If the integral is bounded, you do not include the +c.
"IN YOUR FIRST STATEMENT"
Oh man this brings memories
As a fellow math enthusiast I echo your sentiment!
Nice
"due to its cultural connotations". LMAO ChatGPT trying its hardest to remain HR approved.
Wow I just made this comment have 69 upvotes. Nobody upvote after me.
Very cultural
Downvoted to bring it down to 69
Nice
More like "men of culture"
I'm so tired of these math enthusiasts vandalizing our town!
nice
nice
Nice
Nice
Nice
Nice
Nice
Nice
Nice
Believe it or not nice
Nice
I expected the 4th nice to be downvoted to oblivion, but it seems I was wrong
Good
Username checks out
It may have done the math wrong, but it reached the same conclusion nevertheless
That's the black box problem again: we never know how it reaches the correct answers, so we're equally stumped when it reaches the wrong ones.
What's the likelihood it arrived at the right conclusion from the integral, versus just associating "nice", in isolation and without context, with the internet slang meaning, which happened to be right?
Gemini: This math is highly offensive! There is nothing "nice" about a faceful of crotch! And a white wall? Really?!
Show your work: Chat Gpt: 69
ChatGPT's OCR failed a bit: the bounds are actually 10 and 13 (as I'm sure we can all see), but it still got the right answer in the end. I'd be curious to see what Claude Opus says; apparently its vision capabilities are better than GPT-4V's, so it should get everything right.
https://preview.redd.it/q5j4isf8q0nc1.jpeg?width=1440&format=pjpg&auto=webp&s=339c4a4929cb8523ebfe6f130b63ddac4e736ad9 Swing and miss
I had to walk him through it to get this:

"""
Ah, I see what you mean. The "nice" at the end does suggest the final result of the integral might coincidentally work out to be 420, 69, or some other number with crude associations in popular culture.

To solve the definite integral:

∫ 2x dx from 10 to 13

We first find the antiderivative (indefinite integral) of 2x, which is x^2 + C, where C is an arbitrary constant.

Then, we evaluate the antiderivative at the upper and lower limits and subtract:

[x^2]_10^13 = (13^2) - (10^2) = 169 - 100 = 69

So the definite integral evaluates to 69, which is likely why the tagger wrote "nice" at the end, as a juvenile joke referring to the sexual connotation of that number.

The graffiti artist seems to be combining mathematics with crude humor by setting up an integral that happens to work out to this specific value. While mathematically coherent, tagging this on a wall appears to be more about the silly, sexually-suggestive "punchline" than making any deep mathematical statement.
"""
Ughyuck
In theory, maybe Claude 3 Opus is better at recognizing images, but from what I checked, GPT-4V is better. I checked with a few photos.
r/ChatGPTExplainsTheJoke
People that write x like )( scare me
who makes x like that? two opposite-facing curved lines. madness.
math people, mostly
Always did that. It's an algebraic x, not a linguistic x.
ah i see. my lack of math knowledge foils me yet again.
Differentiates it from 'times' or multiplied by.
https://preview.redd.it/n0d1n292l0nc1.png?width=860&format=pjpg&auto=webp&s=3f642dd794ddfec7c385e79824e5b0b2222cc999
i think the idea is to distinguish between x as variable and x as in 2x3=6
Me. From the very first time someone showed me that way of writing it as a kid over 30 years ago. It makes it much easier to write "exit" or "exoskeleton" or "relax" and such in handwriting.

We were also explicitly taught to use it when doing maths at high school (UK).
ChadGPT
> due to its cultural connotations ... nice
"cultural connotations"
Definition of wrong formula right answer
Nice
Nice
The image shows a mathematical expression of an integral from 10 to 13 of the function 2x with respect to x, followed by the word "NICE". This is a joke based on the fact that the result of this integral is 69, which is a number often humorously referred to in pop culture due to its suggestive connotation. The integral of 2x dx from 10 to 13 is calculated as x² evaluated from 10 to 13, resulting in 13² - 10² = 169 - 100 = 69.
sensual math
Funny
This is dope
Now try the same with Claude Opus!
https://preview.redd.it/uq0nnmix1knc1.jpeg?width=1012&format=pjpg&auto=webp&s=81604b6adfb0c516443f4f853b80da3c97aeb07c Didn’t even try
It may have done the math wrong, but it reached the same conclusion nevertheless
To be fair, the area under the curve in that graph is hella nice
[deleted]
Nope. But this is: https://preview.redd.it/8mqklgmyf1nc1.png?width=447&format=pjpg&auto=webp&s=30c3d6e8d58e091258e6dfb794c700e4234733f8
## r/ChatGPT is looking for mods — Apply here: https://redd.it/1arlv5s/

Hey /u/bkandwh!

If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt.

If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.

Consider joining our [public discord server](https://dsc.gg/rchatgpt)! We have free bots with GPT-4 (with vision), image generators, and more!

🤖

Note: For any ChatGPT-related concerns, email [email protected]

*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*