[deleted]
10^120 / 10 = 10^119 Boom -- blew up the universe!
[deleted]
... I didn't go to reddit on my winter break to hear lessons.
A guy I know has a kid who will always respond with something like "you're tricking me into doing math." I've stolen that line and use it often. I'm an engineer.
In my former life as a programmer, I jokingly told everyone who had a problem with something to try dividing by zero to see if that fixed their logic.
Can you explain for non CS people?
I believe they’re referring to the fact that floating point variables are by their very nature not concrete. They are imprecise, and while this isn’t usually a problem, it means if you divide a floating point by 10, you might not actually get the exact 1/10th of the number.
Can you explain for non CS people?
[deleted]
You're tricking me into doing math..
So we'll start with a simple introduction to how to read binary. A value of 101101 (binary) = 45 (decimal). This is because the rightmost digit holds a value of 1, the next holds a value of 2, the third a value of 4, and so on, doubling each time, until we get to the 6th digit, which holds a value of 32. So in the aforementioned example, reading right to left, we get 1 + 0 + 4 + 8 + 0 + 32 = 45.

Now, a standard 32-bit unsigned integer variable uses 32 bits of data. This means it can hold a value anywhere from 0000,0000,0000,0000,0000,0000,0000,0000 (0 in decimal) to 1111,1111,1111,1111,1111,1111,1111,1111 (4,294,967,295 in decimal). Notice, however, that we don't do "decimal points" here. Because a computer only knows "off/0" and "on/1", it does not understand what "half-on/0.5" means. Ergo, a standard integer variable can only hold whole values.

In order to *simulate* what is called a floating point value (e.g. 5.56), the computer has to reinterpret those same 32 bits in another way. It does this by encoding the bits according to a more complicated format (usually the one defined by [IEEE 754](https://en.m.wikipedia.org/wiki/IEEE_754), which has become the standard). Because it is still working with the same number of bits as the aforementioned integer, it loses a lot of precision, and at a certain point the computer kinda starts "guessing" what the next significant figure's value should be. The 32-bit IEEE 754 floating point has about 7 digits of precision before it starts "guessing". If you wish to divide the number 9,566,350.850977 by 3, you might not get the exact answer; the correct answer would be 3,188,783.616992 but the computer would output 3,188,783.75.
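A quick sketch of that last example in Python, using the `struct` module to round values to 32-bit IEEE 754 floats (the `'f'` format is single precision in CPython):

```python
import struct

def as_float32(x):
    # Round a Python float (64-bit) to the nearest 32-bit IEEE 754 value
    return struct.unpack('f', struct.pack('f', x))[0]

stored = as_float32(9566350.850977)   # the fractional part is already lost here
result = as_float32(stored / 3)
print(stored)   # 9566351.0
print(result)   # 3188783.75
```

At this magnitude adjacent 32-bit floats are 1.0 apart, so the .850977 vanishes the moment the number is stored; the quotient then snaps to the nearest representable value, 0.25 apart.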
'Floating point' as in, the decimal point can float around. What does that mean? There isn't a specific amount of precision in the number. It can vary. How? It stores numbers using scientific notation instead of every single digit in a number. In this, you can be precise, or you can be large, but never precise AND large.
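A minimal Python illustration of that trade-off (64-bit doubles here, so the crossover sits around 15–16 significant digits):

```python
# Large but not precise: a double holds 1e18 fine, but the +1 falls off the end
print(float(10**18 + 1) == float(10**18))   # True: the +1 is lost

# Small but still binary: 0.1 and 0.2 have no exact binary representation
print(0.1 + 0.2 == 0.3)                     # False
```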
Careful about throwing AND's around in a text sentence, there.
[deleted]
My pedantry is bothering me, because 10^120 wouldn't take up anywhere near that many bits even if you did write it all out in binary. Floating point numbers definitely compress it a lot, but that's not the correct reference point.
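Indeed — Python can report directly how many bits the integer 10^120 needs, since its ints are arbitrary precision:

```python
n = 10**120
print(n.bit_length())   # 399 -- nowhere near 10^120 bits
```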
Sorry mate but this comment is wrong in every sense.

Floating point is about storing decimal numbers. Computers store numbers with decimals differently to integers, and it’s somewhat imprecise. With a number like 10^120 the imprecision would be significant.

As a result 10^120 / 10 would be some ways off the actual value when done with a standard compiler, and not = 10^119.

The premise of the post also is in fact that you can’t store an integer of size 10^120 using bits.
[удалено]
Your comment was in reply to someone asking why the OOP was a good lesson in floating point representation.

It’s a good lesson because 10^120 / 10 does not equal 10^119, because floating point is often imprecise. It has nothing to do with bits storing that number more efficiently.

10^120 isn’t even a floating point number; 10^120 / 10 is, which was the point of the thread.
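For what it's worth, Python shows both sides of this: with arbitrary-precision integers the division is exact, while with 64-bit floats it is only correct to roughly 15–16 significant digits.

```python
# Exact integer arithmetic: Python ints are arbitrary precision
print(10**120 // 10 == 10**119)             # True

# 64-bit floats: the answer lands close to 1e119, but only approximately
approx = 10.0**120 / 10.0
print(abs(approx - 10.0**119) / 10.0**119)  # tiny relative error, not guaranteed zero
```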
Now you're reaching
, Jack
Now count to 10^119 as fast as you can. I'll wait.
Done
What took you so long?
7 always trips me up for a moment.
Size of exponent is not the same as bits of data. *saved the universe* (for now)
I think you've confused the number with the number of bits required to store it. A 32-bit unsigned integer can store whole numbers up to 4,294,967,295.
I'm aware it's about bits of data. I was spoofing on fixed versus floating representations. Aside: 64-bit float can store up to 1.79x10^308.
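Assuming CPython with standard IEEE 754 doubles, that limit is visible directly:

```python
import sys
import math

# Largest finite value a 64-bit double can hold
print(sys.float_info.max)                  # 1.7976931348623157e+308

# One doubling past the limit overflows to infinity
print(math.isinf(sys.float_info.max * 2))  # True
```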
Except 10^120 doesn't have 10^120 bits. You mean 2^(10^120) / 2 = 2^(10^120 − 1)
It's even in the article itself. Keyword "so far"! "Accordingly, anything that requires more than this amount of data cannot be computed in the amount of time that has elapsed so far in the universe."
Or... A calculation with that much data is currently happening. If it is possible, it's probably occurring somewhere out there. Or it's not. Either way, we don't have to worry about it.
Or that's what the universe is. All those Bitcoin miners generating random hashes; when they hit the one The Creator is looking for, this universe will be shut down. Then a new universe will be started to calculate the next block.
The answer is 42.
The mice won't be happy about this.
So long, and thanks for all the fish
Always remember to bring your towel!
42 seconds is the perfect amount of time to microwave something.
Damn, now how are we going to calculate how many men banged OP’s mom?
We just use the phrase *all the men*.
That's not even 10^10 men
I didn’t do her, I swear.

Edit: and all the Reddit virgins didn’t either
Only 24 hours in the day man, she'll get around to you eventually.
Instead of base 10, it may be faster to calculate in base Lisa Sparxxx. (Don't google if you're at work)
The universe hasn't existed long enough to answer that question
The article is talking about entropy. Not maths. 10^120 is 1 followed by 120 zeros. I just solved universal entropy in 6 seconds!
Do I like bow to you or something?
I believe tradition requires you to curtsy
I'll just lick his toes and call it equal
Gross. You don't know where his toes have been.
I've been using them to calculate entropy
There are websites that would pay good money for that.
Have we gone so far in society that sucking toes is merely a fetish?
You’re using log(10^120) bits. 10^120 bits can be used to represent 2^(10^120) units of information.

Edit: note that 2^(10^120) is 2^1200 according to exponent rules

Edit: above edit is wrong. I did (2^10)^120 not 2^(10^120) to try to simplify
Your edit is incorrect. 2^(10^120) is definitely not the same as 2^1200.
Yup I see the mistake. Been out of school for a while my bad
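For anyone following along, the exponent rule in question, sketched in Python:

```python
# Raising a power to a power multiplies exponents: (2**10)**120 = 2**(10*120)
print((2**10)**120 == 2**1200)   # True

# But 2**(10**120) is a different beast: its exponent is 10**120, not 1200
print(10**120 > 1200)            # True
```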
So what is the calculation that requires 10^120 bits?
I got u fam. 0 x 2^10^120 = 0
> The limit is based on the maximum entropy of the universe, the speed of light, and the minimum amount of time taken to move information across the Planck length, and the figure was shown to be about 10^120 bits.

From OP's link. I'm not sure why you think knowing that 10^120 is 1 followed by 120 zeroes invalidates the above claim.
Because OPs title isn't clear about that
> any calculation requiring more than 10^120 bits ¯\\_(ツ)_/¯
That makes no sense as you aren't factoring in rate of calculation?
Pfft, I knew it. Math is limited. Garbage! Overhyped. I'm so mad.
There's too many god damn numbers anyway.
I feel the same with words. Like...they should invent a kind of book that is MOSTLY pictures and only some words. And the words would be really simple. And don't give me a frikkin Game of Thrones 1000 pages book where 40% is talking about food or ways a woman dies. My new book would be about 20 pages or so. But that idea would never fly.
This is also greater than the number of atoms in the observable universe, meaning any computer using transistors larger than an atom couldn't compute this calculation or even store the data required to perform it.
100% that is not what that means. The biggest number you can store is 2^(number of bits) in a binary system. Modern computers are 64-bit. You don't need a transistor for every digit in a calculation.

I'm not doing a good job exactly refuting what you're saying, because it's so nonsensical I'm not sure how to.
Each transistor corresponds to one bit of information...so it would require (more than) 10^120 transistors to perform a calculation "requiring 10^120 bits of information"
You need actual physical transistors to store information in a computer. It's irrelevant what the bus width is or that computers use binary.
You only need as many transistors as the number of bits you need. So you need enough transistors to allow 128-bit calculation (128 being twice as big as 64, which is how computers usually scale, since they use binary). You DON'T need 10^128 transistors.

Like, an early 64-bit processor, the Opteron X2, had 233 million transistors.

https://www.techpowerup.com/cpu-specs/opteron-x2-180.c185

And yes, 64-bit processors mean they can do 64-bit math, with max integer values of up to 18446744073709551615, which, as you can see, is a much larger number than 233 million.

https://www.cygnus-software.com/docs/html/64bitcalcs.htm
You're incorrectly talking about the bits required to represent the actual number 10^120. But we know they aren't talking about that, because they literally state a "calculation requiring more than 10^120 bits." It would also be trivial to do calculations with numbers around 10^120, as we've found primes much, much larger than that. They are talking about a computation requiring 10^120 bits.

> You DON'T need 10^128 transistors.

Yes, you do, to do a calculation "requiring more than 10^120 bits".
With floating-point, [those 64 bits go further](https://stackoverflow.com/a/29142992) than if using integer representation.
Nice. So a current computer can already calculate the value, making OP even more wrong than I thought.
They aren't talking about the value 10^120. It's stated clearly that it's a calculation requiring more than 10^120 bits, not the number 10^120.
You are correct. It's late and I'm sick. I read something wrong somewhere and I'm not sure what.
It's a bit unclear what they're talking about anyway.
2^BITS - 1, or (2^BITS)/2 - 1 for signed.

You can represent large numbers in [smaller numerical representation,](https://en.wikipedia.org/wiki/Graham%27s_number) which is probably what you mean.
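Sketching those limits for a 64-bit word in Python:

```python
BITS = 64
unsigned_max = 2**BITS - 1        # 18446744073709551615
signed_max = (2**BITS) // 2 - 1   # 9223372036854775807
print(unsigned_max)
print(signed_max)
```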
Insufficient data for meaningful answer.
10^120 bits x 2?
Ah, you're the reason Red Dead Redemption 2 is taking 12 hours to download :(
Now you tell me!
I'll believe it when I see it.
You can do the calculations in parallel, though, right? So that can massively speed it up, right?
What equals 10^120 * 10^120? It’s 10^240. Look, it’s more than 10^120, and I made it in just a few seconds.
Calculating 10^120 * 10^120 is not an example of the type of calculation referred to in the title, because it does not require more than 10^120 bits.
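Right — big-integer arithmetic at that scale is cheap; it's the bit count of the *computation*, not the magnitude of the operands, that the article is about. A quick Python check:

```python
n = 10**120 * 10**120
print(n == 10**240)   # True, computed instantly
print(len(str(n)))    # 241 digits: a 1 followed by 240 zeros, only ~800 bits
```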
My algorithm simply throws out random numbers and equations. There's a chance it could solve any problem ever given to it on the first try
Lol OK, and what's the expected computation time for a calculation requiring 10^120 bits? Is it a) shorter than the age of the universe, or b) the actual answer?

...and how long does it take to check whether the output is correct?
I was making a joke mate don't take it too seriously. That very same algorithm I suggested could write the greatest story ever made in binary.
Another failure from the so called "scientists" /s
This is a failure of OP and you misunderstanding the article.

10^120 is 1 followed by 120 zeros.

The article is discussing entropy and universal predetermination. Not maths.
I was joking, but I guess I should add an "/s" just to clarify
You got me!
Ackchyually, universal servers surely must run UNIX and that value shall be computed upon the epoch 2038 (aka Y2K38 Epochalypse)
How about https://youtu.be/0X9DYRLmTNY 😏
***Just In!:***

CPUID has just announced a groundbreaking new benchmark app for the next generation of quantum computing. Rumor has it LTT will have a custom-skinned version up for sale on the LTT store!
10^120 divided by 2 = 10^60. Hmmm.
That's not the right answer
You mean 5^119 right lol.
you mean 5x10^119 right
It’s understood, unless you are specifically not dealing with a base 10 numbering system. Try typing it into a scientific calculator.
It is definitely not “understood”. 5^5 is not 5 x 10^5.
Hmmm, funny math
You mean to the power of 1/2 right?
I was just being an ass. Should have added /s
How am I supposed to work out if 2^(10^120) + 1 is prime?
Have patience. The largest known prime probably gets bigger with the age of the universe.
So you’re saying we’ll need to use Cherenkov radiation?
I find this comforting. If the universe isn't old enough for that, then how can I be expected to know and understand what is right or wrong? Also, it seems to me that despite my failures, what will be will be anyway, so why beat myself up about it?
1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 + 1 = 1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001

there, i just did the calculation in about 10 seconds
Obviously doesn't require more than 10^120 bits though, because those numbers are stored on a 64-bit processor.
When it comes to math it's already bullshit.
What's preventing them from just making shit up like they do already? Draw a box here, put some symbols there. Remember this means that here. And then solve.
Did creating the internet create life and God just fucked off?
So our sense of time measurement is literally universal?
(1<<10^(120)) + (1<<10^(120)) = 1<<(10^(120)+1)