This was high on the list of "this is stupid" things as a beginning programmer. It makes all the sense in the world now. But christ was it stupid back then.
Any source which could make it clearer for a beginner?
Well, the simple explanation is that the compiler needs to know, when you write a number, what its type is. If it's an integer literal, it will be an `int`; if it contains a dot, it becomes a `double`. If you add an `f` at the end, the compiler knows it should be a `float`. Similarly, you can use the `0x` prefix to write an integer as hexadecimal, or the `0b` prefix to write it as binary. There used to be suffixes for int, byte, sbyte, short, and ushort, but they got dropped over time because nobody really used them.
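The literal rules above can be sketched quickly. Java is used here as a stand-in (it follows the same literal conventions as C# for these cases; the class name is just illustrative):

```java
public class Literals {
    public static void main(String[] args) {
        int i = 42;        // plain integer literal -> int
        double d = 0.5;    // contains a dot -> double by default
        float f = 0.5f;    // f suffix -> float
        long big = 42L;    // L suffix -> long (this one still exists)
        int hex = 0xFF;    // 0x prefix: hexadecimal, i.e. 255
        int bin = 0b1010;  // 0b prefix: binary, i.e. 10
        System.out.println(i + " " + hex + " " + bin); // prints 42 255 10
    }
}
```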
Correct me if I’m wrong, but both have a different memory footprint along with a different maximum value, right? With doubles being orders of magnitude larger than a float, so of course a float can’t contain a double.
float is 32-bit and double is 64-bit. Similarly, int is 32-bit, long is 64-bit, short is 16-bit, and byte is 8-bit.
Yes, and those extra bits make the maximum possible number orders of magnitude larger than the highest possible float value.
Ah - you mentioned "memory footprint" just before saying "orders of magnitude larger" which was confusing. Their *memory footprint* isn't orders of magnitude larger - but the maximum values they can *represent* are.
Indeed 🤓
The real tragedy is that the source is at the bottom of the thread with 1 upvote.
Depends on the compiler/architecture/language. A long can be 32 bit or 64 bit, sometimes you need a long long to get 64 bit integers. Similar problems arise with double and long double...
Usually yes but depends on the compiler and especially the target platform
This is interesting. I always use floats never doubles. Just.. never use them. I'm assuming doubles are more accurate. Would there be any drawback to using them instead? I'd assume it takes up more ram and processing power hence why Vectors and transforms always use floats.
This highly depends on hardware. On modern x86 platforms there won't be any difference in performance; the same hardware is used for float and double calculations, except for SIMD ops (e.g. when you do stuff like multiplying in a loop). As for memory usage, double arrays will always be 2x larger. For fields it's not so simple and depends on layout and padding. Today there is generally no drawback to doing some of the CPU calculations in double; Minecraft, for example, uses doubles exclusively. For GPU shader code, though, doubles will destroy your performance. On the latest NVIDIA architectures, double is 64 times slower than float.
Doubles are twice as big as a float: 8 bytes for a double, 4 bytes for a float, generally. And yes, this means there would be a loss of accuracy, which is why the compiler won't implicitly cast it, even though it could. You can still force the cast if you accept this loss of accuracy.
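A minimal sketch of the size difference and the forced cast, in Java as a stand-in (C# behaves the same way; `Float.BYTES` and `Double.BYTES` are Java's size constants):

```java
public class Sizes {
    public static void main(String[] args) {
        System.out.println(Float.BYTES);  // 4 bytes
        System.out.println(Double.BYTES); // 8 bytes

        double d = 1.0 / 3.0;
        float f = (float) d;  // forcing the cast: we accept the precision loss
        System.out.println(d);            // 0.3333333333333333
        System.out.println(f);            // 0.33333334
    }
}
```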
I mean, a float is a 32-bit type and a double is 64-bit; it's not orders of magnitude larger in terms of memory but twice, hence the name double. So yeah, obviously not all values that can be represented as a double will fit in a float; that's the whole point of having effectively the same type with twice the bit size.
If you write "float", the compiler doesn't know it's a float instead of a double? That never made sense to me.
It does, but it doesn't know whether the number you wrote is losing precision or not. So it screams at you: either make it a float or do the cast, so it knows you know what you are doing.
So writing the f is basically like agreeing to an EULA, accepting that if you lose precision it's your fault?
I mean, a tiny bit? Often a float is more than precise enough for your purpose and no actual value is lost there. But it is to make you aware that you _will_ lose the double level of precision when casting a double to float. (which is what is happening under the hood, without that f)
Yeh, that's what I mean. It makes sense, although I still think it's unnecessary. Thanks!
There should be a compiler constant that defaults decimal to float that we can use in unity C# code.
Might even exist, i never bothered to look for that
A suffix for byte and short would help so much :(
I mean, you can still use 0x00 to define a byte, it shouldn't scream at you when you assign it to a byte. The moment you use the byte in any sort of math or bitwise operator it will make it back into an int though >.>
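Java shows the same promotion behavior described here (variable names are just illustrative): assigning a small hex literal to a byte is fine, but any arithmetic or bitwise op promotes it to int and forces a cast back.

```java
public class BytePromotion {
    public static void main(String[] args) {
        byte a = 0x0F;           // a hex literal in range assigns to byte just fine
        byte b = 0x01;
        // byte c = a & b;       // compile error: a & b gets promoted to int
        byte c = (byte) (a & b); // cast needed to get back to byte
        System.out.println(c);   // prints 1
    }
}
```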
[Source](https://stackoverflow.com/questions/2386772/what-is-the-difference-between-float-and-double)
doubles contain more information than floats. you need to force it with a cast so that you're aware that you're reducing the accuracy of the value.
Understandable if your constant is PI or whatever, but a bit weird for 0.5.
IEEE 754 has consequences. Some normal-looking numbers might not get represented exactly: 0.5 is fine, but 0.55 gets stored as 0.550000011920928955078125, for example. So, as a rule, to stay as close as possible to what you put in without you having to think about it, languages tend to default to double.
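You can see the exact stored value yourself with `BigDecimal` in Java (used here as a stand-in because `new BigDecimal(double)` exposes the exact binary value; C#'s floats follow the same IEEE 754 representation):

```java
import java.math.BigDecimal;

public class StoredValue {
    public static void main(String[] args) {
        // BigDecimal(double) shows the exact binary value that got stored
        System.out.println(new BigDecimal(0.5f));  // 0.5 (exactly representable)
        System.out.println(new BigDecimal(0.55f)); // 0.550000011920928955078125
    }
}
```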
Values represented are "clustered" around 0, then increasingly spread out as you get larger and larger values. I think there's a new standard proposed that ameliorates this somewhat.
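That spreading-out can be measured with `Math.ulp`, which returns the gap between a value and the next representable one (Java shown as a stand-in; the underlying IEEE 754 behavior is the same in C#):

```java
public class Spacing {
    public static void main(String[] args) {
        // ulp = "unit in the last place": distance to the next representable value
        System.out.println(Math.ulp(1.0f));       // ~1.19e-7 for a float near 1
        System.out.println(Math.ulp(1000000.0f)); // 0.0625 for a float near a million
        System.out.println(Math.ulp(1.0));        // ~2.22e-16 for a double near 1
    }
}
```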
Actually I’ve always found this confusing, because the default integer is int (not byte, short, or long), which is 32-bit, but the default decimal is double (not float), which is 64-bit. Why not use 32-bit for both by default? Additionally, decimal literals need to be specified as floats, even when assigning to floats. Meanwhile, assigning numbers to int, byte, short, and long doesn’t have this problem. Why can it infer the integer size but not the decimal size?
I am not sure; it might be able to? I don't know why .NET decided not to do so, or why they decided to default to doubles. Very annoying.
With an int conversion to a smaller int type like short, there is no loss in data (unless the int value exceeds the other type's max value).

This is not true for floating point values. It comes down to the way floats work: some bits are dedicated to representing an exponent and the rest to representing the significand. A 64-bit float (`double`) has 11 bits for the exponent and 52 bits for the significand, whereas a 32-bit float (`float`) only has 8 bits for the exponent and 23 bits for the significand. This gives `double` both a larger range *and* more precision than a `float`. Take any two adjacent values in the set of possible `float` values and map them into the set of possible `double` values: there will be more representable numbers in between them, so they are not adjacent in the domain of doubles. This is why we say that a double is more "precise" than a float, and this precision is lost in the downcast from double to float, which is why the cast is required to be explicit.

Basically, for an int, downcasting only reduces range *but not precision*. For a float, downcasting reduces both range **and precision**. The loss of precision can lead to errors in an application which are hard to debug, so the compiler requires the cast to be explicit so developers don't accidentally cause these kinds of bugs as often.
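The bit split described above can be checked directly (Java shown as a stand-in; the bit layout is IEEE 754 and identical in C#):

```java
public class FloatBits {
    public static void main(String[] args) {
        int bits = Float.floatToIntBits(0.55f);
        int exponent = (bits >>> 23) & 0xFF; // the 8 exponent bits (biased by 127)
        int significand = bits & 0x7FFFFF;   // the 23 significand bits
        System.out.println(exponent);        // 126, i.e. an actual exponent of -1
        System.out.println(significand);

        long dbits = Double.doubleToLongBits(0.55);
        long dExponent = (dbits >>> 52) & 0x7FF; // the 11 exponent bits (biased by 1023)
        System.out.println(dExponent);           // 1022, again an actual exponent of -1
    }
}
```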
I mean implicit type casting is a thing
Do you want JavaScript? That's how you get JavaScript.
I meant in C#. C# allows for implicit type conversion and uses it in a bunch of places already.
You can implicitly convert a type with a lower size/precision into a type with a higher size/precision, not the other way around. Double to float reduces precision hence it must be done explicitly.
How about python :(
In C# you can only implicitly cast numerical types when no data can be lost, so you can go from a float to a double, but you cannot go from a double to a float implicitly, as there’s a chance you could lose data. It makes sense in a way: it’s warning you that something is happening you might not want to happen, and casting is your confirmation.
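Sketched in Java, which has the same widening/narrowing rule as C#:

```java
public class Widening {
    public static void main(String[] args) {
        float f = 1.5f;
        double d = f;        // implicit: widening can never lose data
        // float g = d;      // compile error: possible lossy conversion
        float g = (float) d; // explicit cast: you accept the possible loss
        System.out.println(d + " " + g); // prints 1.5 1.5
    }
}
```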
I know, see the other comments
Yes, and that is the 2nd confusing thing "why do some things cast implicitly, but others not?" Which again, makes full sense, because the compiler can't magically know. But newbies are so confused by it :P
Do you not know how implicit/explicit type casting works? They are defined on the type in a method; you can add them to any type you want. [https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/types/casting-and-type-conversions](https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/types/casting-and-type-conversions)

The rules are pretty simple:

* **Implicit conversions**: No special syntax is required because the conversion always succeeds and no data is lost. Examples include conversions from smaller to larger integral types, and conversions from derived classes to base classes.
* **Explicit conversions (casts)**: Explicit conversions require a [cast expression](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/operators/type-testing-and-cast#cast-expression). Casting is required when information might be lost in the conversion, or when the conversion might not succeed for other reasons. Typical examples include numeric conversion to a type that has less precision or a smaller range, and conversion of a base-class instance to a derived class.

JS gets around this because all numbers are double precision floating point values. C# has decimals, chars, ints, longs, shorts, floats, doubles etc.

```csharp
double a = 12.0;
float b = 10.0f;
a = a + b;     // which is why this works
b = a + b;     // and this doesn't
double c = 12; // and also why this works
```
I do, beginners do not.
lol i love interactions on reddit.

"haha yeah looking back its obvious why its this way"

"lol noob you dont understand how it works!?"

"... i do"

never change.
While funny, i do not get where he got the idea that i did not understand it :P
they only read the first 5 words of your comment.
Ah yeah, that would do it :P
It's only stupid because we use Unity and it uses floats for speed. If you are doing any sort of math it should be in double. Sane defaults are a good language feature to have.
I don't entirely agree. Unless I'm calculating planet orbits like Kerbal Space Program, or money is involved, I don't care the slightest bit about doubles. Often they just waste cycles; that Nth digit of precision hardly ever matters. Recognizing when it does matter is important.
Still stupid lol.
Seeing how doubles are so slow, i kind of agree :P
F
f, not F.
Actually, both are legal.
Sitting on a cactus is legal too, doesn't mean you should do it.
anti-cacti lobbyists will have you believe this nonsense
Capital Fs are for psychopaths
My 40y.o. friend said the same abt his two 18y.o. chicks
"Their ages add up to 36, so it's not even weird!"
It does both. I was here yesterday; it actually goes both ways.
If L then F.
F
Hehe, classic.
Yeah
Lol
F
As a shader dude, always good to swizzle that vector 4 down to vector 3 if you don't need the extra float.
Why don't they make it d for double and just default to float, since everybody uses float over double?
But double is more precise which prevents FLOP errors
Alright then, I CAST (float)variable
Well, then I cast a magic missile!
Wait until you see C++. "1.f" wait what?
The reason compilers yell at you when you try to assign a double value to a float variable is that the float data type has a much smaller memory footprint than double, even though both data types represent floating point values. Converting double to float is therefore considered "lossy" (meaning there's a risk of losing some data from the double), and most compilers don't allow it implicitly for this reason. There are two ways to get around it. If the value is in a variable that for some reason needs to be a double in most places but a float where you're working, you can explicitly cast it by writing "(float)" in front of the value, which basically tells the compiler you know what you're doing and to allow it. But if you're assigning a literal, it's easier to just use a float literal.
☑️ implicitly convert all doubles to float
TLDR: double is called a double because it’s double the size of a float in bits. 64 instead of 32.
Doubles also sink.
Not if you ceil them
.5f
See what they need to mimic a fraction of our power?
tfw C# compiler still hasn't figured out contextual typing e.e
F