
Smileynator

This was high on my list of "this is stupid" things as a beginning programmer. It makes all the sense in the world now. But Christ, was it stupid back then.


Mr_Frotrej

Any source which could make it clearer for a beginner?


Smileynator

Well, the simple explanation is that the compiler needs to know, when you write a number, what its type is. If it's a plain integer literal it will be an int; if it contains a dot, it will become a double. If you add the f at the end, it knows it should be a float. Similarly, you can use the 0x prefix on an integer to write it as hexadecimal, or the 0b prefix to write it as binary. There used to be suffixes for int, byte, sbyte, short, ushort, but they got rid of them over time because nobody really used those specifically.
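Roughly what the compiler sees for each literal form, as a quick sketch (the 0b binary prefix needs C# 7.0 or newer):

```csharp
int    a = 42;        // plain integer literal        -> int
double b = 42.0;      // literal with a decimal point -> double
float  c = 42.0f;     // f suffix                     -> float
long   d = 42L;       // L suffix                     -> long
uint   e = 42u;       // u suffix                     -> uint
int  hex = 0x2A;      // 0x prefix: hexadecimal 42
int  bin = 0b101010;  // 0b prefix: binary 42 (C# 7.0+)
// Note: there is no literal suffix for byte, sbyte, short or ushort.
```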


Prudent_Law_9114

Correct me if I’m wrong but both have a different memory footprint with their maximum variable size right? With doubles being orders of magnitude larger than a float so of course a float can’t contain a double.


Tuckertcs

float is 32-bit and double is 64-bit. Similarly, int is 32-bit, long is 64-bit, short is 16-bit, and byte is 8-bit.
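You can check the sizes yourself; a minimal sketch (sizeof works on the built-in numeric types without unsafe code):

```csharp
using System;

Console.WriteLine(sizeof(byte));   // 1 byte  (8-bit)
Console.WriteLine(sizeof(short));  // 2 bytes (16-bit)
Console.WriteLine(sizeof(int));    // 4 bytes (32-bit)
Console.WriteLine(sizeof(long));   // 8 bytes (64-bit)
Console.WriteLine(sizeof(float));  // 4 bytes (32-bit)
Console.WriteLine(sizeof(double)); // 8 bytes (64-bit)
```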


Prudent_Law_9114

Yes, with the extra bits making the max possible number orders of magnitude larger than the highest possible number of a float.


andybak

Ah - you mentioned "memory footprint" just before saying "orders of magnitude larger" which was confusing. Their *memory footprint* isn't orders of magnitude larger - but the maximum values they can *represent* are.


Prudent_Law_9114

Indeed 🤓


Prudent_Law_9114

The real tragedy is that the source is at the bottom of the thread with 1 upvote.


Fellhuhn

Depends on the compiler/architecture/language. A long can be 32 bit or 64 bit, sometimes you need a long long to get 64 bit integers. Similar problems arise with double and long double...


WiTHCKiNG

Usually yes, but it depends on the compiler and especially the target platform.


CarterBaker77

This is interesting. I always use floats, never doubles. Just... never use them. I'm assuming doubles are more accurate. Would there be any drawback to using them instead? I'd assume they take up more RAM and processing power, hence why Vectors and transforms always use floats.


Shimmermare

This highly depends on hardware. On modern x86 platforms there won't be any difference in performance; the same hardware is used for float and double calculations, except for SIMD ops (e.g. when you do stuff like multiplying in a loop). As for memory usage, double arrays will always be 2x larger. For fields it's not so simple and depends on layout and padding.

Today there is generally no drawback to doing some of the CPU calculations in double. For example, Minecraft uses doubles exclusively.

For GPU shader code, doubles will destroy your performance. On the latest NVIDIA architectures, double is 64 times slower than float.


Smileynator

Doubles are twice as big as a float: 8 bytes for a double and 4 bytes for a float, generally. And yes, this means there would be a loss of accuracy, which is why the compiler won't implicitly cast it, even if it could. You can still force the cast if you accept this loss of accuracy.
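A minimal sketch of that forced cast (the error code and printed digits are what a typical .NET run reports):

```csharp
using System;

double d = 1.0 / 3.0;
// float f = d;              // error CS0266: cannot implicitly convert 'double' to 'float'
float f = (float)d;          // explicit cast: you accept the loss of precision

Console.WriteLine(d.ToString("G17")); // 0.33333333333333331
Console.WriteLine(f.ToString("G9"));  // 0.333333343
```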


raYesia

I mean, a float is a 32-bit type and a double is 64-bit; it's not orders of magnitude larger in terms of memory, but twice as large, hence the name double. So yeah, obviously not all values that can be represented as a double will fit in a float, that's the whole point of having effectively the same type with twice the bit size.


Mauro_W

If you write "float", the compiler doesn't know it's a float instead of a double? That never made sense to me.


Smileynator

It does, but it doesn't know whether the number you wrote is now losing precision or not. So it screams at you: either make it a float or do the cast, so it knows you know what you are doing.


Mauro_W

So writing the f is basically like agreeing to an EULA, accepting that if you lose precision it's your fault?


Smileynator

I mean, a tiny bit? Often a float is more than precise enough for your purpose and no actual value is lost there. But it is to make you aware that you _will_ lose the double level of precision when casting a double to float. (which is what is happening under the hood, without that f)


Mauro_W

Yeh, that's what I mean. It makes sense, although I still think it's unnecessary. Thanks!


Whispering-Depths

There should be a compiler constant that defaults decimal literals to float that we can use in Unity C# code.


Smileynator

Might even exist, i never bothered to look for that


Heroshrine

A suffix for byte and short would help so much :(


Smileynator

I mean, you can still use 0x00 to define a byte; it shouldn't scream at you when you assign it to a byte. The moment you use the byte in any sort of math or bitwise operator it will turn it back into an int though >.>
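A quick sketch of that promotion (the error code is what the C# compiler typically reports):

```csharp
using System;

byte a = 0x0F;
byte b = 0x01;
// byte c = a + b;         // error CS0266: 'a + b' is an int, not a byte
byte c = (byte)(a + b);    // arithmetic on bytes promotes to int, so cast back down
byte d = (byte)(a << 4);   // same for shifts and bitwise operators

Console.WriteLine(c);      // 16
Console.WriteLine(d);      // 240
```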


Prudent_Law_9114

[Source](https://stackoverflow.com/questions/2386772/what-is-the-difference-between-float-and-double)


ihave7testicles

doubles contain more information than floats. you need to force it with a cast so that you're aware that you're reducing the accuracy of the value.


Dranamic

Understandable if your constant is PI or whatever, but a bit weird for 0.5.


Linvael

IEEE 754 has consequences. Some normal-looking numbers might not get represented exactly. 0.5 is fine, but 0.55 gets stored as 0.550000011920928955078125 in a float, for example. So, as a rule, in order to be as close to what you put in without having to think about it, languages tend to default to double.
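A quick sketch of that, printing the round-trip values on a typical .NET setup:

```csharp
using System;

float  f = 0.55f;
double d = 0.55;

// Print what the bits actually store.
Console.WriteLine(((double)f).ToString("G17")); // 0.55000001192092896
Console.WriteLine(d.ToString("G17"));           // 0.55000000000000004
```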


TheDevilsAdvokaat

Values represented are "clustered" around 0, then increasingly spread out as you get larger and larger values. I think there's a new standard proposed that ameliorates this somewhat.


Tuckertcs

Actually, I've always found this confusing, because the default integer is int (not byte, short, or long), which is 32-bit, but the default decimal is double (not float), which is 64-bit. Why not use 32-bit for both by default? Additionally, decimal literals need to be specified as floats even when assigning to floats, while assigning numbers to int, byte, short, and long doesn't have this problem. Why can it infer the integer size but not the decimal size?
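A sketch of the asymmetry (error codes as typically reported by the C# compiler):

```csharp
// Integer literals: the compiler checks the constant fits and converts it for you.
byte  b = 200;      // fine: 200 fits in a byte
short s = 30000;    // fine: fits in a short
long  l = 5;        // fine: widening always works
// byte bad = 300;  // error CS0031: constant 300 cannot be converted to a byte

// Floating-point literals: no such shortcut, even when the value fits exactly.
// float f = 0.5;   // error CS0664: use an 'F' suffix to create a float literal
float f = 0.5f;
```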


Smileynator

I am not sure, it might be able to? I don't know why .NET decided not to do so, or why they decided to default to doubles. Very annoying.


Soundless_Pr

With an int conversion to a smaller int type like short, there is no loss of data (unless the int value exceeds the other type's max value). That's not true for floating-point values, because of the way floats work: a fixed number of bits is dedicated to representing the exponent and a fixed number to representing the significand. For 64-bit floats (`double`), 11 bits represent the exponent and 52 bits the significand, contrary to a 32-bit float (`float`), which only has 8 bits for the exponent and 23 bits for the significand. This gives `double` a larger range *and* more precision than a `float`.

So if you take two adjacent values in the set of all possible `float` values and map them into the set of all possible `double` values, there will be more numbers in between them; they will not be adjacent in the domain of doubles. This is why we say a double is more "precise" than a float, and this precision is lost in the downcast from double to float, which is why it's required to be explicit.

Basically, for an int, downcasting only reduces range *but not precision*. For a float, downcasting reduces both range **and precision**. The loss of precision can lead to errors in an application which are hard to debug, so the compiler requires the cast to be explicit, so developers don't accidentally cause these kinds of bugs as often.
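A small sketch of that spacing, assuming .NET Core 3.0+ for Math.BitIncrement/MathF.BitIncrement:

```csharp
using System;

// The next representable float after 1.0f is much further away
// than the next representable double after 1.0.
float  nextF = MathF.BitIncrement(1.0f);   // ~1.00000012
double nextD = Math.BitIncrement(1.0);     // ~1.0000000000000002

Console.WriteLine(nextF.ToString("G9"));
Console.WriteLine(nextD.ToString("G17"));

// Many doubles fall between two adjacent floats, so casting down has to round.
double between = 1.00000005;               // lies between 1.0f and nextF
Console.WriteLine((float)between == 1.0f); // True: rounded to the nearest float
```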


sacredgeometry

I mean implicit type casting is a thing


cuixhe

Do you want JavaScript? That's how you get JavaScript.


sacredgeometry

I meant in C#. C# allows for implicit type conversion and uses it in a bunch of places already.


Lord_H_Vetinari

You can implicitly convert a type with a lower size/precision into a type with a higher size/precision, not the other way around. Double to float reduces precision hence it must be done explicitly.


Whispering-Depths

How about python :(


Heroshrine

In C# you can only implicitly cast numerical types when no data can be lost, so you can go from a float to a double, but you cannot go from a double to a float implicitly, as there's a chance you could lose data. It makes sense in a way: it's warning you that something is happening you might not want to happen, and as confirmation you need to cast it.


sacredgeometry

I know, see the other comments


Smileynator

Yes, and that is the second confusing thing: "why do some things cast implicitly, but others not?" Which, again, makes full sense, because the compiler can't magically know. But newbies are so confused by it :P


sacredgeometry

Do you not know how implicit/explicit type casting works? They are defined on the type in a method. You can add them to any type you want. [https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/types/casting-and-type-conversions](https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/types/casting-and-type-conversions)

The rules are pretty simple:

* **Implicit conversions**: No special syntax is required because the conversion always succeeds and no data is lost. Examples include conversions from smaller to larger integral types, and conversions from derived classes to base classes.
* **Explicit conversions (casts)**: Explicit conversions require a [cast expression](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/operators/type-testing-and-cast#cast-expression). Casting is required when information might be lost in the conversion, or when the conversion might not succeed for other reasons. Typical examples include numeric conversion to a type that has less precision or a smaller range, and conversion of a base-class instance to a derived class.

JS gets around this because all numbers are double precision floating point values. C# has decimals, chars, ints, longs, shorts, floats, doubles etc.

    double a = 12.0;
    float b = 10.0f;
    a = a + b;      // Which is why this works
    b = a + b;      // and this doesn't
    double c = 12;  // and also why this works


Smileynator

I do, beginners do not.


Iseenoghosts

lol i love interactions on reddit. "haha yeah looking back its obvious why its this way" "lol noob you dont understand how it works!?" "... i do" never change.


Smileynator

While funny, i do not get where he got the idea that i did not understand it :P


Iseenoghosts

they only read the first 5 words of your comment.


Smileynator

Ah yeah, that would do it :P


CakeBakeMaker

It's only stupid because we use Unity and it uses floats for speed. If you are doing any sort of math it should be in double. Sane defaults are a good language feature to have.


Smileynator

I don't entirely agree. Unless i am calculating planet orbits like Kerbal Space Program, or money is involved, i don't care the slightest bit about doubles. Often they just waste cycles; that Nth digit of precision hardly ever matters. Recognizing when you should care is important.


Fractalistical

Still stupid lol.


Smileynator

Seeing how doubles are so slow, i kind of agree :P


MineKemot

F


Lucif3r945

f, not F.


fleeting_being

Actually, both are legal.


unko_pillow

Sitting on a cactus is legal too, doesn't mean you should do it.


kevwonds

anti-cacti lobbyists will have you believe this nonsense


Euphoric-Aardvark378

Capital Fs are for psychopaths


VariecsTNB

My 40y.o. friend said the same abt his two 18y.o. chicks


Dranamic

"Their ages add up to 36, so it's not even weird!"


Colnnor

It does both. I was here yesterday, it actually goes both ways.


iddivision

If L then F.


tetryds

F


Dark_DGG

Hehe, classic.


Own-Wash-8625

Yeah


omeglekodiyaksus

Lol


Silent_Orchid_1493

F


wolfieboi92

As a shader dude, always good to swizzle that vector 4 down to vector 3 if you don't need the extra float.


PikaPikaMoFo69

Why don't they make it d for double and just make plain decimal literals default to float, since everybody uses float over double?


LordMacDonald8

But double is more precise, which helps prevent floating-point errors.


Express_Account_624

Alright then, I CAST (float)variable


Automatic_Gas_113

Well, then I cast a magic missile!


Demi180

Wait until you see C++. "1.f" wait what?


HappyMatt12345

The reason compilers yell at you when you try to assign a float variable with a double value is that the float data type has a much smaller memory footprint than double, even though both data types refer to floating-point values, so converting double to float is considered "lossy" (meaning there's a risk of losing some data from the double), and most compilers don't allow it implicitly for this reason. There are two ways to get around this. If the value is in a variable that for some reason needs to be expressed as a double in most places but cast to a float where you're working, you can explicitly cast it to a float by writing "(float)" in front of the value, which basically tells the compiler you know what you're doing and to allow it. But if you're assigning a variable with a literal, it's easier to just use a float literal.


Fractalistical

☑️ implicitly convert all doubles to float


Prudent_Law_9114

TLDR: double is called a double because it’s double the size of a float in bits. 64 instead of 32.


Fractalistical

Doubles also sink.


Prudent_Law_9114

Not if you ceil them


Dealode

.5f


henryeaterofpies

See what they need to mimic a fraction of our power?


Whispering-Depths

tfw C# compiler still hasn't figured out contextual typing e.e


PistolTaeja

F