Tutorial: Why do double and float exist? [duplicate]



Question:

Duplicate

When should I use double instead of decimal?

... and many more...

We use the C# and SQL Server decimal datatypes throughout our apps because of their accuracy. We've never had any of those irritating problems where the total doesn't add up to the detail lines, etc.

I was wondering why double and float exist at all, given their inaccuracy.
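As a concrete illustration of the kind of drift the question is talking about (a minimal C# sketch, not part of the original post): summing 0.1 ten times in double does not land exactly on 1, while decimal does.

using System;

class FloatVsDecimal
{
    static void Main()
    {
        double doubleTotal = 0.0;
        decimal decimalTotal = 0.0m;

        for (int i = 0; i < 10; i++)
        {
            doubleTotal += 0.1;    // 0.1 has no exact binary representation
            decimalTotal += 0.1m;  // 0.1m is stored exactly in base 10
        }

        Console.WriteLine(doubleTotal == 1.0);        // False
        Console.WriteLine(doubleTotal.ToString("R")); // prints something like 0.9999999999999999
        Console.WriteLine(decimalTotal == 1.0m);      // True
    }
}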


Solution:1

Floating-point arithmetic arose because it is the only way to operate on a large range of non-integer numbers with a reasonable amount of hardware cost. Arbitrary-precision arithmetic is built into several languages (Python, Lisp, etc.) and available as libraries (Java's BigDecimal, GMP, etc.), and is an alternative for folks who need more accuracy (e.g. the finance industry). For most of the rest of us, who deal with medium-sized numbers, floats, or certainly doubles, are more than accurate enough. The two different floating-point datatypes (corresponding to IEEE 754 single and double precision, respectively) exist because a single-precision floating-point unit has much better area, power, and speed properties than a double-precision unit, so hardware designers and programmers can make the appropriate trade-offs to exploit these different properties.
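To make the single- vs double-precision trade-off concrete, here is a small C# sketch of my own (not from the answer): float carries roughly 7 significant decimal digits and double roughly 15-16, which is the accuracy you give up in exchange for the smaller, faster single-precision format.

using System;

class PrecisionDemo
{
    static void Main()
    {
        float  oneThirdF = 1.0f / 3.0f;
        double oneThirdD = 1.0 / 3.0;

        Console.WriteLine(oneThirdF.ToString("G9"));  // about 7 correct digits
        Console.WriteLine(oneThirdD.ToString("G17")); // about 15-16 correct digits

        // 16,777,217 (2^24 + 1) is the first integer a float cannot hold exactly;
        // its 24-bit significand rounds it to 16,777,216.
        Console.WriteLine(16777217f == 16777216f);    // True
    }
}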


Solution:2

They are much faster than decimal, and very often you don't need exact decimal precision anyway.
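A rough way to see the speed difference yourself (a hedged micro-benchmark sketch; absolute numbers depend on your machine and runtime, but double usually wins comfortably because it maps to hardware instructions while decimal arithmetic is done in software):

using System;
using System.Diagnostics;

class SpeedSketch
{
    static void Main()
    {
        const int iterations = 10000000;

        var sw = Stopwatch.StartNew();
        double d = 0;
        for (int i = 0; i < iterations; i++) d += 1.1;
        sw.Stop();
        Console.WriteLine("double:  " + sw.ElapsedMilliseconds + " ms (sum " + d + ")");

        sw.Restart();
        decimal m = 0;
        for (int i = 0; i < iterations; i++) m += 1.1m;
        sw.Stop();
        Console.WriteLine("decimal: " + sw.ElapsedMilliseconds + " ms (sum " + m + ")");
    }
}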


Solution:3

"decimal" is 128 bits, double is 64 bits and float is 32 bits. Back in the day, that used to matter.

Decimal is mostly for money calculations (to avoid rounding errors); the others are good enough for plenty of things where 28-29 significant digits of accuracy have no real-world meaning.
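The sizes (and decimal's digit count) can be checked directly in C#; a small sketch of my own:

using System;

class SizeDemo
{
    static void Main()
    {
        Console.WriteLine(sizeof(float));    // 4 bytes  (32 bits)
        Console.WriteLine(sizeof(double));   // 8 bytes  (64 bits)
        Console.WriteLine(sizeof(decimal));  // 16 bytes (128 bits)

        // decimal gives 28-29 significant decimal digits of precision:
        Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335
    }
}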


Solution:4

The drawback to the decimal datatype is performance.

This post covers it pretty well:

Decimal vs Double Speed

