Difference between numeric, float and decimal in SQL Server

What are the differences between the numeric, float, and decimal data types, and which should be used in which situations?

For any kind of financial transaction (e.g. for salary field), which one is preferred and why?

2 Answers

rahul07
The default maximum precision of numeric and decimal data types is 38. In Transact-SQL, numeric is functionally equivalent to the decimal data type. Use the decimal data type to store numbers with decimals when the data values must be stored exactly as specified.
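For example, a salary column would typically be declared as decimal with an explicit precision and scale. The table and column names below are only illustrative:

-- Illustrative only: a salary column declared as DECIMAL(10, 2) is stored exactly
CREATE TABLE Employee
(
    EmployeeId INT PRIMARY KEY,
    Salary     DECIMAL(10, 2) NOT NULL
);

INSERT INTO Employee (EmployeeId, Salary) VALUES (1, 12345.67);

SELECT Salary FROM Employee;   -- returns exactly 12345.67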

The behavior of float and real follows the IEEE 754 specification on approximate numeric data types. Because of the approximate nature of the float and real data types, do not use these data types when exact numeric behavior is required, such as in financial applications, in operations involving rounding, or in equality checks. Instead, use the integer, decimal, money, or smallmoney data types. Avoid using float or real columns in WHERE clause search conditions, especially the = and <> operators. It is best to limit float and real columns to > or < comparisons.
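As a quick sketch of why equality checks on float are unreliable (the variable names here are arbitrary), compare the same calculation done with float and with decimal:

DECLARE @f FLOAT = 0.1;
DECLARE @d DECIMAL(10, 2) = 0.1;

-- float: 0.1 cannot be represented exactly, so the comparison usually fails
SELECT CASE WHEN @f * 3 = 0.3 THEN 'equal' ELSE 'not equal' END AS FloatResult;

-- decimal: the value is stored exactly, so the comparison succeeds
SELECT CASE WHEN @d * 3 = 0.3 THEN 'equal' ELSE 'not equal' END AS DecimalResult;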
sandhya6gczb
They Differ in Data Type Precedence
Decimal and numeric are functionally the same, but data type precedence still applies, and it can matter in some cases:

SELECT SQL_VARIANT_PROPERTY(CAST(1 AS NUMERIC) + CAST(1 AS DECIMAL),'basetype')

The resulting data type is numeric because it takes data type precedence.
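To see where the numeric comes from, you can inspect each cast on its own (a quick check worth running on your own server):

SELECT SQL_VARIANT_PROPERTY(CAST(1 AS DECIMAL), 'basetype');   -- decimal
SELECT SQL_VARIANT_PROPERTY(CAST(1 AS NUMERIC), 'basetype');   -- numeric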
