Precision loss is one thing, but precision gain???
I have a text file w/ the following coordinates:
41.88694340165634 -87.60841369628906
When I paste this into SQL Server Mgmt Studio table view, it results in this:
41.886943401656339 -87.608413696289062
Am I dreaming? How is this possible?
I'm pasting from notepad, and it's raw text. Same problem if I type the characters directly.
Where does sql server get the extra precision from?
It's not adding precision; it's rounding your value to the nearest IEEE 754 double-precision representation. When that binary value is converted back to decimal for display, it only LOOKS like it gained precision.
According to Books Online:
Float: Approximate-number data types for use with floating point numeric data. Floating point data is approximate; therefore, not all values in the data type range can be represented exactly.
Emphasis mine.
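You can reproduce this inside SQL Server itself; a minimal sketch (STR rounds to the requested number of decimal places, max 16):

DECLARE @lat float = 41.88694340165634;
-- The literal is rounded to the nearest IEEE 754 double on assignment;
-- STR then prints the longer decimal expansion of that same binary value.
SELECT STR(@lat, 25, 15);   -- should print 41.886943401656339, the digits SSMS showed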
I haven't personally seen this, but it might just be that SQL Server Management Studio shows the "representation" of the float, i.e. how the value is actually stored in the db. Remember that floats are approximate anyway.
I have a SQL table which has two columns, Latitude and Longitude, with values 29.47731 and -98.46272 respectively. The column datatype is float in SQL Server.
When we retrieve the record using EF Core like _context.Table.ToListAsync(), it retrieves the record, but when the value is converted to the C# equivalent double? datatype, it shows extra digits like -98.4627199999999.
How can I avoid this? It should return the same value as stored in the database.
SQL Float and C# Double are imprecise data types: they store very close approximations of the number, but often not the exact number. If you need to maintain the exact values, you should use the SQL Numeric data type, which allows you to specify the number of significant digits; and on the C# side, Decimal is a base-10 floating-point type, so it can hold a value like -98.46272 exactly, which Double cannot.
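A minimal sketch of the difference, using a table variable (the names are illustrative, not from the question):

-- float keeps the nearest binary approximation; decimal keeps the digits as written
DECLARE @coords TABLE (lng_float float, lng_exact decimal(9,5));
INSERT INTO @coords VALUES (-98.46272, -98.46272);
SELECT STR(lng_float, 20, 13) AS float_expanded,  -- ≈ -98.4627199999999
       lng_exact              AS decimal_exact    -- -98.46272
FROM @coords;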
If you find that your data isn't stable and you are dealing with decimal values, it is probably a precision issue.
Try taking a look at these:
SQL:
https://learn.microsoft.com/en-us/sql/t-sql/data-types/decimal-and-numeric-transact-sql?view=sql-server-ver15
C#
https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/floating-point-numeric-types
Those should guide you through using the most precise notation for your situation :-)
I had a table with two columns that coordinates were stored in. These columns were of the REAL datatype, and I noticed that my application was only showing 5 decimals for coordinates, so positions were not accurate enough.
I decided to change the datatype to FLOAT so I could use more decimals. To my pleasant surprise, when I changed the column data type, the decimals suddenly appeared without me having to store all the coordinates again.
Can anyone tell me why this happens? What happens to the decimal precision in the REAL datatype? Isn't the data rounded and truncated when inserted? Why did the precision come back with no loss of data when I changed the datatype?
You want to use a Decimal data-type.
Floating-point values are stored as a significand and an exponent. This lets you represent a huge range of numbers in a small amount of memory. It also means that you don't always get exactly the number you're looking for, just something very, very close. This is why, when you compare floating-point values, you compare them within a certain tolerance.
To my pleasant surprise, when I changed the column data type, the decimals suddenly appeared without me having to store all the coordinates again.
Be careful: this doesn't mean that the value that was filled in is the accurate value you were looking for. If you truncated your original calculation, you need to obtain those numbers again without cutting off any precision. The digits that appear when you convert from REAL to FLOAT aren't the rest of what you truncated; they are just the longer decimal expansion of the same 32-bit binary value that was already stored.
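You can see this with a plain cast; a minimal sketch (the digits shown assume a correctly rounded conversion of the literal):

-- Widening real (float(24)) to float(53) preserves the binary value exactly;
-- the "new" decimals are just the longer expansion of that same value.
DECLARE @r real = 29.47731;
SELECT STR(CONVERT(float, @r), 25, 16);
-- 29.4773101806640625: the exact single-precision value, not the original text 29.47731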
Here is a good thread that explains the difference in data-types in SQL:
Difference between numeric, float and decimal in SQL Server
Another helpful link:
Bad habits to kick : choosing the wrong data type
I have the value 1555.4899999999998 stored in a float column with default precision (53). When I do a simple select, SSMS rounds the output instead of printing it with all available precision. It's caused some gotchas for me recently since the value printed doesn't work as a float literal to match the actual stored value.
For example (note that these two literals round to distinct values in the default float),
declare @f1 float;
declare @f2 float;
set @f1 = 1555.49;
set @f2 = 1555.4899999999998;
select @f1, @f2;
select STR(@f1,30,15), STR(@f2,30,15);
Outputs:
1555.49 1555.49
1555.490000000000000 1555.489999999999800
In Query Analyzer, that first select outputs:
1555.49 1555.4899999999998
That's the behavior I want to get from Management Studio. Is there a way to prevent SSMS from rounding in its result display?
No.
SQL Server Management Studio rounds floating-point values for display purposes; there was a Connect suggestion to change this behavior, but it was closed as "By Design". (Microsoft Connect, a public issue tracker for Microsoft software, has since been retired.)
However, SQLCMD, osql and the Query Analyzer do not.
SQLCMD -E -S server -Q"SELECT CONVERT(FLOAT, 1555.4899999999998)"
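On SQL Server 2016 and later, CONVERT style 3 also exposes the full value inside SSMS, though the output is in scientific notation:

-- Style 3 always emits 17 significant digits (lossless round-trip)
SELECT CONVERT(varchar(30), CONVERT(float, 1555.4899999999998), 3);
-- e.g. 1.5554899999999998e+003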
Is there a reason you would rather use a float type than a decimal type? Floats are stored as binary fractions (a significand and an exponent), which causes them to often be slightly inaccurate when doing operations on them. This is okay in a graphics application where the inaccuracy is much less significant than the size of a pixel, but it's a huge issue in something like an accounting application where you're dealing with money.
I would venture to say that the accuracy of a decimal is more important to most applications than any benefit in speed or size they would get from using a float.
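A quick sketch of that drift, summing a tenth ten thousand times:

-- The float total drifts away from 1000 because 0.1 has no exact
-- binary representation; the decimal total stays exact.
DECLARE @f float = 0, @d decimal(18,4) = 0, @i int = 0;
WHILE @i < 10000
BEGIN
    SET @f = @f + 0.1;
    SET @d = @d + 0.1;
    SET @i = @i + 1;
END;
SELECT STR(@f, 25, 12) AS float_total,   -- slightly above 1000
       @d AS decimal_total;              -- exactly 1000.0000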
Edit to my original post: I was trying to solve a slightly different problem when I ran across this page. I just wanted a float type to return a "regular" number, not scientific notation.
I originally suggested two nested CONVERTs, but wound up with this in the end. The 7 below could be changed to show as many decimal places as you would like. The REPLACE and trim calls get rid of leading and trailing zeroes.
REPLACE(RTRIM(LTRIM(REPLACE(STR(x, 20, 7),'0',' '))),' ','0')
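For example, substituting a value from the thread above for x:

DECLARE @x float = 1555.4899999999998;
SELECT REPLACE(RTRIM(LTRIM(REPLACE(STR(@x, 20, 7), '0', ' '))), ' ', '0');
-- 1555.49 (rounded to 7 decimal places, leading/trailing zeroes trimmed)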
I'm working on a SQL Server database and need to make sure the numeric types are large enough for the data.
There are two sources: Java double and Oracle float(126).
The SQL Server types are either numeric(30,10) or numeric(15,8).
I read that Java doubles cover magnitudes from about 4.9E-324 (the smallest positive value) up to 1.7976931348623157E+308, and that Oracle float(126) has approximately 38 digits of precision, while SQL Server numeric(30,10) has only 30 digits of precision and allows 10 digits after the decimal point.
Am I correct in saying that numeric(30,10) is not safe for double and float(126) and could lead to loss of precision or overflow. Should these be changed to DOUBLE PRECISION?
SQL's NUMERIC is a decimal type. DOUBLE PRECISION is a binary float type, typically mapped to IEEE 754 double precision (64 bits), which is exactly the format used by Java's double, so that should be a perfect match. FLOAT is also a binary type in the SQL standard, so it should be present on SQL Server as well, but its maximum precision is implementation-dependent, and if it's smaller on SQL Server there isn't really anything you can do.
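On SQL Server that means float(53), which is IEEE 754 double precision. A sketch of both points (the table and column names are illustrative):

-- float(53) is the same 64-bit IEEE 754 format as a Java double,
-- so values round-trip without loss.
CREATE TABLE Measurements (reading float(53) NOT NULL);

-- numeric(30,10) leaves only 20 digits before the decimal point,
-- so a large double overflows it:
DECLARE @big float = 1e25;              -- 26 integer digits
SELECT CONVERT(numeric(30,10), @big);   -- arithmetic overflow error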
I am attempting to use Entity Framework and have a contact database that has Longitude and Latitude data from Google Maps.
The guide says that this should be stored as float.
I have created my POCO entity which includes Longitude as a float and Latitude as a float.
I have just noticed that in the database, these both come up as real.
There is still a lot of work to be done before I can get any data back from Google, so I am very far away from testing. I was just wondering if anyone can tell me whether this is going to be a problem later on?
Nope, that should be fine. Note that you may not get the exact same value back as you received from Google Maps, as that would have been expressed in decimal-formatted text. However, this is effectively a physical quantity, and thus more appropriate as a float/double/real/(whatever binary floating point type you like) than as a decimal. (Decimals are more appropriate for man-made quantities - particularly currency.)
If you look at the documentation for float and real you'll see that real is effectively float(24) - a 32-bit floating binary point type, just like float in C#.
EDIT: As noted in comments, if you want more than the significant 7-8 digits of accuracy provided by float, then you probably want double instead in C#, which would mean float(53) in SQL Server.
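If you want double precision on the SQL side, declare it explicitly; a sketch with illustrative names:

-- float with no precision defaults to float(53) (C# double);
-- real is shorthand for float(24) (C# float)
CREATE TABLE Contacts (
    Latitude  float(53) NOT NULL,
    Longitude float(53) NOT NULL
);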
This link:
http://msdn.microsoft.com/en-us/library/aa258876(v=sql.80).aspx
explains that, in SQL Server, real is a synonym for float(24), using 4 bytes of data. In .NET, a single-precision floating-point number (Single) also uses 4 bytes, so these are pretty much equivalent:
http://msdn.microsoft.com/en-us/library/47zceaw7(v=vs.71).aspx