I am converting a SQL Server DB to PostgreSQL and I saw some columns declared as FLOAT in SQL Server. I am about to just declare those columns FLOAT in PostgreSQL too, but I wonder if that conversion is good enough. Would it be better to declare them FLOAT(20) in PostgreSQL, or is plain FLOAT enough?
Both DBMSs interpret FLOAT with no precision parameter as FLOAT(53), i.e. a full-precision, double-precision float. A precision of 24 or less switches the internal representation to a single-precision float, which seems to be your case given you have a FLOAT(20).
In both Postgres and MS SQL, the column is stored internally as either a REAL (32 bits) or a DOUBLE PRECISION (64 bits). The precision parameter does not buy you a mantissa of arbitrary length; it merely selects which of those two types is used (1-24 maps to REAL, 25-53 to DOUBLE PRECISION).
FLOAT(20) is therefore equivalent on both DBMSs (a REAL with a 24-bit mantissa), and plain FLOAT would give you an extra 29 bits of precision, which you may or may not need/want. FLOAT/REAL/DOUBLE PRECISION columns are pretty uncommon in SQL, though; most people use a NUMERIC instead, and the Postgres documentation actually points this out. Given this is a migration, however, you might want to be conservative and keep the floating-point type.
https://www.postgresql.org/docs/current/datatype-numeric.html#DATATYPE-FLOAT
https://learn.microsoft.com/en-us/sql/t-sql/data-types/float-and-real-transact-sql?view=sql-server-ver15
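For illustration, a quick sketch of declarations that behave the same in both systems (table and column names are made up):
create table t (
    a float,      -- FLOAT(53): 64-bit double precision in both DBMSs
    b float(20),  -- FLOAT(1) through FLOAT(24): 32-bit REAL in both
    c float(30)   -- FLOAT(25) through FLOAT(53): 64-bit double precision again
);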
Related
I am working with a SQL Server database where I need to move data from a column of type real into a column of type float in another table. From my understanding, real is essentially just float with less precision (24 bits instead of the default 53). Therefore, when casting from real to float I would expect to gain precision, or at least not lose any of the source value's precision. However, some precision actually seems to be lost when doing this.
Why does this happen and is there a way to at least keep the precision of the source values when doing this?
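For illustration, a minimal repro of the behavior being described (my own sketch, not the asker's original query):
select cast(2.1 as real) as r, cast(cast(2.1 as real) as float) as f;
-- r is displayed as 2.1, but f is displayed as something like 2.09999990463257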
I am confused why SSMS rounds real values when displaying them but does not do the same for float values
The nearest single-precision floating-point number (real) to 2.1 is something like 2.0999999. So it makes sense to display it as 2.1.
The nearest double-precision floating-point number (float) to 2.1 is quite a long way from 2.0999999046325684, which is what you get when you convert that real to float (the conversion widens the value but preserves its bits exactly). So SSMS has no reason to round it back to 2.1.
SSMS will display floats close enough to 2.1 as 2.1, e.g.
select cast(2.1 as float), cast(2.1 as float) - 0.000000000000001
is displayed as
---------------------- ----------------------
2.1 2.1
Here's a paper that reviews algorithms for this conversion and presents a new one: Printing Floating-Point Numbers Quickly and Accurately with Integers
Just as an addition to David Browne's answer:
It looks like direct casting doesn't help you. You can get 'correct' (better) results by casting through a character type, like this:
select cast(cast(Valuef as nvarchar(20)) as float)
As an example, select cast(cast(cast(2.1 as real) as nvarchar(20)) as float) displays just 2.1. This works because the default float-to-string conversion emits at most six significant digits, so the binary noise is rounded away before the value is parsed back as a float.
I have a SQL table which has two columns, Latitude and Longitude, with the values 29.47731 and -98.46272 respectively. The column datatype is float in SQL Server.
When we retrieve the records using EF Core, e.g. _context.Table.ToListAsync(), the records come back, but when a value is converted to the C# equivalent double? datatype, extra digits appear, like -98.4627199999999.
How can I avoid this? It should return the same value as is in the database.
SQL float and C# double are imprecise data types: they store very close approximations of a number, but often not the exact number. If you need to maintain exact values, you should use the SQL numeric data type, which lets you specify the number of significant digits; and while the C# decimal data type is still imprecise in general, it will provide a much higher-precision approximation than double does.
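If the coordinates really are fixed-precision decimals, the schema-side fix is to store them as decimal; a sketch, assuming a table named Locations (the real table name is not given in the question):
alter table Locations alter column Latitude decimal(9,6);   -- 3 digits before the point, 6 after
alter table Locations alter column Longitude decimal(9,6);
EF Core will then map the columns to C# decimal instead of double?.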
If you find that your data isn't stable and you are dealing with decimal values, it is probably a 'precise notation' issue.
Try taking a look at these:
SQL:
https://learn.microsoft.com/en-us/sql/t-sql/data-types/decimal-and-numeric-transact-sql?view=sql-server-ver15
C#:
https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/floating-point-numeric-types
Those should guide you through using the most precise notation for your situation :-)
I have a PostgreSQL database and am using it for a project which handles monetary values. My question is:
How do I limit the number of decimal digits to two for a Double Precision or float8 type column in a postgresql table?
Simply use the decimal (numeric) type instead, as documented in the manual, e.g. cost decimal(10,2).
The first number (10) is the precision: the total number of digits, counting those on both sides of the decimal point. The second number (2) is the scale: how many of those digits come after the decimal point. The declaration above therefore gives you 8 digits in front of the point. Of course, you can increase that; the limits of the numeric type are much, much higher.
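A minimal sketch of the behavior (table and column names are made up):
create table expenses (cost numeric(10,2));
insert into expenses (cost) values (123.456);  -- rounded on insert to the declared scale
select cost from expenses;                     -- returns 123.46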
In my opinion there is no need to use floating point when you handle currencies or other monetary values. I have no idea about the performance comparison between the two types, but numeric has one decisive advantage: it is an exact number, which is usually not the case with floating point. I also suggest reading an interesting but short discussion on the topic.
I'm working on a SQL Server database and need to make sure the numeric types are large enough for the data.
There are two sources: Java double and Oracle float(126).
The SQL Server types are either numeric(30,10) or numeric(15,8).
I read that a Java double can store magnitudes from about 4.9E-324 (the smallest positive value) up to about 1.7976931348623157E+308, and that Oracle float(126) has approximately 38 decimal digits of precision, while SQL Server numeric(30,10) has only 30 digits of precision, 10 of them after the decimal point.
Am I correct in saying that numeric(30,10) is not safe for double and float(126), and could lead to loss of precision or overflow? Should these columns be changed to DOUBLE PRECISION?
SQL's NUMERIC is a decimal type. DOUBLE PRECISION is a binary float type, typically mapped to IEEE 754 double precision (64 bits), which is exactly the format used by Java's double, so that should be a perfect match. FLOAT is also a binary float type in the SQL standard, so it should be present on SQL Server too, but its maximum precision is implementation-dependent, and if it is smaller on SQL Server there isn't really anything you can do.
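To see the failure modes the asker is worried about, a quick SQL Server sketch (the values are chosen purely for illustration):
select cast(cast(1e21 as float) as numeric(30,10));
-- fails with an arithmetic overflow: 1e21 needs 22 digits left of the point,
-- but numeric(30,10) only allows 30 - 10 = 20
select cast(cast(1e-11 as float) as numeric(30,10));
-- returns 0.0000000000: anything smaller than the scale is silently rounded away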
Precision loss is one thing, but precision gain???
I have a text file w/ the following coordinates:
41.88694340165634 -87.60841369628906
When I paste this into SQL Server Mgmt Studio table view, it results in this:
41.886943401656339 -87.608413696289062
Am I dreaming? How is this possible?
I'm pasting from notepad, and it's raw text. Same problem if I type the characters directly.
Where does SQL Server get the extra precision from?
It's not adding precision; it's rounding your value to the nearest IEEE floating-point representation. When that binary value is converted back to decimal for display, it only LOOKS like it gained precision.
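You can see this directly by asking SQL Server to print all 17 significant digits of the stored double (CONVERT style 3 for float; note this style only exists on reasonably recent versions of SQL Server):
select convert(varchar(30), cast(41.88694340165634 as float), 3);
-- 4.1886943401656339e+001 : the 'extra' digits are just the decimal
-- expansion of the nearest 64-bit binary value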
According to Books Online:
Float: Approximate-number data types for use with floating point numeric data. Floating point data is APPROXIMATE; therefore, not all values in the data type range can be represented exactly.
Emphasis mine.
I haven't personally seen this, but it might just be that SQL Server Management Studio shows the 'representation' of the float, i.e. how the value is actually stored in the db. Remember that floats are approximate anyway.