I am working with MongoDB v3.6.3.
I have seen a similar question that received a good answer. So why am I asking this question?
Because I am working with a different version of MongoDB.
Because I have just stored a decimal number in my DB without registering any serializers, as instructed in the answer to the similar question, and no error was thrown.
My MongoDB schema looks like this:
rating: {
  type: Number,
  required: true
}
So my question is: is there anything wrong with the way I have implemented this, considering that I have already stored a decimal number in my DB? Is it okay to store decimal numbers with the current schema, or is this a setup for errors in the future because I am missing a step?
Thank you.
The Number type is a floating point numeric representation that cannot accurately represent all decimal values. This may be fine if your use case does not require precision for floating point numbers, but it would not be suitable if accuracy matters (for example, for fractional currency values).
If you want to store and work with decimal values accurately, you should instead use the Decimal128 type in Mongoose. This maps to the Decimal128 (aka NumberDecimal) BSON data type available in MongoDB 3.4+, which can also be manipulated in server-side calculations using the Aggregation Framework.
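A minimal sketch of such a schema (the surrounding schema and model names are illustrative, not from the question):

```javascript
const mongoose = require('mongoose');

const reviewSchema = new mongoose.Schema({
  rating: {
    type: mongoose.Schema.Types.Decimal128,
    required: true
  }
});

const Review = mongoose.model('Review', reviewSchema);

// Decimal128 values come back as objects rather than plain JavaScript
// numbers, so convert explicitly when you need to do math in JS:
// const doc = await Review.findOne();
// const rating = parseFloat(doc.rating.toString());
```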
If your rating field doesn't require exact precision, you could continue to use the Number type. One reason to do so is that the Number type is native to JavaScript, while Decimal128 is not.
For more details, see A Node.js Perspective on MongoDB 3.4: Decimal Type.
I'm using Azure Logic apps and one of the steps is to parse an API JSON response. I'm uploading a payload to generate the schema.
One of my properties is a decimal type for Tax, specified in the JSON as a "Number" type.
The value in my source JSON comes through as this…
"TaxAmount": 999.00
However, when it's parsed it is set as "Integer".
When I change the value to...
"TaxAmount": 999.01
It will correctly come through as a "Number" type
Is there a way I can define the value of 999.00 and have it parsed as a "Number" rather than an "Integer"?
Any help would be appreciated
One workaround is to directly (i.e., manually) change the type of the property in the schema generated by the Parse JSON step, from "integer" to "number".
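For example, if the generated schema contains something like this for the property (reconstructed from the question; your generated schema may differ):

```json
"TaxAmount": {
    "type": "integer"
}
```

you can edit it to:

```json
"TaxAmount": {
    "type": "number"
}
```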
Unfortunately, no.
Some programming languages and parsers use different internal representations for floating point numbers than they do for integers. For consistency, integer JSON numbers SHOULD NOT be encoded with a fractional part.
https://json-schema.org/draft/2020-12/json-schema-core.html#integers
Note that this is SHOULD NOT, so it MAY be allowable.
But consider that implementations may behave differently.
"SHOULD NOT" means, "you really should not do this unless you have a really good reason, and you better document it if you do".
If you need this, consider encoding the numbers as strings and using a regular expression to do the validation.
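A minimal sketch of that idea, using the TaxAmount property from the question (the exact pattern depends on your validation needs):

```json
"TaxAmount": {
    "type": "string",
    "pattern": "^[0-9]+\\.[0-9]{2}$"
}
```

The value is then sent as "999.00", and the string form, including trailing zeros, is preserved exactly.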
The Problem
I recently started working on a small backend application to track my staking rewards and I stumbled upon the floating point math issues for the first time in my life.
After some research, two options seem to be preferred in general:
DECIMAL(27,18): Very useful for ETH tokens (which mostly have 18 decimals).
BIGINT: Useful but I think it will run out of space for tokens like Shiba Inu.
The Question(s)
If I store the tokens as DECIMAL, will I have to perform the operations in the database to avoid the floating point math issue?
Would it be a better approach to store everything as an integer and then divide by the token's decimal digits, stored separately? (Maybe something like NUMERIC(72,0)?)
Is there a third option or other problems I'm not taking into account?
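For clarity, here is a minimal sketch of the two storage layouts being compared (table and column names are made up for illustration):

```sql
-- Option 1: fixed-point decimal, wide enough for 18-decimal ERC-20 tokens
CREATE TABLE reward_decimal (
    token  VARCHAR(16)    NOT NULL,
    amount DECIMAL(27,18) NOT NULL
);

-- Option 2: integer base units, scaled by the token's decimals on read
CREATE TABLE reward_integer (
    token      VARCHAR(16)   NOT NULL,
    raw_amount NUMERIC(72,0) NOT NULL, -- amount in the token's smallest unit
    decimals   SMALLINT      NOT NULL  -- e.g. 18 for most ERC-20 tokens
);
```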
I have a SQL table which has two columns, Latitude and Longitude, with values 29.47731 and -98.46272 respectively. The column datatype is float in SQL Server.
When we retrieve the records using EF Core, like _context.Table.ToListAsync(), it retrieves the records, but when a value is converted to the equivalent C# double? datatype, it gains extra digits, like -98.4627199999999.
How can I avoid this? It should return the same value as is stored in the database.
SQL Float and C# Double are imprecise data types, they store very close approximations of the number, but often not the exact number. If you need to maintain the exact values, you should use SQL Numeric data type which will allow you to specify the number of significant digits; and while the C# Decimal data type is still imprecise, it will provide a higher precision approximation than Double does.
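As a sketch of that suggestion, assuming a table named dbo.Locations (the real table name and the precision you need are up to you), the column conversion might look like this:

```sql
-- numeric(9,6) stores exactly six digits after the decimal point,
-- which comfortably covers coordinates like -98.46272
ALTER TABLE dbo.Locations ALTER COLUMN Latitude  NUMERIC(9,6);
ALTER TABLE dbo.Locations ALTER COLUMN Longitude NUMERIC(9,6);
```

On the C# side these columns would then map to decimal? rather than double?.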
If you find that your data isn't stable and you are dealing with decimal values, it is probably a precision issue.
Try taking a look at these:
SQL:
https://learn.microsoft.com/en-us/sql/t-sql/data-types/decimal-and-numeric-transact-sql?view=sql-server-ver15
C#:
https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/floating-point-numeric-types
Those should guide you through choosing the most precise notation for your situation :-)
I am attempting to use Entity Framework and have a contact database that has Longitude and Latitude data from Google Maps.
The guide says that this should be stored as float.
I have created my POCO entity which includes Longitude as a float and Latitude as a float.
I have just noticed that in the database, these both come up as real.
There is still a lot of work to be done before I can get any data back from Google, and I am very far away from testing, so I was just wondering if anyone can tell me whether this is going to be a problem later on?
Nope, that should be fine. Note that you may not get the exact same value back as you received from Google Maps, as that would have been expressed in decimal-formatted text. However, this is effectively a physical quantity, and thus more appropriate as a float/double/real/(whatever binary floating point type you like) than as a decimal. (Decimals are more appropriate for man-made quantities - particularly currency.)
If you look at the documentation for float and real you'll see that real is effectively float(24) - a 32-bit floating binary point type, just like float in C#.
EDIT: As noted in comments, if you want more than the significant 7-8 digits of accuracy provided by float, then you probably want double instead in C#, which would mean float(53) in SQL Server.
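For illustration, a sketch of such a POCO using double (the class and property names are assumptions, not taken from the question):

```csharp
public class Contact
{
    public int Id { get; set; }

    // double in C# corresponds to float(53) in SQL Server, giving
    // 15-16 significant digits instead of float/real's 7-8
    public double Latitude { get; set; }
    public double Longitude { get; set; }
}
```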
This link:
http://msdn.microsoft.com/en-us/library/aa258876(v=sql.80).aspx
explains that, in SQL Server, real is a synonym for float(24), using 4 bytes of data. In .NET, a single-precision floating point number (Single) also uses 4 bytes, so these are pretty much equivalent:
http://msdn.microsoft.com/en-us/library/47zceaw7(v=vs.71).aspx
Right now I'm working on a project using CFM and MSSQL.
I have numbering data in a field. I've set the datatype as float.
Example of the data:
3.1, 3.2, 3.3, ..., 3.10, 3.11
My problem:
It appears that 3.10 doesn't exist; instead it comes through as 3.1, which means I have two 3.1 values.
When I sort the data, it is displayed as:
3.1, 3.1, 3.11, 3.2, 3.3, etc.
I don't know what went wrong.
Please help.
If you need 3.1 and 3.10 to represent different values for whatever unholy abomination of math you are trying to accomplish, you will need to use a textual datatype like varchar.
As long as you are using a numeric type, SQL and every other programming platform will likely enforce the universal laws of mathematics where 3.1 and 3.10 are the same value.
If you want to store the precision as well, either make a new column to store that information, or store it as a string, which you can parse into a float before you need it as a number. Strings sort nicely, too.
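A minimal sketch of the string approach (the table and column names are made up for illustration):

```sql
-- store the display form exactly; '3.1' and '3.10' stay distinct
CREATE TABLE sections (
    section_no VARCHAR(10) NOT NULL
);

-- cast to float only at the point where numeric math is needed
SELECT section_no,
       CAST(section_no AS FLOAT) AS numeric_value
FROM sections
ORDER BY section_no;
```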