Storing zero after decimal point - sql-server

Right now I'm working on a project using ColdFusion (CFM) and MS SQL Server.
I have numeric data in a field whose datatype I've set to float.
Example of the data:
3.1, 3.2, 3.3, ..., 3.10, 3.11
My problem: 3.10 doesn't exist in the results; instead it comes back as 3.1, which means I have two 3.1 values.
When I sort the data, it is displayed as:
3.1, 3.1, 3.11, 3.2, 3.3, ...
I don't know what went wrong. Please help.

If you need 3.1 and 3.10 to represent different values for whatever unholy abomination of math you are trying to accomplish, you will need to use a textual datatype like varchar.
As long as you are using a numeric type, SQL and every other programming platform will likely enforce the universal laws of mathematics where 3.1 and 3.10 are the same value.

If you want to store the precision as well, either make a new column to store that information, or store it as a string, which you can parse into a float whenever you need it as a number. Note that plain strings sort lexicographically ("3.10" before "3.2"), so you may need a custom sort to get numeric order.
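The string approach plus a part-wise numeric sort can be sketched as follows (illustrated in Python for brevity; the same split-and-compare idea applies in CFM or SQL):

```python
# Section numbers like "3.1" and "3.10" are distinct labels, so store them
# as strings and sort them by their dot-separated parts as integers.
def section_key(s):
    """Split '3.10' into (3, 10) so sorting compares parts numerically."""
    return tuple(int(part) for part in s.split("."))

labels = ["3.11", "3.2", "3.10", "3.1", "3.3"]
labels.sort(key=section_key)
print(labels)  # ['3.1', '3.2', '3.3', '3.10', '3.11']
```

This keeps 3.1 and 3.10 distinct while still sorting them in the order the question expects.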

Related

Entity framework Core is adding digits when converting float (db data type) to double in c#

I have a SQL table with two columns, Latitude and Longitude, holding the values 29.47731 and -98.46272 respectively. The column datatype is float in SQL Server.
When we retrieve the records using EF Core, e.g. _context.Table.ToListAsync(), the query succeeds, but when a value is converted to the C# equivalent double? datatype it gains extra digits, like -98.4627199999999.
How can I avoid this? It should return the same value as is stored in the database.
SQL float and C# double are imprecise (binary floating-point) data types: they store very close approximations of a number, but often not the exact number. If you need to maintain exact values, use the SQL numeric/decimal data type, which lets you specify the number of significant digits; its C# counterpart, decimal, stores base-10 values exactly within its precision, so it will round-trip decimal literals like -98.46272 faithfully where double may not.
If you find that your data isn't stable and you are dealing with decimal values, it is probably a precision issue.
Try taking a look at these:
SQL:
https://learn.microsoft.com/en-us/sql/t-sql/data-types/decimal-and-numeric-transact-sql?view=sql-server-ver15
C#
https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/floating-point-numeric-types
Those should guide you through choosing the most precise representation for your situation :-)
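The extra digits aren't added by EF Core; they are the exact value the binary double has been storing all along. A quick way to see this (illustrated in Python, whose float is the same IEEE 754 double used by SQL float and C# double):

```python
from decimal import Decimal

# -98.46272 cannot be represented exactly in binary floating point;
# Decimal(value) exposes the exact value the double actually stores.
stored = Decimal(-98.46272)
print(stored)  # the exact stored value, with the "extra" digits visible

# The decimal literal and the binary double that approximates it differ:
assert Decimal(-98.46272) != Decimal("-98.46272")
```

Any fraction whose denominator is not a power of two (such as .46272) has no exact binary representation, which is why switching to a decimal type is the fix.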

How to deal with decimals in MongoDB v3.6

I am working with MongoDB v3.6.3.
I have seen a similar question that received a good answer. So why am I asking this question?
Because I am working with a different version of MongoDB, and
because I have just stored a decimal number in my DB without registering any serializers as instructed in the answer to that similar question, and no error was thrown.
My MongoDB schema looks like this:
rating: {
  type: Number,
  required: true
}
So my question is: is there anything wrong with the way I have implemented this, considering that I have already stored a decimal number in my DB? Is it okay to store decimal numbers with the current schema, or is this a setup for errors in the future because I am missing a step?
Thank you.
The Number type is a floating point numeric representation that cannot accurately represent decimal values. This may be fine if your use case does not require precision for floating point numbers, but would not be suitable if accuracy matters (for example, for fractional currency values).
If you want to store and work with decimal values with accuracy you should instead use the Decimal128 type in Mongoose. This maps to the Decimal 128 (aka NumberDecimal) BSON data type available in MongoDB 3.4+, which can also be manipulated in server-side calculations using the Aggregation Framework.
If your rating field doesn't require exact precision, you could continue to use the Number type. One reason to do so is that the Number type is native to JavaScript, while Decimal128 is not.
For more details, see A Node.js Perspective on MongoDB 3.4: Decimal Type.
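The accuracy issue described above is inherent to binary floating point, not to MongoDB: the same behaviour shows up in any language whose numbers are IEEE 754 doubles, which is exactly what the Number type is. A quick demonstration (sketched in Python, where float is the same double type):

```python
from decimal import Decimal

# A binary double cannot represent most decimal fractions exactly,
# so decimal arithmetic accumulates tiny errors:
total = 0.1 + 0.2
print(total)           # 0.30000000000000004
assert total != 0.3    # fine for ratings, dangerous for currency

# A decimal type (Decimal128 in MongoDB, decimal.Decimal here) is exact:
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
```

This is why no error is thrown when you store a decimal in a Number field: the value is silently approximated rather than rejected.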

Quantity reference type, etc

I have been working on ADempiere these past few days and I am confused about something.
I created a new column on my database table named Other_Number with the reference type Quantity. Max length is 20.
On my Java source, I used BigDecimal.
Now every time I try to input exactly 20 digits in the Other_Number field, the last 4 digits get rounded. Say I input 12345678901234567891; when I try to save it, it becomes 12345678901234567000.
Other than that, every record that gets saved in the database (PostgreSQL) gets ".000000000000" (twelve zeros) appended.
I need the last digits not to be rounded when I input 20 digits, and I also need to get rid of that ".000000000000".
Can you please tell me why this is happening?
ADempiere, as financial ERP software, is careful about how it deals with financial amounts: the exact BigDecimal value has to maintain its integrity in the database, and precision and rounding are handled as carefully as possible in the code. Being part of the established Compiere ERP lineage (iDempiere and Openbravo are also forks of it), such financial-amount management is already well defined and solved.
Perhaps you need to set precision in its appropriate window http://wiki.idempiere.org/en/Currency_%28Window_ID-115%29
If it's not actually a number you want but rather some kind of reference field that contains only numeric digits, change the definition in the Application Dictionary to be:
Reference: String
Length: 20
Value Format: 00000000000000000000 (i.e. 20 Zeros!)
This will force the input to be numeric only (i.e. alpha characters will be ignored!), and because it is a String there will be no rounding.
ADempiere supports up to 14(+5) digits for business amounts/quantities (trillions, in USD currency).
Which currency are you using? Is it realistic to use an amount/quantity that large in an ERP system?
If you want to change the logic, you can do so in the getNumberFormat method of the DisplayType.java class.
What is the business scenario?
In the ADempiere Java code, BigDecimal's setScale method is used to round the value.
Example:
BigDecimal len = value;
len = len.setScale(2, RoundingMode.HALF_UP); // the legacy int mode 4 == ROUND_HALF_UP
setLength(len);
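The rounding of the last digits happens because a 64-bit binary double carries only about 15-17 significant decimal digits, so a 20-digit value cannot survive a trip through one (e.g. through a double-based display formatter), while an arbitrary-precision decimal type keeps every digit. A quick illustration (in Python; the same limits apply to Java's double versus BigDecimal):

```python
from decimal import Decimal

n = 12345678901234567891  # 20 significant digits

# A 64-bit double keeps only ~15-17 significant digits,
# so the low digits are lost on the round trip:
assert int(float(n)) != n

# An arbitrary-precision decimal type keeps all 20 digits:
assert int(Decimal("12345678901234567891")) == n
```

So the fix is to keep the value out of any double-typed path (or, as suggested above, store it as a String if it is really a reference number rather than a quantity).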

Displaying special characters with Chinese Locale in c

I have a requirement to adapt an existing, non-Unicode C project to display Chinese characters.  As there is a short deadline, and I'm new(ish) to C and encodings, I've gone down the route of changing the system locale to Simplified Chinese (PRC) in order to support the display of Chinese text in the GUI application.  This has in turn changed the encoding (in Visual Studio 2010) in the project to Chinese Simplified (GB2312).
Everything works except that special characters, e.g. the degree sign, superscript 2, etc., are displayed as question marks.  I believe this is because we used to pass \260, i.e. the octal value of the degree symbol in the extended ASCII table, and this no longer maps to anything in the GB2312 table.
The workflow for displaying a degree symbol in a keypad was as follows: 
display_function( data, '\260' ); //Pass the octal value of the degree symbol to the keypad 
This display_function is used to translate the integer inputs into strings for display on the keypad: 
data->[ pos ] = (char) ch; 
Essentially I need to get this (and other special chars) to display correctly.  Is there a way to pass this symbol using the current setup? 
According to the character list for GB2312 the symbol is supported, so my current thinking is to work out the octal value of the symbol and keep the existing functions as they are.  These currently pass the values around as chars.  Using the table below:
http://ash.jp/code/cn/gb2312tbl.htm. 
and the following formula to obtain the octal value: 
octal number associated with the row, multiplied by 10 and added to the octal number associated with the column. 
I believe this would be A1E0 x 10 + 3 = 414403. 
However, when I try and pass this to display_function I get "error C2022: '268' : too big for character".
Am I going about this wrong?  I'd prefer not to change the existing functions as they are in widespread use, but do I need to change the function to use a wide char? 
Apologies if the above is convoluted and filled with incorrect assumptions!  I've been trying to get my head round this for a week or two and encodings, char sets and locales just seem to get more and more confusing!
thanks in advance
If the current functions support only 8-bit characters, and you need them to display 16-bit characters, then your guess is probably correct: you may have to change the functions to use something like wchar_t instead of char.
Maybe also duplicate them under another name to preserve compatibility, in case these functions are used in other projects.
But if it's only one project, then you may want to consider replacing char with wchar_t in almost all places in the project.
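The underlying problem can be seen by encoding the degree sign: in GB2312 it is a double-byte sequence, so it can never be carried in a single 8-bit char the way \260 was. A quick check (shown in Python, using its built-in gb2312 codec):

```python
# In GB2312 the degree sign (U+00B0) encodes to a two-byte sequence,
# so a single char value like '\260' cannot represent it.
encoded = "\u00b0".encode("gb2312")
print(encoded)
assert len(encoded) == 2
assert all(b >= 0x80 for b in encoded)  # both bytes have the high bit set
```

This is consistent with the answer above: a two-byte character cannot fit through an interface that takes one char, hence the "too big for character" compiler error.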

C# Float to Real in SQL Server

I am attempting to use Entity Framework and have a contact database that has Longitude and Latitude data from Google Maps.
The guide says that this should be stored as float.
I have created my POCO entity which includes Longitude as a float and Latitude as a float.
I have just noticed that in the database, these both come up as real.
There is still a lot of work to be done before I can get any data back from Google, so I am very far away from testing; I was just wondering if anyone can tell me whether this is going to be a problem later on.
Nope, that should be fine. Note that you may not get the exact same value back as you received from Google Maps, as that would have been expressed in decimal-formatted text. However, this is effectively a physical quantity, and thus more appropriate as a float/double/real/(whatever binary floating point type you like) than as a decimal. (Decimals are more appropriate for man-made quantities - particularly currency.)
If you look at the documentation for float and real you'll see that real is effectively float(24) - a 32-bit floating binary point type, just like float in C#.
EDIT: As noted in comments, if you want more than the significant 7-8 digits of accuracy provided by float, then you probably want double instead in C#, which would mean float(53) in SQL Server.
This link:
http://msdn.microsoft.com/en-us/library/aa258876(v=sql.80).aspx
explains that, in SQL Server, real is a synonym for float(24), using 4 bytes of data. In .NET a Single precision floating point number also uses 4 bytes, so these are pretty much equivalent:
http://msdn.microsoft.com/en-us/library/47zceaw7(v=vs.71).aspx
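The practical difference between real/float(24) and float(53) is how many significant digits survive storage. A sketch of the edit's point (in Python, using struct to force a value through 32-bit IEEE 754 storage, the same representation as SQL real and C# float):

```python
import struct

lat = 29.47731  # 7 significant digits, like the Google Maps values above

def as_float32(x):
    """Round-trip a value through 32-bit IEEE 754 storage (like SQL real)."""
    return struct.unpack("f", struct.pack("f", x))[0]

# 64-bit storage (SQL float(53), C# double) round-trips this value:
assert struct.unpack("d", struct.pack("d", lat))[0] == lat

# 32-bit storage (SQL real / float(24), C# float) does not:
assert as_float32(lat) != lat
print(as_float32(lat))  # close to, but not exactly, 29.47731
```

So if the 7-8 digits of a C# float are not enough for your coordinates, map double in the POCO to float(53) in SQL Server, as the edit above suggests.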
