SQL INSERT INTO: comma as decimal - sql-server

My problem is that my customer runs his SQL Server on a Windows box where the country settings are set to "Germany".
This means the decimal separator is NOT a point (.), it's a comma (,)!
Inserting a double value into the database works like this:
INSERT INTO myTable (myPrice) VALUES (16,5)
Works fine, so far.
The problem comes up if there is more than one value with decimal places in the statement, like
INSERT INTO myTable (myPrice, myAmount) VALUES (16,5,10)
I get the error
Number of query values and destination fields are not the same.
Can I somehow "delimit" the values? I tried adding brackets around them, but that does not work.
Unfortunately I cannot change the language settings of the OS or the database because I am just writing some add-ons to an existing application.
Thank you!
ev

You must put the data in the format allowed by the database. Even if you manage to insert data using a comma, you may lose the ability to do numerical calculations on it.
If I faced this situation, I would check whether the comma is only required for display. If so, I would display values in comma format while storing them in decimal format.
That way the data can be processed as numeric and converted back and forth only for the UI or display.
Based on this, you may describe your situation in more detail if required.
EDIT: To verify my theory, can you check whether this insert statement inserted 165 or 16.5 into the database?
INSERT INTO myTable (myPrice) VALUES (16,5);
SELECT myPrice FROM myTable WHERE myPrice < 17;
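To illustrate the "store as decimal, format only for display" approach, here is a minimal T-SQL sketch. It assumes SQL Server 2012 or later (for FORMAT) and a hypothetical two-column myTable; note that T-SQL numeric literals always use a point, regardless of the server's regional settings.

-- hypothetical table: prices stored as DECIMAL, never as localized text
CREATE TABLE myTable (myPrice DECIMAL(10,2), myAmount INT);

-- T-SQL literals always use a point as the decimal separator
INSERT INTO myTable (myPrice, myAmount) VALUES (16.5, 10);

-- apply the comma only when displaying, e.g. for a German UI
SELECT myPrice,
       FORMAT(myPrice, 'N2', 'de-DE') AS displayPrice  -- '16,50'
FROM myTable;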

Related

Numeric value 'abc_0011O00001y31VpQAI' is not recognized in Snowflake

(Opening the following on behalf of a Snowflake client...)
When I try to insert into the table, it throws the error below:
Numeric value 'abc_0011O00001y31VpQAI' is not recognized
I checked the table DDL and found only 3 columns defined as NUMBER and the rest as VARCHAR.
I checked the SELECT query and did not find any string values in those NUMBER datatype columns. I also tried searching all the VARCHAR columns for the value 'abc_0011O00001y31VpQAI', and I didn't find any.
I know Snowflake doesn't always show the correct error. Am I missing anything here? Is there any way to fix it?
Both COL4_MRR and COL5_QUANTITY are NUMBER
INSERT INTO TABLE
(COL1_DATE, COL2_CD, COL3_CUST_NAME, COL3_LOC_NAME, COL4_MRR, COL5_QUANTITY)
SELECT
 '2019-10-03' AS COL1_DATE
,'AE' AS COL2_CD
,CUSTOMER_NAME AS COL3_CUST_NAME
,LOCATION_NAME AS COL3_LOC_NAME
,MRR_BILLED AS COL4_MRR
,QTY_BILLED AS COL5_QUANTITY
FROM SCHEMA.V_TABLEA
UNION ALL
SELECT
 '2019-10-03' AS COL1_DATE
,'BE' AS COL2_CD
,CUSTOMER_NAME AS COL3_CUST_NAME
,LOCATION_NAME AS COL3_LOC_NAME
,NULL AS COL4_MRR
,QTY_BILLED AS COL5_QUANTITY
FROM SCHEMA.V_TABLEB
I created a table_D with the same DDL as the original TABLE and tried inserting into it; it worked fine. Then I inserted into the original TABLE from table_D, and it worked again.
I deleted those rows from the original TABLE and reran the job; it worked fine.
There was no issue with the data, as it was all numeric. I even tried TRY_TO_NUMBER; it inserted the data without any changes to the code.
The client is currently waiting on a next-day run to re-test and determine whether this is a bug or an issue with their data. In the meantime, we are interested to see if anyone else has run into similar challenges and has a viable recommendation. THANK YOU.
The error typically means you are trying to insert non-numeric data (like 'abc_0011O00001y31VpQAI') into a numeric column. It seems like the customer did everything right in testing, and TRY_TO_NUMBER() is a great way to verify numeric data.
Do the SELECT queries run fine separately? If so, I would check whether there might be a mismatch in the datatypes of the columns and make sure they are in the right order.
I would also check whether the header is being skipped in the file (that may be where 'abc_0011O00001y31VpQAI' is coming from, since the customer did not see it in the data).
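As a hedged sketch against the hypothetical source views from the question, TRY_TO_NUMBER can also be used to hunt for the offending value: it returns NULL instead of raising an error for non-numeric input, so filtering on it surfaces any row that would fail the insert.

-- returns any rows whose MRR_BILLED or QTY_BILLED would fail numeric conversion
SELECT CUSTOMER_NAME, LOCATION_NAME, MRR_BILLED, QTY_BILLED
FROM SCHEMA.V_TABLEA
WHERE (MRR_BILLED IS NOT NULL AND TRY_TO_NUMBER(MRR_BILLED::VARCHAR) IS NULL)
   OR (QTY_BILLED IS NOT NULL AND TRY_TO_NUMBER(QTY_BILLED::VARCHAR) IS NULL);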
The SELECT queries work fine. I tried creating a new table with the same DDL as the original and loading into that new table; it worked fine. I am not sure why it is not loading into the original table.

Value is changed when selected from the DB

I have a table in a Redshift database. The table has a column "PERCENTILE" defined as:
Type: float8
Size: 17
I found that the value I get is different depending on whether I select it as-is or as a varchar:
select PERCENTILE, cast(PERCENTILE AS varchar) from ...
The result is:
0.156870898195838 | 0.15687089819583799
I do not know how to explain this. Can casting change the value? Can anyone help?
When values appear different through typecasting, it is often an indication that your SQL client is applying some formatting or is somehow interpreting the results.
For example, some SQL clients display TIMESTAMP WITH TIMEZONE values in the user's local timezone (as set on their computer). In such a case, doing SELECT date, date::text actually returns different values within the client, even though similar information is sent from Redshift.
In your situation, it appears that some rounding is taking place:
0.156870898195838
0.15687089819583799
The full value sent from Amazon Redshift is probably the second one, but the client could be displaying the float8 rounded to fewer decimal places.
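To see what is actually stored, here is a sketch assuming a hypothetical myTable: a float8 carries roughly 15-17 significant decimal digits, many clients print a shortened 15-digit rendering, and casting to text or to a fixed-point type forces the full stored value out.

-- compare the client's rendering against explicit text and fixed-point casts
SELECT percentile,
       CAST(percentile AS varchar)        AS full_text,   -- e.g. 0.15687089819583799
       CAST(percentile AS decimal(20,18)) AS fixed_point
FROM myTable;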

Issue with datatype Money in SQL SERVER vs string

I have a spreadsheet whose values all get loaded into SQL Server. One of the fields in the spreadsheet happens to be money. In order for everything to be displayed correctly, I added a field in my tbl with Money as the DataType.
When I read the value from the spreadsheet, I pretty much store it as a String, such as "94259.4". When it gets inserted in SQL Server, it looks like "94259.4000". Is there a way for me to basically get rid of the 0's in the SQL Server value when I grab it from the DB? The issue I'm running across is that, even though these two values are the same, because they are compared as Strings, it thinks they're not the same values.
I'm foreseeing another issue when the value might look like this: 94,259.40. I think what might work is limiting the numbers to 2 after the period. So as long as I select the value from the server in the format 94,259.40, I think I should be okay.
EDIT:
For Column = 1 To 34
    Select Case Column
        Case 1 'Field 1
            If Not ([String].IsNullOrEmpty(CStr(excel.Cells(Row, Column).Value)) Or CStr(excel.Cells(Row, Column).Value) = "") Then
                strField1 = CStr(excel.Cells(Row, Column).Value)
            End If
        Case 2 'Field 2
            ' and so on
I go through each field and store the value as a string. Then I compare it against the DB and see if there is a record that has the same values. The only field in my way is the Money field.
You can use Format() to compare as strings, or even cast to Float. For example:
Declare @YourTable table (value money)
Insert Into @YourTable values
(94259.4000),
(94259.4500),
(94259.0000)
Select Original = value
,AsFloat = cast(value as float)
,Formatted = format(value,'0.####')
From @YourTable
Returns
Original AsFloat Formatted
94259.40 94259.4 94259.4
94259.45 94259.45 94259.45
94259.00 94259 94259
I should note that Format() has some great functionality, but it is NOT known for its performance.
The core issue is that string data is being used to represent numeric information, hence the problems comparing "123.400" to "123.4" and getting mismatches. They should mismatch. They're strings.
The solution is to store the data in the spreadsheet in its proper form - numeric - and then select a proper type for the database column, which is NOT the Money datatype (insert shudders and visions of vultures circling overhead). Otherwise, you are going to have an expanding kludge of conversions between types as you go back and forth between two improperly designed solutions, finding more and more edge cases that "don't quite work" and require more special cases... and so on.
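If the value has to arrive as a string from the application side anyway, here is a hedged T-SQL sketch (SQL Server 2012+, with a hypothetical @fromSheet variable standing in for the spreadsheet cell): parse the string to money once, then compare numbers with numbers, so trailing zeros and thousands separators stop mattering.

-- parse the spreadsheet string once, then compare numerically
DECLARE @fromSheet varchar(20) = '94,259.40';

SELECT CASE
           WHEN TRY_PARSE(@fromSheet AS money USING 'en-US') = 94259.4000
           THEN 'match' ELSE 'no match'
       END AS result;  -- 'match': 94259.40 and 94259.4000 are equal as money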

INSERT Query SQL (Error converting data type nvarchar to (null))

I'm trying to run an INSERT query, but it fails converting nvarchar to (null). Here's the code:
INSERT Runtime.dbo.History (DateTime, TagName, vValue)
VALUES ('2015-09-10 09:00:00', 'ErrorComment', 'Error1')
Error message:
Error converting data type nvarchar to (null).
The problem is at the vValue column.
column vValue(nvarchar, null)
How it looks in the database:
The values inside vValue are placed by the program I'm using. I'm just trying to manually insert into the database.
Last post was with the wrong column, I apologize.
After contacting Wonderware support, I found out that INSERT is not supported on the vValue column by design. It's a string value, and updates are supposed to be carried out via the StringHistory table.
What is the type of the vValue column in the database?
If it's float, you should insert a number, not a string.
Casting "Error1" to FLOAT is nonsense.
A float is a number, for example: 1.15, 12.00, 150.15.
When you try to CAST "Error1" to float, it tries to transform the text "Error1" into a number, and it can't; that is to be expected.
You should insert a number into the column.
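To see this in a hedged sketch (SQL Server 2012+): CAST raises the conversion error, while TRY_CAST returns NULL for the same input, which is a handy way to probe which values a numeric column would reject.

SELECT CAST('Error1' AS float);      -- fails: error converting data type varchar to float
SELECT TRY_CAST('Error1' AS float);  -- returns NULL instead of raising an error
SELECT TRY_CAST('16.5' AS float);    -- returns 16.5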
I think I can help you with your problem since I've got a decent test environment to experiment with.
Runtime.dbo.History is not a table you can interact with directly; it is a View. In our case the view is defined as:
select * from [INSQL].[Runtime].dbo.History
...which I believe implies that the History data you are viewing comes from the Historian flat-file storage itself, a Wonderware proprietary system. You might see some success if you expand the SQL Server Management Studio's
Server Objects -> Linked Servers -> INSQL
...and play with the data there, but I really wouldn't recommend it.
With that said, for what reason do you need to insert tag history? There might be other workarounds for the purpose you need.
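As a hedged way to confirm the view situation in your own environment, standard SQL Server metadata queries will do (this assumes you can connect to the Runtime database):

USE Runtime;
-- 1 means dbo.History is a view, 0 means it is a table
SELECT OBJECTPROPERTY(OBJECT_ID('dbo.History'), 'IsView') AS IsView;
-- print the view definition, e.g. the SELECT over the INSQL linked server
EXEC sp_helptext 'dbo.History';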

A 99.99 numeric from a flat file doesn't want to go into a NUMERIC(4,2) - sql-server

I have a CSV file:
1|1.25
2|23.56
3|58.99
I want to put these values into a SQL Server table with SSIS.
I have created my table:
CREATE TABLE myTable( ID int, Value numeric(4,2));
My problem is that I have to create a Derived Column Transformation to specify my cast:
(DT_NUMERIC,4,2)(REPLACE(Value,".",","))
Otherwise, SSIS doesn't seem to be able to put my Value in my column, and fills my column with null values.
And I think it is tooooo ugly to do it this way. I want my Derived Column Transformation to be there for real new derived columns, not for some simple cast that I think SSIS ought to detect on its own.
So, what is the standard way to use SSIS to resolve this problem?
BULK INSERT myTable
FROM 'c:\csvtest1.txt'
WITH
(
    FIELDTERMINATOR = '|',
    ROWTERMINATOR = '\n'
)
csvtest1.txt
1|1.25
2|23.56
3|58.99
You're loading this up in international format (56,99 in lieu of 56.99). You need to load it as 56.99 for SQL Server to recognize it as such. Take out the REPLACE(Value, ".", ",") and just have the code be:
(DT_NUMERIC,4,2)(Value)
Handle the formatting on the application side, not on the data side. The comma is a reserved separator in SQL Server, and you can't change that.
Haven't used SSIS a whole lot, but can't you set the regional settings on the File Source, or at least set the decimal separator?
Can you change your SSIS source column to be in the correct datatype?
If you have control over the production of your file, I'd suggest formatting the values without ANY decimal or thousands separators; in this case I'd have a file with the values:
1|125
2|2356
3|5899
and then apply a division by 100 when importing the data (see the sketch after this list). While this has the advantage of being culture-independent, it has some drawbacks:
1) First of all, it may not be possible to impose this format on the file.
2) It presumes that all numeric values are formatted accordingly, in this case with every value multiplied by 100; this can be an issue if you have to mix values from countries with different decimal positions (many have two decimals, but some have zero).
3) It may severely impact other routines, possibly out of your control.
Therefore, this can really be an option only if you have total control over the csv file.
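A minimal sketch of that divide-by-100 import, assuming a hypothetical staging table loaded straight from the separator-free file:

-- raw file values land as plain integers in a staging table
CREATE TABLE myStaging (ID int, RawValue int);

-- scale back to two decimals while moving into the real table
INSERT INTO myTable (ID, Value)
SELECT ID, CAST(RawValue AS numeric(6,2)) / 100
FROM myStaging;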
