Value changes when selecting from the database

I have a table in a Redshift database. The table has a column "PERCENTILE" defined as
Type: float8
Size: 17
I found that I get a different value depending on whether I select it as-is or as a varchar:
select PERCENTILE, cast(PERCENTILE AS varchar) from ...
The result is:
0.156870898195838 | 0.15687089819583799
I do not know how to explain this. Can casting change the value? Can anyone help?

When values appear different through typecasting, it is often an indication that your SQL client is applying some formatting or is somehow interpreting the results.
For example, some SQL clients display TIMESTAMP WITH TIMEZONE values in the user's local timezone (as set on their computer). In such a case, doing SELECT date, date::text actually returns different values within the client, even though similar information is sent from Redshift.
In your situation, it appears that some rounding is taking place:
0.156870898195838
0.15687089819583799
The full value sent from Amazon Redshift is probably the second one; the client is likely displaying the float8 rounded to fewer decimal places.
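The effect can be reproduced outside Redshift. Here is a minimal Python sketch, using the value from the question; the exact digit counts (15 for the client display, 17 for the varchar cast) are assumptions about what the two sides are doing, but they match the observed output:

```python
# A float8 (IEEE 754 double) needs up to 17 significant digits to
# round-trip exactly; many clients display only ~15.
x = float("0.15687089819583799")  # value from the question

client_view = f"{x:.15g}"  # what the SQL client probably shows
full_view = f"{x:.17g}"    # full round-trip form, like the varchar cast

print(client_view)  # 0.156870898195838
print(full_view)
```

Both strings name the same stored double; only the rendering differs, which is why the cast "changes" the value without changing anything in the table.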

Related

ORA-22835: Buffer too small and ORA-25137: Data value out of range

We are using a software that has limited Oracle capabilities. I need to filter through a CLOB field by making sure it has a specific value. Normally, outside of this software I would do something like:
DBMS_LOB.SUBSTR(t.new_value) = 'Y'
However, this isn't supported, so I'm attempting to use CAST instead. I've tried several variations; this is what I found so far:
The software has a built-in query checker/validator and these are the ones it shows as invalid:
DBMS_LOB.SUBSTR(t.new_value)
CAST(t.new_value AS VARCHAR2(10))
CAST(t.new_value AS NVARCHAR2(10))
However, the validator does accept these:
CAST(t.new_value AS VARCHAR(10))
CAST(t.new_value AS NVARCHAR(10))
CAST(t.new_value AS CHAR(10))
Unfortunately, even though the validator lets these ones go through, when running the query to fetch data, I get ORA-22835: Buffer too small when using VARCHAR or NVARCHAR. And I get ORA-25137: Data value out of range when using CHAR.
Are there other ways I could try to check that my CLOB field has a specific value when filtering the data? If not, how do I fix my current issues?
The error you're getting indicates that Oracle is trying to apply the CAST(t.new_value AS VARCHAR(10)) to a row where new_value has more than 10 characters. That makes sense given your description that new_value is a generic audit field that has values from a large number of different tables with a variety of data lengths. Given that, you'd need to structure the query in a way that forces the optimizer to reduce the set of rows you're applying the cast to down to just those where new_value has just a single character before applying the cast.
Not knowing what sort of scope the software you're using provides for structuring your code, I'm not sure what options you have there. Be aware that depending on how robust you need this, the optimizer has quite a bit of flexibility to choose to apply predicates and functions on the projection in an arbitrary order. So even if you find an approach that works once, it may stop working in the future when statistics change or the database is upgraded and Oracle decides to choose a different plan.
Using this as sample data
create table tab1(col clob);
insert into tab1(col) values (rpad('x',3000,'y'));
You need to use dbms_lob.substr(col,1) to get the first character (from the default offset= 1)
select dbms_lob.substr(col,1) from tab1;
DBMS_LOB.SUBSTR(COL,1)
----------------------
x
Note that the default amount (= length) of the substring is 32767, so using only DBMS_LOB.SUBSTR(COL) will return more than you expect.
CAST for a CLOB does not truncate the string to the target length, but (as you observed) raises the exception ORA-25137: Data value out of range if the original string is longer than the cast length.
As documented for the CAST statement
CAST does not directly support any of the LOB data types. When you use CAST to convert a CLOB value into a character data type or a BLOB value into the RAW data type, the database implicitly converts the LOB value to character or raw data and then explicitly casts the resulting value into the target data type. If the resulting value is larger than the target type, then the database returns an error.

Issue with datatype Money in SQL SERVER vs string

I have a spreadsheet whose values all get loaded into SQL Server. One of the fields in the spreadsheet happens to be money. In order for everything to be displayed correctly, I added a field in my table with Money as the data type.
When I read the value from the spreadsheet, I store it as a string, such as "94259.4". When it gets inserted into SQL Server it looks like "94259.4000". Is there a way to get rid of the trailing zeros in the value when I grab it from the DB? The issue I'm running across is that even though these two values are the same, because they are compared as strings, it thinks they are not the same.
I'm foreseeing another issue when the value might look like this: 94,259.40. I think what might work is limiting the number of digits to 2 after the decimal point. As long as I select the value from the server in the format 94,259.40, I think I should be okay.
EDIT:
For Column = 1 To 34
    Select Case Column
        Case 1 'Field 1
            If Not ([String].IsNullOrEmpty(CStr(excel.Cells(Row, Column).Value)) Or CStr(excel.Cells(Row, Column).Value) = "") Then
                strField1 = CStr(excel.Cells(Row, Column).Value)
            End If
        Case 2 'Field 2
            ' and so on
I go through each field and store the value as a string. Then I compare it against the DB and see if there is a record that has the same values. The only field in my way is the Money field.
You can use Format() to compare the strings, or even cast to float. For example:
Declare @YourTable table (value money)
Insert Into @YourTable values
(94259.4000),
(94259.4500),
(94259.0000)
Select Original = value
,AsFloat = cast(value as float)
,Formatted = format(value,'0.####')
From @YourTable
Returns
Original AsFloat Formatted
94259.40 94259.4 94259.4
94259.45 94259.45 94259.45
94259.00 94259 94259
I should note that Format() has some great functionality, but it is NOT known for its performance.
The core issue is that string data is being used to represent numeric information, hence the problems comparing "123.400" to "123.4" and getting mismatches. They should mismatch. They're strings.
The solution is to store the data in the spreadsheet in its proper form - numeric, and then select a proper format for the database - which is NOT the "Money" datatype (insert shudders and visions of vultures circling overhead). Otherwise, you are going to have an expanding kluge of conversions between types as you go back and forth between two improperly designed solutions, and finding more and more edge cases that "don't quite work," and require more special cases...and so on.
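The string-versus-numeric point can be sketched outside SQL Server. Python's decimal module stands in here for a proper fixed-point numeric type; the values are the ones from the question:

```python
from decimal import Decimal

# Compared as strings, trailing zeros make "equal" amounts mismatch.
assert "94259.4000" != "94259.4"

# Compared as numeric values, trailing zeros are irrelevant.
assert Decimal("94259.4000") == Decimal("94259.4")

# A display string with digit grouping must be normalised before parsing.
raw = "94,259.40"
assert Decimal(raw.replace(",", "")) == Decimal("94259.4")

print("all comparisons passed")
```

The same principle applies on the database side: compare money as money (or decimal), and only format it as text at the final display step.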

SQL INSERT INTO: comma as decimal

My problem is that my customer runs his SQL Server on a Windows box and the country settings are set to "Germany".
This means, a decimal point is NOT a point ., it's a comma ,!
Inserting a double value to the database works like this
INSERT INTO myTable (myPrice) VALUES (16,5)
Works fine, so far.
The problem comes up if there is more than one value with decimal places in the statement like
INSERT INTO myTable (myPrice, myAmount) VALUES (16,5,10)
I get the error
Number of query values and destination fields are not the same.
Can I somehow "delimit" the values? Tried to add brackets around but this does not work.
Unfortunately I cannot change the language settings of the OS or the database because I am just writing some add-ons to an existing application.
Thank you!
ev
You must put the data in the format the database expects. Even if you manage to insert data with a comma, you may lose the ability to do numerical calculations on it.
If I faced this situation, I would check whether the comma is only required for display. If so, I would show values in comma format in the UI while storing them with a decimal point.
That way the data can be processed as numeric, and converted back and forth only for display.
Based on this, you may describe your situation in more detail if required.
EDIT: To verify my theory can you check if this insert statement has inserted 165 or 16.5 in the database.
INSERT INTO myTable (myPrice) VALUES (16,5);
select * from myTable where myPrice < 17;
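A common way to sidestep the locale issue entirely is to pass the value as a bound parameter rather than splicing it into the SQL text, so no decimal separator is ever rendered as text. A sketch using Python's sqlite3 module (standing in for whatever driver the application actually uses; the table and column names are taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (myPrice REAL, myAmount REAL)")

# The driver sends 16.5 as a number, so the server's comma/point
# locale setting never sees a textual decimal separator.
conn.execute(
    "INSERT INTO myTable (myPrice, myAmount) VALUES (?, ?)",
    (16.5, 10),
)

print(conn.execute("SELECT myPrice, myAmount FROM myTable").fetchone())
# (16.5, 10.0)
```

The placeholder syntax varies by driver (`?`, `%s`, `:name`), but every mainstream database API offers some form of parameter binding.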

INSERT Query SQL (Error converting data type nvarchar to (null))

I'm trying to run an INSERT query, but it fails with an error about converting nvarchar to (null). Here's the code:
INSERT Runtime.dbo.History (DateTime, TagName, vValue)
VALUES ('2015-09-10 09:00:00', 'ErrorComment', 'Error1')
Error message:
Error converting data type nvarchar to (null).
The problem is at the vValue column.
column vValue(nvarchar, null)
How it looks in the database:
The values inside vValue are placed by the program I'm using. I'm just trying to manually insert into the database.
Last post was with the wrong column, I apologize.
After contacting Wonderware support I found out that INSERT is not supported on the vValue column by design. It's a string value, and updates are supposed to be carried out via the StringHistory table.
What is the type of the vValue column in the database?
If it's float, you should insert a number, not a string.
Casting "Error1" to FLOAT makes no sense: a float is a number, for example 1.15, 12.00, or 150.15.
When the database tries to CAST "Error1" to float, it attempts to turn the text "Error1" into a number and fails, which is logical.
You should insert a number into the column.
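The failed conversion is easy to reproduce outside the database; a minimal Python sketch of the same idea:

```python
# Converting a non-numeric string to a number fails for the same
# reason the database's implicit CAST does.
try:
    float("Error1")
except ValueError as exc:
    print("conversion failed:", exc)
```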
I think I can help you with your problem since I've got a decent test environment to experiment with.
Runtime.dbo.History is not a table you can interact directly with, it is a View. In our case here the view is defined as:
select * from [INSQL].[Runtime].dbo.History
...Which I believe implies the History data you are viewing is from the Historian flat file storage itself, a Wonderware Proprietary system. You might see some success if you expand the SQL Server Management Studio's
Server Objects -> Linked Servers -> INSQL
...and play with the data there but I really wouldn't recommend it.
With that said, for what reason do you need to insert tag history? There might be other workarounds for the purpose you need.

How to write a SQL command to convert two SQL columns and include a calculation in the same query

Following is my code :
SELECT sum_date, sum_accname, sum_description,
CASE WHEN debit NOT LIKE '%[^.0-9]%'
THEN CAST(debit as DECIMAL(9,2))
ELSE NULL
END AS debit,
CASE WHEN credit NOT LIKE '%[^.0-9]%'
THEN CAST(credit as DECIMAL(9,2))
ELSE NULL
END AS credit
FROM sum_balance
While viewing the report, it shows an error: Error converting data type varchar to numeric. I also need the sum of the credit and debit columns in the same query. With the code above, if I include only one column in the conversion it works, but adding the second column produces the error. I can't figure out the problem.
The problem is that your debit and credit columns are text and thus can contain anything. You're attempting to limit it to only numeric values with NOT LIKE '%[^.0-9]%' but that's not enough because you could have a value like 12.3.6.7 which cannot convert to a decimal.
There is no way in SQL Server that I'm aware of using LIKE to achieve what you're trying to achieve, because LIKE does not support the full range of regex operations -- in fact, it's quite limited. In my opinion, you're torturing the database design by trying to multi-purpose those fields. If you're looking to report on numeric data, then store them in numeric fields. That assumes, of course, you have control over the schema.
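The weakness of the LIKE filter can be sketched with a real regular expression. Python is used here because T-SQL's LIKE has no full regex support; the pattern below is an assumption about what counts as numeric in these columns:

```python
import re

def is_numeric(s: str) -> bool:
    # Digits with at most one optional fractional part -- stricter than
    # the LIKE '%[^.0-9]%' check, which only rejects bad characters.
    return re.fullmatch(r"\d+(\.\d+)?", s) is not None

# Passes the question's LIKE filter (contains only dots and digits)
# but is not a valid decimal:
assert not is_numeric("12.3.6.7")
assert is_numeric("12.36")
```

On SQL Server 2012 and later, a server-side alternative worth checking is TRY_CONVERT(DECIMAL(9,2), debit), which returns NULL instead of raising an error when the value cannot be converted.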
