SQL Server 2000 invalid Floating Point Operation - sql-server

When I run my .net app connected to SQL Server 2000, I get the error "Invalid Floating Point Operation". I searched for the cause and found this link http://fugato.net/2005/02/08/sql-server-nastiness which says there may be bogus data in one of the columns.
I have a backup with month-old data; when I connect to the old database, it works fine.
How do I filter out the bogus data in the table?

One method might be to use a cursor to perform the problematic operation row by row, printing IDs as you go; when the error occurs, the printed IDs tell you which row contains the erroneous data.
There is probably a better way, however; this is just what came to mind!
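A minimal sketch of that idea in T-SQL, assuming a table tbl with an integer id column and a suspect float column col (all names here are placeholders, not from the question):

```sql
DECLARE @id INT, @dummy FLOAT;

DECLARE c CURSOR FOR SELECT id FROM tbl;
OPEN c;
FETCH NEXT FROM c INTO @id;

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @id;                                          -- last id printed = bad row
    SELECT @dummy = col * 1.0 FROM tbl WHERE id = @id;  -- the failing operation
    FETCH NEXT FROM c INTO @id;
END

CLOSE c;
DEALLOCATE c;
```

Substitute whatever operation actually raises the error for the `col * 1.0` line; the last id printed before the failure is the offending row.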

You need to narrow down your problem:
When exactly does this happen? What SQL query is being executed that causes the problem?
Once you have the query, look at what it does; check those tables - are you possibly converting a VARCHAR field into a numeric value, and some values aren't numeric?
Or are you reading data using a SqlDataReader without paying attention to the fact that the data in SQL Server might not be what you expect?

SELECT col
FROM tbl
GROUP BY col
HAVING COUNT(col) = 1
There's probably only one instance of the bad value.

Do you know which table(s) are causing the problem? Can you run a SELECT * FROM... against these tables and get results? If so, select from the tables, order by each of the FLOAT / REAL / NUMERIC / DECIMAL columns, and look at both ends to see if there are any 'oddities'.
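Something like the following, repeated per floating-point column (table and column names are placeholders):

```sql
SELECT TOP 100 id, float_col FROM tbl ORDER BY float_col ASC;   -- smallest values
SELECT TOP 100 id, float_col FROM tbl ORDER BY float_col DESC;  -- largest values
```

Absurdly large or small magnitudes at either end are usually the corrupt values.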

Related

Unexplained 'Invalid Operation' error in Access query with SQL backend

I am trying to migrate the entire backend of an Access application onto SQL Server. The first part of my project involves moving all of the tables while making minimal changes after the migration (no SQL views, pass-through queries, etc. yet).
I have two queries in particular that I am using here:
ProductionSystemUnAllocatedPurchases - Which executes and returns a resultset successfully.
I believe the QtyAvailableOnPurchase field could be the problem. This is its full formula (sorry, it's extremely complex):
IIf((IIf([Outstanding Qty]>([P-ORDER-T with Qty Balance]![QTY]-[SumOfQty]),
([P-ORDER-T with Qty Balance]![QTY]-[SumOfQty]),[Outstanding Qty]))>0,
(IIf([Outstanding Qty]>([P-ORDER-T with Qty Balance]![QTY]-[SumOfQty]),
([P-ORDER-T with Qty Balance]![QTY]-[SumOfQty]),[Outstanding Qty])),0)
ProductionSystemUnAllocatedPurchasesTotal - Which gives an 'Invalid Operation' error message
Now the strange thing for me is that the first query works perfectly fine, but the second, which uses the first as a source table, gives me this error message when executing. The query works perfectly fine with an Access backend but fails with SQL Server tables.
Any Ideas?
Can QtyAvailableOnPurchase be NULL? That would explain why Sum fails. Use Nz(QtyAvailableOnPurchase,0) instead.
My approach is to decompose queries. Create two queries:
The first query selects the needed data.
The second query applies the group operations (e.g. Sum).
That gives you an easy way to check every step.
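For example, a sketch in Access SQL (the saved query name and columns are assumptions, not the poster's actual objects):

```sql
-- Query 1 (saved as qryDetail): select the needed data, guarding against NULLs
SELECT PurchaseID, Nz(QtyAvailableOnPurchase, 0) AS QtyAvail
FROM ProductionSystemUnAllocatedPurchases;

-- Query 2: apply the group operation on top of query 1
SELECT PurchaseID, Sum(QtyAvail) AS TotalAvailable
FROM qryDetail
GROUP BY PurchaseID;
```

If query 1 runs but query 2 fails, the problem is in the aggregation (often a NULL or an out-of-range value reaching Sum).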
I have managed to find a solution to this error.
It seems the problem is not so much with the query as with the data type on SQL Server. SQL Server Migration Assistant (SSMA) automatically maps any Number (Double) field to float on SQL Server. This mapping needed to be changed manually to Decimal.
According to this SO post, decimal is preferred for its exact precision of up to 38 digits (more than enough for my application), while float allows a wider range but stores the data as approximations.
Source: Difference between numeric, float and decimal in SQL Server

"Conversion failed because the data value overflowed the specified type" error applies to only one column of the same table

I am trying to import data from database access file into SQL server. To do that, I have created SSIS package through SQL Server Import/Export wizard. All tables have passed validation when I execute package through execute package utility with "validate without execution" option checked. However, during the execution I received the following chunk of errors (using a picture, since blockquote uses a lot of space):
Upon investigation, I found exactly the table and the column causing the problem. However, this is a problem I have been trying to solve for a couple of days now, and I'm running dry on possible options.
Structure of the troubled table column
As noted in the error list, the trouble occurs in the RHF Repairs table on the Date Returned column. In Access, the column in question is of Date/Time type. Inside the actual table, all inputs are in the form 'mmddyy', which, when clicked, turns into the 'mm/dd/yyyy' format:
In SSIS package, it created OLEDB Source/Destination relationship like following:
Inside this relationship, the data type in both the output columns and the external columns is DT_DATE (I still think it is a key cause of my problems). What bugs me the most is that the column adjacent to Date Returned is exactly the same as what I described above, and none of the errors apply to it or any other column of the same type; Date Returned is literally the only black sheep in the flock.
What have I tried
I have tried every option from the following thread; the error remains the same.
I tried the Data Conversion option, trying to convert this column to a timestamp or even a Unicode string. It didn't work.
I tried to specify the data type with the Advanced Source Editor as both timestamp and Unicode string. I tried specifying it only in the output columns, then in both the external and output columns; same result.
Plowing through the data in the Access table also did not turn up anything. All entries use the same 6-character formatting throughout.
At this point I have literally exhausted all the options I could think of. Can you please point me in the right direction on what else I could try to resolve this? It has been driving me nuts for the last two days.
PS: On my end, I will plow through each row individually, while trying not to get discouraged by the fact that there are 4,000+ row entries...
UPDATE:
I resolved this matter by plowing through the data. There were 3 faulty entries among the 4,000+ rows... Since the issue was resolved in a manner unlikely to help others, please close this question.
It sounds to me like you have one or more bad dates in the column. With 4,000 rows, I actually would visually scan and look for something very short or very long.
You could change your source to select TOP 1 instead of all 4,000. Does that row insert? If so, that lends weight to the bad-date scenario. If that one row does not flow through, it is another issue.
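To avoid scanning thousands of rows by eye, a query against the Access source can surface suspect dates directly. A minimal sketch, using the table and column names from the question (run in Access, where date literals use # delimiters; the cutoff years are assumptions):

```sql
SELECT *
FROM [RHF Repairs]
WHERE [Date Returned] < #1900-01-01#
   OR [Date Returned] > #2100-01-01#;
```

Any date outside the range your business data can plausibly contain (such as a year entered as 216) is worth inspecting.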
(I will just share my experience, how I overcame this problem, in case it helps someone)
My scenario:
One of the columns, Identifier, in the OLE DB data source had changed from int to bigint. I was getting the error message - Conversion failed because the data value overflowed the specified type.
Basically, it was telling me the source data size was greater than the destination data size.
What I have tried:
In both the OLE DB data source and destination, I clicked "Show Advanced Editor" and checked that the data type of Identifier was bigint. But still, I was getting the error message.
The solution that worked for me:
In the OLE DB data source --> Show Advanced Editor --> Input and Output Properties --> OLE DB Source Output --> there are two options: External Columns & Output Columns.
In my case, although the Identifier column in the External Columns showed the data type bigint, in the Output Columns it showed int. So I changed that data type to bigint, and it solved my problem.
Now and then I get this problem, especially when I have a big table with lots of data.
I hope it helps.
We had this error when someone had entered the year as 216 instead of 2016. The data source was reading the data ok but it was failing on the OLEDB destination task.
We use a script task in the data flow for validation. By adding a check that dates aren't too far in the past we are able to trap this kind of error and at least generate a meaningful error message to find and correct the problem quickly.

Access 2000 - invalid character value for cast specification (#0) - Access to SQL

Lately I have migrated my Access 2000 backend data and tables to a SQL Server 2012 server. In the Access frontend I have linked the SQL tables that were migrated. Most of it is working fine except for (now) one form.
In this form the data is being loaded from the SQL server using this query:
SELECT * FROM qryAbonementens WHERE EindDatum is null or EindDatum>=now()
It also uses a filter and sort:
((Lookup_cmbOrderNummer.Omschrijving="GJK"))
And the sort:
Lookup_cmbOrderNummer.Omschrijving
These things may be irrelevant, but I'll just post as much as possible.
The data loads in the form perfectly, however when I try to change a record in the form, I keep getting the:
error invalid character value for cast specification (#0)
While checking out posts with the same problem I encountered this post:
MS Access error "ODBC--call failed. Invalid character value for cast specification (#0)"
This made me believe that I was missing a PK somewhere so First I checked the linked table in Access design mode:
Tekst = text, Numeriek = numeric, Datum/tijd = date (sorry for it being dutch).
The same table in SQL looked like this:
They both have PK so I guess this is not the problem.
Though, when looking at both data types, you can see two differences, on the InkoopPrijs and VerkoopPrijs fields. In SQL these two are decimal(30,2), while in the design view of the linked Access table they are, I guess, unknown, and so are being cast to text values. Perhaps this is the cause of my error message?
The record I am trying to change and which gives the error is this one (but it is on all the records):
I've read somewhere that adding a timestamp field to the SQL server could help, but I have no clue whether it also works in my case or how to do it.
As you have guessed, the decimal(30, 2) columns are the problem.
They are too large for Access to use as numbers.
I can reproduce the problem with Access 2010, although I can enter numeric data into the field. But when I enter text, I get the exact same error message.
decimal(18,2) works fine (it's the default decimal precision for Sql Server 2008).
Surely you don't have prices in the 10^30 range? :-)
You might also consider using the money datatype instead, although I don't know how well Access 2000 works with that.
Alright, I got it fixed. @Andre451's post about changing the decimal(30,2) values on the SQL server to (18,2) gave me the "record is changed by another user" error. This caused me to look at the problem differently, and instead of fixing the
error invalid character value for cast specification (#0)
error I looked at the
record was changed by another user error
I came across this post: Linked Access DB "record has been changed by another user"
Here someone suggested adding a TimeStamp field to the SQL table. So I did, and now it seems to work again! It even seems to work with the original decimal(30,2) value!
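For reference, adding such a column on the SQL Server side is a one-liner; the table name here is a guess based on the query in the question:

```sql
-- rowversion (formerly called timestamp) lets Access detect concurrent changes
-- reliably instead of comparing every column value
ALTER TABLE dbo.Abonementen ADD RowVersion rowversion;
```

After running it, refresh the linked table in Access so the new column is picked up.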

string or binary data would be truncated - but don't know parameter values

The company program prints the "string or binary data would be truncated" error to an error table when trying to insert into a specific table. I do realize from other posts that the cause is the current table structure: one or more fields are too short. Unfortunately, I do not have access to the values of the actual query; instead, the stack trace dumps them as #parametername1, #parametername2, etc.
Is there any way I can see, with some kind of monitoring tool in SQL Server 2012, which parameter and value failed? I mean, SQL Server returned the error, so presumably I can log it if I reproduce the error?
You can set up a trace in SQL Profiler. For the statement, it will give you the list of columns and the matching values being inserted.
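On SQL Server 2012, an Extended Events session is a lighter-weight alternative to Profiler. A sketch that captures the statement text whenever error 8152 ("string or binary data would be truncated") is raised; the session name is made up:

```sql
CREATE EVENT SESSION [capture_truncation] ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.sql_text, sqlserver.client_app_name)
    WHERE error_number = 8152   -- string or binary data would be truncated
)
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION [capture_truncation] ON SERVER STATE = START;
```

Query the ring_buffer target after reproducing the error to see the offending statement and its parameter values.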

Truncation error SQL server 2005 Delete/Insert

So I am trying to do a bulk insert with SSIS and continually get:
"Microsoft SQL Native Client" Hresult: 0x80004005 Description: "String or binary data would be truncated."
Even though I already have a data conversion for every column into the exact same type as the table the rows are being inserted into. I used a view, and the data looks like it is supposed to just before the DB insert step. I still get the error.
Next I went into SQL Server Management Studio and set up an insert query into that damned table, and still got the same truncation error. I then did SET ANSI_WARNINGS OFF and the insert works; the data looks good in the table. Now when I try to delete this row, I get the truncation error.
My question, besides any general input on the situation, is: how can I turn off ANSI_WARNINGS within SSIS so that the bulk load can go through?
It sounds like you have a column that is too narrow to accept the data you are submitting.
Can you verify if this is or isn't the case?
I had a very similar issue arise frequently while we were nailing down a schema with a third party.
Can you select the LEN of all of the columns in the view? That could help find the issue.
Other than that, the only way I have found is to print out a report of the actual lengths of the source data columns.
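A quick way to produce that report, with placeholder view and column names:

```sql
-- Compare each maximum against the destination column's declared width
SELECT MAX(LEN(col1)) AS max_len_col1,
       MAX(LEN(col2)) AS max_len_col2
FROM dbo.SourceView;
```

Any maximum that exceeds the destination column's declared length identifies the column that triggers the truncation.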
Sounds like you've got one row (possibly more, but it only takes one!) where a data value exceeds the length of the table column. Doing a data conversion to the shorter type will move the error from the Destination to whatever transform does the conversion. What I'd recommend is creating a Flat File Destination, tying the error output of your transforms to it, and changing the error result to 'Redirect Row'. This will allow all the valid rows to go through, and provide you with a copy of the row(s) that are getting truncated for you to handle manually.
Are there triggers on the table you're inserting into? Then the error may come from an action that the trigger takes.
Turns out that in SSIS you can set up the OLE DB Destination with "Data Access Mode > Table or view: Fast Load". When I chose this setting, the bulk insert went through without any warnings or errors, and the data looks perfect in the database. I'm not sure exactly what this change did, but it worked, and after 16 hours on one SSIS insert I'm happy with the result.
Thanks for the suggestions.
