Unexplained 'Invalid Operation' error in Access query with SQL backend - sql-server

I am trying to migrate the entire backend of an Access application onto SQL Server. The first part of my project involves moving all of the tables whilst making minimal changes after the migration (no SQL views, pass-through queries, etc. yet).
I have two queries in particular that I am using here:
ProductionSystemUnAllocatedPurchases - which executes and returns a resultset successfully.
I believe the QtyAvailableOnPurchase field could be the problem here. This is its full formula (sorry, it's extremely complex):
QtyAvailableOnPurchase: IIf((IIf([Outstanding Qty]>([P-ORDER-T with Qty Balance]![QTY]-[SumOfQty]),
    ([P-ORDER-T with Qty Balance]![QTY]-[SumOfQty]),[Outstanding Qty]))>0,
    (IIf([Outstanding Qty]>([P-ORDER-T with Qty Balance]![QTY]-[SumOfQty]),
    ([P-ORDER-T with Qty Balance]![QTY]-[SumOfQty]),[Outstanding Qty])),0)
ProductionSystemUnAllocatedPurchasesTotal - which gives an 'Invalid Operation' error message.
The strange thing for me is that the first query works perfectly fine, but the second, which uses the first as a source table, gives me this error message when it executes. The second query works perfectly fine with an Access backend, but fails with SQL Server tables.
Any ideas?

Can QtyAvailableOnPurchase be NULL? That would explain why Sum fails. Use Nz(QtyAvailableOnPurchase,0) instead.
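For illustration, a minimal sketch of the totals query with that wrapper (ProductCode and the alias are placeholders, not the actual field names from the database):
SELECT ProductCode, Sum(Nz([QtyAvailableOnPurchase],0)) AS TotalQtyAvailable
FROM ProductionSystemUnAllocatedPurchases
GROUP BY ProductCode;
Nz() is evaluated on the Access side, so it still works when the underlying tables are linked SQL Server tables.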

My approach is to decompose queries. Create two queries:
The first query selects the needed data.
The second query applies the group operations (e.g. Sum).
That gives you an easy way to check every step.
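A rough illustration of the idea (a sketch only; PurchaseID is a placeholder for whatever the real key/grouping field is):
Query 1 (qrySelectPurchases):
SELECT PurchaseID, [Outstanding Qty], QtyAvailableOnPurchase
FROM ProductionSystemUnAllocatedPurchases;
Query 2 (qryPurchaseTotals):
SELECT PurchaseID, Sum(QtyAvailableOnPurchase) AS TotalAvailable
FROM qrySelectPurchases
GROUP BY PurchaseID;
If the first query runs but the second fails, the problem is in the aggregation step rather than in the row-level expressions.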

I have managed to find a solution to this error.
It seems that the problem is not so much with the query but rather with the data type on SQL Server. SQL Server Migration Assistant (SSMA) automatically maps any Number (Double) fields to float on SQL Server. This mapping needed to be changed manually to decimal.
According to the SO post below, decimal is preferred because it stores exact values with a precision of up to 38 digits (which is more than enough for my application), while float allows a wider range but stores the data as approximations.
Source: Difference between numeric, float and decimal in SQL Server
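For anyone applying the same fix after SSMA has already run, a hedged T-SQL sketch; the table and column names below are only examples borrowed from the question's formula, and the precision/scale and nullability should match your actual data. The linked table also needs to be refreshed in Access afterwards so it picks up the new type:
ALTER TABLE dbo.[P-ORDER-T]
ALTER COLUMN [QTY] decimal(18,2) NULL;   -- was float after the SSMA migration; adjust scale/nullability as needed
Alternatively, the type mapping can be changed in SSMA's project settings before migrating, so the columns come across as decimal in the first place.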

Related

Simple SSIS from Oracle to SQL Server drops rows

I have a very simple SQL query in my SSIS (VS 2017) Data Flow. It connects to Oracle via Native OLE DB\Oracle Provider for OLE DB and uses a SQL Command to query the Oracle view. The destination table is a SQL Server 2017 table. If I query only the first 20 columns or so (I am querying 57 columns), I get all 1,060,000-ish records. As I start to add more columns, the row count drops.
I have already removed any date fields from both tables and have done quite a few data conversions (the source table has several varchar2(4000) fields that need to be SUBSTRed to reasonable lengths in the SQL destination table). All fields in the destination table are nullable. When I pull the SQL out of SSIS and run it in SQL Developer, I get the right row count. When I run it in SSIS, it drops from 1.06M rows to around 28k.
I already tried the SQLChick hack (https://www.sqlchick.com/entries/2012/9/2/resolving-missing-records-in-ssis-from-oracle-source.html); it doesn't work and causes connection errors (I had to use VS Code to add that property to my Oracle connection, and when I went back to VS the connection was broken; on opening it back up to re-enter connection credentials, the extra property gets dropped). I have reduced and increased the Rows per Batch and Maximum insert commit size values to no avail. I have also set the RetainSameConnection property to True for all the Connection Managers. I'm at a loss! (As you can see from the pics, both jobs finish "successfully".)
This code returns all records:
SELECT
PIDM,
STUDENT_ID,
LAST_NAME,
FIRST_NAME,
MIDDLE_NAME,
LFM_NAME,
FML_NAME,
SORT_NAME,
GENDER,
ETHNIC_CODE,
ETHNIC_CODE_DESC,
LEGACY_CODE,
LEGACY_CODE_DESC,
ADDR_STR_LINE1,
ADDR_STR_LINE2,
ADDR_STR_LINE3,
ADDR_CITY,
ADDR_COUNTY,
ADDR_STATE,
ADDR_NATION,
ADDR_ZIPCODE,
ADDR_AREA_CODE,
ADDR_PHONE
FROM <TABLE_NAME>
This code returns only 28k:
SELECT
PIDM,
STUDENT_ID,
LAST_NAME,
FIRST_NAME,
MIDDLE_NAME,
LFM_NAME,
FML_NAME,
SORT_NAME,
GENDER,
ETHNIC_CODE,
ETHNIC_CODE_DESC,
LEGACY_CODE,
LEGACY_CODE_DESC,
ADDR_STR_LINE1,
ADDR_STR_LINE2,
ADDR_STR_LINE3,
ADDR_CITY,
ADDR_COUNTY,
ADDR_STATE,
ADDR_NATION,
ADDR_ZIPCODE,
ADDR_AREA_CODE,
ADDR_PHONE,
ORIGIN_STR_LINE1,
ORIGIN_STR_LINE2,
ORIGIN_STR_LINE3,
ORIGIN_CITY,
ORIGIN_COUNTY,
ORIGIN_NATION,
ORIGIN_STATE,
ORIGIN_ZIPCODE,
EMAIL,
HIGH_SCHOOL_CODE,
HIGH_SCHOOL_CODE_DESC,
HIGH_SCHOOL_CITY,
HIGH_SCHOOL_STATE,
HIGH_SCHOOL_GPA,
HIGH_SCHOOL_RANK,
PRIOR_COLLEGE_CODE,
PRIOR_COLLEGE_CODE_DESC,
PRIOR_COLLEGE_DEGREE_CODE,
PRIOR_COLLEGE_DEGREE_CODE_DESC,
PRIOR_COLLEGE_CITY,
PRIOR_COLLEGE_STATE,
ADMIT_FLAG,
GENERAL_STUDENT_FLAG,
CURRENT_ENROLLMENT_FLAG,
LETTER_CODES,
CONTACT_CODES,
COMMENT_CODES,
DIRECTORY_EMAIL,
ADDR_DIVISION_CODE,
HIGH_SCHOOL_CLASS_SIZE,
ETHNICITY,
RACE_CODE,
REGULATORY_RACE,
INT_LANG
FROM <TABLE_NAME>
Troubleshooting steps from the comments
If you run the all-columns version of the query in SQL Developer (or whatever the Oracle query tool is) using the same credentials as the SSIS package, do you get 28k rows or 1M?
1M records are returned in SQL Developer when I use the same credentials SSIS is using.
As painful as it may be, I would add one column, run, observe results. The first time you see a drop in row count, interrogate the heck out of the source data (data type, collation, whether some permission thing is at play). If nothing seems out of place, edit the question to include the full table definition and identify the first source column that throws the results off.
I've done that, column by column. I've even added a column that already existed (ADDR_STR_LINE1) as ORIGIN_STR_LINE1 and just aliased it, knowing that ADDR_STR_LINE1 had already worked and both fields share the exact same data types/lengths etc. I just ran it with this code: SELECT PIDM, ORIGIN_STR_LINE1, ORIGIN_STR_LINE2, ORIGIN_STR_LINE3, ORIGIN_CITY, ORIGIN_COUNTY, ORIGIN_NATION, ORIGIN_STATE, ORIGIN_ZIPCODE FROM ODSMGR.RECRUIT_PERSON_OSU and it returned 1M records.
While little consolation, you're hitting all the troubleshooting steps I'd employ. I suppose the next item I would try to rule out is some bizarre row-width issue/bug. Add a new data flow. As your source query, take one of your varchar2(4000) fields and duplicate it 60 times, i.e. SELECT ADDR_STR_LINE1 AS Col0, ADDR_STR_LINE1 AS Col1, ..., ADDR_STR_LINE1 AS Col59 FROM Owner.Table, and connect that to a Derived Column task (it doesn't need to do anything, just serve as an anchor point) and run it. Do you get 1M or 28k?
Adding more of my troubleshooting steps. 1) Created a view off the original table, casting all of the fields that would need to be truncated as VARCHAR (of the proper length based on the destination table). 2) Added/subtracted fields piecemeal until I thought I had a stable query, knowing that if I added <these fields>, <this many rows> would be dropped. But, for instance, I added PRIOR_COLLEGE_CITY and the first time my counts dropped from 1063202 to 952755, but when I ran it again later the counts dropped from 1063202 to 953989, so even if it were a data issue (it's not), it's not a consistent one.
Once I got my 953989 rows into the destination table, I compared which PRIOR_COLLEGE_CITY records were missing. In the Source Data Flow, I explicitly queried for those records, and they loaded fine, so again, not a data issue.
According to the picture you provided, some records are lost when the Source component outputs them, so we can determine that the problem occurs in the Source component.
In this case, please check the following:
1. Run the query (not the view, but the query inside the view) in your Oracle environment at the same time you execute the query in the Source component, then check whether the number of records returned from the Oracle environment equals the number of records returned from the SSIS Source component (see the count example after this list). Do this in a separate data task.
2. Check whether there have been any changes to the source table.
3. If the results returned from the Oracle environment are correct, compare them with the results returned by the SSIS Source and analyze the missing data.
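For step 1, a simple row-count comparison is usually enough; the view name below is the one mentioned in the comments above:
-- Run in SQL Developer (or your Oracle query tool) and compare with the
-- row count reported by the SSIS Source component for the same run
SELECT COUNT(*) FROM ODSMGR.RECRUIT_PERSON_OSU;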
I had a similar problem, mostly with the ODBC driver for Oracle.
The problem lies not only in the number of rows returned; in my case, for some reason, it also grouped the values of the first column.
The only solution I've found is to use another driver besides ODBC and OLE DB.
Using the native Oracle Destination and Oracle Source in VS 2017, it worked perfectly, and the performance was also better than ODBC and OLE DB.
I was having a similar issue: 1,470,491 rows in the Oracle view that I was querying. All would come across when I ran the package in Visual Studio, but only 377,257 rows would be read when I ran the package from SQL Agent. I tried the SQLChick "UseSessionFormat" hack that you mentioned. While editing the connection string used by the job (it comes in from configuration), I noticed that the connection string in the package had a "USERNAME" parameter as well as a "user id" parameter, but the configuration used by SQL Agent only had "USERNAME". I added the "user id" parameter to the configuration used by SQL Agent, and after that the job retrieves all 1,470,491 rows.

Access 2000 - invalid character value for cast specification (#0) - Access to SQL

Recently I migrated my Access 2000 backend data and tables to a SQL Server 2012 server. In the Access frontend I have linked the SQL tables that were migrated. Most of it is working fine except for (now) one form.
In this form the data is being loaded from the SQL server using this query:
SELECT * FROM qryAbonementens WHERE EindDatum is null or EindDatum>=now()
It also used a filter and sort:
((Lookup_cmbOrderNummer.Omschrijving="GJK"))
And the sort:
Lookup_cmbOrderNummer.Omschrijving
These things may be irrelevant, but I'll just post as much as possible.
The data loads in the form perfectly; however, when I try to change a record in the form, I keep getting the error:
invalid character value for cast specification (#0)
While checking out posts with the same problem I encountered this post:
MS Access error "ODBC--call failed. Invalid character value for cast specification (#0)"
This made me believe that I was missing a PK somewhere, so first I checked the linked table in Access design mode:
Tekst = text, Numeriek = numeric, Datum/tijd = date (sorry for it being Dutch).
The same table in SQL looked like this:
They both have a PK, so I guess this is not the problem.
However, when looking at both sets of data types you can see two differences, on the InkoopPrijs and VerkoopPrijs fields. In SQL these two are decimal(30,2), while in the design view of the linked Access table they are, I guess, unknown, and so they are being cast to text values. Perhaps this is the cause of my error message?
The record I am trying to change and which gives the error is this one (but it is on all the records):
I've read somewhere that adding a timestamp field to the SQL Server table could help, but I have no clue whether that also applies to my case or how to do it.
As you have guessed, the decimal(30,2) columns are the problem.
They are too large for Access to handle as numbers.
I can reproduce the problem with Access 2010, although I can enter numeric data into the field. But when I enter text, I get the exact same error message.
decimal(18,2) works fine (it's the default decimal precision for SQL Server 2008).
Surely you don't have prices in the 10^30 range? :-)
You might also consider using the money datatype instead, although I don't know how well Access 2000 works with that.
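If you want to try the smaller precision, something along these lines should do it. The table name is a guess based on the query name in the question (qryAbonementens), so substitute the real one and keep the nullability of the existing columns; see also the follow-up answer below about the side effect this change had:
ALTER TABLE dbo.Abonementen ALTER COLUMN InkoopPrijs decimal(18,2) NULL;
ALTER TABLE dbo.Abonementen ALTER COLUMN VerkoopPrijs decimal(18,2) NULL;
After changing the types, refresh the linked tables in Access (Linked Table Manager) so the new definitions are picked up.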
Alright, I got it fixed. @Andre451's post about changing the decimal(30,2) values on the SQL server to (18,2) gave me the "record has been changed by another user" error. This caused me to look at the problem differently, and instead of fixing the
invalid character value for cast specification (#0)
error, I looked at the
record was changed by another user
error.
I came across this post: Linked Access DB "record has been changed by another user"
Here someone suggested adding a TimeStamp field to the SQL table. So I did, and now it seems to work again! And it also seems to work with the original decimal(30,2) values!
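For anyone wondering what that looks like on the SQL Server side, it is a one-liner; the table name is again a guess based on the query name in the question, and the linked table has to be refreshed in Access afterwards so it sees the new column:
ALTER TABLE dbo.Abonementen ADD Upd_RowVersion rowversion;   -- "timestamp" is the older synonym for the rowversion type
Access uses the rowversion column for its concurrency check on a linked SQL Server table instead of comparing every column, which is why it makes the "record has been changed by another user" error go away.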

ColdFusion 8 + MSSQL 2005 and CLOB datatype on resultset

The environment I am working with is CF8 and SQL Server 2005, and the CLOB datatype is disabled in the CF Administrator. My concern is: will there be a performance ramification from enabling the CLOB datatype in the CF Administrator?
The reason I want/need to enable it is, SQL is building the AJAX XML response. When the response is large, the result is either truncated or returned with multiple rows (depending on how the SQL developer created the stored proc). Enabling CLOB allows the entire result to be returned. The other option I have is to have SQL always return the XML result in multiple rows and have CF join the string for each result row.
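For context, the pattern being described is roughly the following (a sketch with made-up object names, not the actual stored proc): the proc builds the whole XML document server-side and hands it back as one long character value, which is the kind of result the CLOB setting governs.
CREATE PROCEDURE dbo.GetCustomersAsXml
AS
BEGIN
    DECLARE @xml nvarchar(max);
    -- build the entire response as a single value rather than letting the
    -- driver return the FOR XML stream chopped into multiple short rows
    SET @xml = (SELECT CustomerID AS "@id", CustomerName AS "Name"
                FROM dbo.Customers
                FOR XML PATH('Customer'), ROOT('Customers'));
    SELECT @xml AS XmlResult;
END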
Anyone with some experience with this idea or have any thoughts?
Thanks!
I really think that returning CLOB data is likely to be less expensive than concatenating multiple rows of data into an XML string and then parsing it (ick!). What you are trying to do is what CLOB is designed for. JDBC handles it pretty well. The performance hit is probably negligible. After all, you have to return the same amount of character data either way, whether in multiple rows or in a single field. And having to "break it up" on the SQL side and then "reassemble" it on the CF side seems like reinventing the wheel, to be sure.
I would add that questions like this sometimes mystify me. A modest amount of testing would seem to be able to answer this question to your own satisfaction - no?
I would just have the StoredProc return the data set, or multiple data sets, and just build the XML the way you need it via CF.
I've never needed to use CLOB. I almost always stick to the varchar datatype, and it seems to do the job just fine.
There are also options where you could call the stored proc, which triggers MSSQL to generate an actual XML file (not just a string) and simply returns the file name to you. Then you can use CFFILE action="read" to grab the XML string and parse it accordingly. This assumes your web server and DB have a common file storage area.

SQL Server 2000 invalid Floating Point Operation

When I run my .NET app connected to SQL Server 2000, I get the error "invalid Floating Point Operation". I searched for the cause of the error and found this link http://fugato.net/2005/02/08/sql-server-nastiness which says there may be bogus data in one of the columns.
I have a backup with month-old data; when I connect to the old database, it works fine.
How do I filter out the bogus data in the table?
One method might be to use a cursor to perform the problematic operation row by row, printing IDs as you go along; when the error occurs, you can refer to the printed IDs to see which row contains the erroneous data.
There is probably a better way, however; this is just what came to mind!
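A rough sketch of that approach; the table, key and column names are placeholders, and the cast inside the loop should mirror whatever conversion your app or query actually performs, so the last ID printed before the error points at the bad row:
DECLARE @id int, @val varchar(50);
DECLARE bad_row_cursor CURSOR FOR
    SELECT id, suspect_col FROM dbo.MyTable;
OPEN bad_row_cursor;
FETCH NEXT FROM bad_row_cursor INTO @id, @val;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT CAST(@id AS varchar(20));           -- note the id before attempting the conversion
    SELECT CAST(@val AS float) AS converted;  -- the operation that fails on bogus data
    FETCH NEXT FROM bad_row_cursor INTO @id, @val;
END
CLOSE bad_row_cursor;
DEALLOCATE bad_row_cursor;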
You need to narrow down your problem:
When exactly does this happen? What SQL query is being executed that causes the problem?
Once you have the query, look at what the query does and check those tables. Are you possibly converting a VARCHAR field into a numeric value, and some values aren't all numeric? (See the check below.)
Or are you reading data using a SqlDataReader and not paying attention to the fact that the data contained in SQL Server might not be what you expect?
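If the VARCHAR-to-numeric conversion is the suspicion, a quick way to flush out offending values; table and column names are placeholders, and ISNUMERIC is not bulletproof, but it narrows things down and works on SQL Server 2000:
SELECT id, suspect_col
FROM dbo.MyTable
WHERE ISNUMERIC(suspect_col) = 0;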
select col
from tbl
group by col
having count(col) = 1
There's probably only one instance of the bad value.
Do you know which table(s) are causing the problem? Can you run a SELECT * FROM ... against these tables and get results? If so, select from the tables, order by each of the FLOAT / REAL / NUMERIC / DECIMAL columns, and look at the ends to see if there are any 'oddities'.
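For example, per suspect column (placeholder names again):
SELECT TOP 100 * FROM dbo.MyTable ORDER BY float_col ASC;
SELECT TOP 100 * FROM dbo.MyTable ORDER BY float_col DESC;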

Linked Informix table in MS SQL Server ignoring criteria?

I'm having a problem with a linked Informix table in MS SQL Server 2008 R2. When I query this table, it seems to ignore some of the criteria I'm passing to it but not others. For example, if I put a condition on the rowdate field, the remote query part of the execution plan does not show any WHERE clause, but if I put criteria on another field such as ACD, it does.
It seems it does not pass any criteria on the rowdate field but does on all others.
I know the field is indexed on the Informix side. If it helps the table I’m linking is from Avaya CMS and it is linked via the OpenLink ODBC driver.
EDIT:
As far as I know it is Informix Dynamic Server 2000 and it is on Solaris. The column comes up as a DATE data type, which is correct. I have tried passing the criteria as '2010-08-03 00:00:00', '2010-08-03', CONVERT(date,'2010-08-03') and a few more variations. When the data is returned to SQL Server it is in the format yyyy-mm-dd.
When I view the execution plan I can see the remote query with all the other criteria followed by a filter for only the rowdate field.
I know that rowdate is indexed and that the driver does normally communicate that information, as we use it in other applications (Business Objects and MS Access) and they don't have a problem.
I managed to figure it out, but it is the strangest thing ever. I went down the route of passing the date in different formats. My default is to use the normal YYYY-MM-DD; that of course did not work, so I tried YYYY-MMM-DD, still nothing. After going through LOTS of combinations I found one that works, Mmm-DD-YYYY, and it has to be exactly that! SEP-21-2010 won't work, but Sep-21-2010 does.
I wonder if this is just a strange hang-up from Informix or something in the driver; anyway, it works.
On a side note, has anyone noticed how strange it is that people from America write the date month, day, year? Stop and think about it for a second: do you say the number 2410 as "four hundred, ten, two thousand"? The best part is to ask yourself this: what day is American Independence Day? Most Americans say "That's easy, you limey person, it is the 4th of July". Hmmm, day, month (year); the only date they say the right way round is the date they got their independence. I will leave it up to the SO community to see the irony in that.
Example query below:
select *
from OPENQUERY (AVAYA, 'select row_date,starttime,intrvl,acd from root.hagent where
row_date = ''NOV-22-2012'' and acd = 1 and split = 1 and starttime = 1900')
By the way I managed to extract accurate data via both MMM-DD-YYYY and Mmm-DD-YYYY.
