How to remove "FOR UPDATE OF" syntax in either DashDB or OpenOffice Calc?

I am connecting to DashDB through an ODBC connection in OpenOffice Calc. I can correctly display tables.
However, I am not able to display any data, because OpenOffice appends a FOR UPDATE clause to the SELECT statement, making DashDB send back SQL1667, as the clause is not supported for ORGANIZED BY COLUMN tables.
I don't see any option to remove this in OpenOffice. Is there a way to tell DashDB to ignore such clauses?
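For illustration, the statement OpenOffice generates looks roughly like the following (schema, table, and column names are placeholders), while the same SELECT without the appended clause runs fine against a column-organized table:
-- What OpenOffice Calc sends (rejected with SQL1667 on ORGANIZED BY COLUMN tables):
SELECT "COL1", "COL2" FROM "MYSCHEMA"."MYTABLE" FOR UPDATE OF "COL1"
-- What DashDB accepts:
SELECT "COL1", "COL2" FROM "MYSCHEMA"."MYTABLE"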

Related

Simple SSIS from Oracle to SQL Server drops rows

I have a very simple SQL query in my SSIS (VS 2017) Data Flow. It connects to Oracle via Native OLE DB\Oracle Provider for OLE DB and uses a SQL Command to query the Oracle view. The destination table is a SQL Server 2017 table. If I query only the first 20 columns or so (I am querying 57 columns), I get all 1,060,000ish records. As I start to add more columns, the row count drops.
I have already removed any date fields from both tables, and have done quite a few data conversions (the source table has several varchar2(4000) fields that need to be SUBSTR'd to reasonable lengths in the SQL destination table). All fields in the destination table are nullable. When I pull the SQL out of SSIS and run it in SQL Developer, I get the right row count. When I run it in SSIS, it drops from 1.06 M rows to around 28k.
I already tried the SQLChick hack (https://www.sqlchick.com/entries/2012/9/2/resolving-missing-records-in-ssis-from-oracle-source.html); it doesn't work and causes connection errors (I had to use VS Code to add that property to my Oracle connection, and when I went back to VS the connection was broken; on reopening it to re-enter connection credentials, the extra property gets dropped). I have reduced and increased the Rows per Batch and Maximum insert commit size values to no avail. I have also set the RetainSameConnection property to True for all the Connection Managers. I'm at a loss! (As you can see from the pics, both jobs finish "successfully".)
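For context, the SUBSTR truncation mentioned above would look something like this on the Oracle side (the target lengths are illustrative, and <TABLE_NAME> is the same placeholder used in the queries below):
SELECT PIDM,
       SUBSTR(ADDR_STR_LINE1, 1, 250) AS ADDR_STR_LINE1,
       SUBSTR(ADDR_STR_LINE2, 1, 250) AS ADDR_STR_LINE2
       -- ...and so on for the other varchar2(4000) fields
FROM <TABLE_NAME>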
This code returns all records:
SELECT
PIDM,
STUDENT_ID,
LAST_NAME,
FIRST_NAME,
MIDDLE_NAME,
LFM_NAME,
FML_NAME,
SORT_NAME,
GENDER,
ETHNIC_CODE,
ETHNIC_CODE_DESC,
LEGACY_CODE,
LEGACY_CODE_DESC,
ADDR_STR_LINE1,
ADDR_STR_LINE2,
ADDR_STR_LINE3,
ADDR_CITY,
ADDR_COUNTY,
ADDR_STATE,
ADDR_NATION,
ADDR_ZIPCODE,
ADDR_AREA_CODE,
ADDR_PHONE
FROM <TABLE_NAME>
This code returns only 28k:
SELECT
PIDM,
STUDENT_ID,
LAST_NAME,
FIRST_NAME,
MIDDLE_NAME,
LFM_NAME,
FML_NAME,
SORT_NAME,
GENDER,
ETHNIC_CODE,
ETHNIC_CODE_DESC,
LEGACY_CODE,
LEGACY_CODE_DESC,
ADDR_STR_LINE1,
ADDR_STR_LINE2,
ADDR_STR_LINE3,
ADDR_CITY,
ADDR_COUNTY,
ADDR_STATE,
ADDR_NATION,
ADDR_ZIPCODE,
ADDR_AREA_CODE,
ADDR_PHONE,
ORIGIN_STR_LINE1,
ORIGIN_STR_LINE2,
ORIGIN_STR_LINE3,
ORIGIN_CITY,
ORIGIN_COUNTY,
ORIGIN_NATION,
ORIGIN_STATE,
ORIGIN_ZIPCODE,
EMAIL,
HIGH_SCHOOL_CODE,
HIGH_SCHOOL_CODE_DESC,
HIGH_SCHOOL_CITY,
HIGH_SCHOOL_STATE,
HIGH_SCHOOL_GPA,
HIGH_SCHOOL_RANK,
PRIOR_COLLEGE_CODE,
PRIOR_COLLEGE_CODE_DESC,
PRIOR_COLLEGE_DEGREE_CODE,
PRIOR_COLLEGE_DEGREE_CODE_DESC,
PRIOR_COLLEGE_CITY,
PRIOR_COLLEGE_STATE,
ADMIT_FLAG,
GENERAL_STUDENT_FLAG,
CURRENT_ENROLLMENT_FLAG,
LETTER_CODES,
CONTACT_CODES,
COMMENT_CODES,
DIRECTORY_EMAIL,
ADDR_DIVISION_CODE,
HIGH_SCHOOL_CLASS_SIZE,
ETHNICITY,
RACE_CODE,
REGULATORY_RACE,
INT_LANG
FROM <TABLE_NAME>
Troubleshooting steps from the comments
If you run the all-column version of the query in SQL Developer (or whatever the Oracle query tool is) using the same credentials as the SSIS package, do you get 28k rows or 1M?
1M records are returned in SQL Developer when I use the same credentials SSIS is using.
As painful as it may be, I would add 1 column, run, and observe the results. The first time you see a drop in row count, interrogate the heck out of the source data (data type, collation, whether some permission thing is at play). If nothing seems out of place, edit the question to include the full table definition and identify which source column is the first one that throws the results off.
I've done that, column by column. I've even added a column that already existed (ADDR_STR_LINE1) as ORIGIN_STR_LINE1 and just aliased it, knowing that ADDR_STR_LINE1 had already worked and both fields shared the exact same datatypes/lengths etc. I just ran it with this code: SELECT PIDM, ORIGIN_STR_LINE1, ORIGIN_STR_LINE2, ORIGIN_STR_LINE3, ORIGIN_CITY, ORIGIN_COUNTY, ORIGIN_NATION, ORIGIN_STATE, ORIGIN_ZIPCODE FROM ODSMGR.RECRUIT_PERSON_OSU and it returned 1m records.
While it's little consolation, you're hitting all the troubleshooting steps I'd employ. I suppose the next item I would try to rule out is some bizarre row width issue/bug. Add a new data flow. As your source query, take one of your varchar2(4000) fields and duplicate it 60 times, i.e. SELECT ADDR_STR_LINE1 AS Col0, ADDR_STR_LINE1 AS Col1, ..., ADDR_STR_LINE1 AS Col59 FROM Owner.Table, and connect that to a Derived Column task (it doesn't need to do anything, just serve as an anchor point) and run it. Do you get 1M or 28k?
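A cut-down version of that width test might look like the following (only a few of the ~60 duplicated columns are shown, and <TABLE_NAME> stands in for the real source table):
SELECT ADDR_STR_LINE1 AS Col0,
       ADDR_STR_LINE1 AS Col1,
       ADDR_STR_LINE1 AS Col2,
       ADDR_STR_LINE1 AS Col3
       -- ...repeat up to Col59 to reproduce the full row width
FROM <TABLE_NAME>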
Adding more of my troubleshooting steps. 1) Created a view off the original table, casting all of the fields that would need to be truncated as VARCHAR (proper length based on the destination table). 2) Added/subtracted fields piecemeal until I thought I had a stable query, knowing that if I added <this field>, <this many rows> would be dropped. But, for instance, I added PRIOR_COLLEGE_CITY and the first time my counts dropped from 1063202 to 952755, but then later I ran it again and the counts dropped from 1063202 to 953989, so even if it were a data issue (it's not), it's not a consistent one.
Once I got my 953989 rows into the destination table, I compared which PRIOR_COLLEGE_CITY records were missing. In the Source Data Flow, I explicitly queried for those records, and they loaded fine, so again, not a data issue.
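The view described in step 1 above would look roughly like this (the view name and target lengths are illustrative; the real view casts every oversized field down to its destination length):
CREATE OR REPLACE VIEW RECRUIT_PERSON_TRUNC AS
SELECT PIDM,
       STUDENT_ID,
       CAST(ADDR_STR_LINE1 AS VARCHAR2(250)) AS ADDR_STR_LINE1,
       CAST(PRIOR_COLLEGE_CITY AS VARCHAR2(100)) AS PRIOR_COLLEGE_CITY
       -- ...remaining columns cast the same way
FROM ODSMGR.RECRUIT_PERSON_OSU;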
According to the picture you provided, when the Source component outputs the records, some records are lost, so we can determine that this problem occurs in the Source component.
In this case, please try to check the following things:
1. Run the query (not the view, but the query inside the view) in your Oracle environment while executing the query in the Source component, then check whether the number of records returned from the Oracle environment is equal to the number of records returned from the SSIS Source component. Do this in a separate data task.
2. Check whether there have been any changes to the source table.
3. If the returned results are correct when running the query in the Oracle environment, try to compare the correct results with the results returned by the SSIS Source and analyze the missing data.
I had a similar problem, mostly with the ODBC driver for Oracle!
The problem lies not only in the number of rows returned; in my case, for some reason it also grouped the values of the first column.
The only solution I've found is to use another driver besides ODBC and OLE DB.
Using the native Oracle Destination and Oracle Source in VS 2017 it worked perfectly, and the performance was also better than ODBC and OLE DB.
I was having a similar issue: 1,470,491 rows in the Oracle view that I was querying, all would come across when I ran the package in Visual Studio, but only 377,257 rows would be read when I ran the package from SQL Agent. I tried the SQLChick "UseSessionFormat" hack that you mentioned. While editing the connection string used by the job (it comes in from configuration), I noticed that the connection string in the package had a "USERNAME" parameter as well as a "user id" parameter, but the configuration used by SQL Agent only had "USERNAME". I added the "user id" parameter to the configuration used by SQL Agent, and after that the job retrieves all 1,470,491 rows.
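For reference, the fix came down to the connection string used by the configuration; a minimal sketch (the provider and values here are illustrative, and only the USERNAME and user id parameters are taken from the answer above):
Provider=OraOLEDB.Oracle;Data Source=MYORADB;USERNAME=ssis_user;user id=ssis_user;Persist Security Info=True;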

Entity Framework and SQL Server OUTPUT clause

I'd like to use the SQL OUTPUT clause to keep a history of the records in my database while I'm using Entity Framework. To achieve this, EF would need to generate something like the following for a DELETE statement.
Delete From table1
output deleted.*, 'user name', getdate() into table1_hist
Where field = 1234;
The table table1_hist has the same columns as table1, with the addition of two columns to store the name of the user who performed the action and when it happened. However, EF doesn't seem to have a way to support this SQL Server clause, so I'm lost on how to implement it.
I looked at EF's source code, and the DELETE command is created inside an internal static method (GenerateDeleteSql in the System.Data.Entity.SqlServer.SqlGen.DmlSqlGenerator class), so I can't extend the class to add the behavior I want. It looks like I'll have to rewrite the SQL Server provider based on the existing code, but that is something I'd like to avoid...
So, my question is whether there's another option to do this (an extension, for example) or do I have to rewrite this provider?
Thank you.
Have you considered one of the following?
Using stored procedures to encapsulate your data logic
A DELETE trigger to capture the data (see the sketch after this list)
Change Data Capture (Enterprise edition only)
Not actually deleting the data, but merely setting a flag in the data to mark it as deleted
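A minimal sketch of the trigger option, assuming table1_hist has exactly table1's columns followed by a user-name column and a datetime column (as described in the question):
CREATE TRIGGER trg_table1_delete_history
ON table1
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Copy every removed row into the history table.
    -- SUSER_SNAME() records the database login, which may differ from the application user.
    INSERT INTO table1_hist
    SELECT d.*, SUSER_SNAME(), GETDATE()
    FROM deleted AS d;
END;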

How to force SQL Server to ignore an insert on a computed column instead of throwing an error?

I have a table to which I would like to add a computed column (with values different for each user - needed for permissions).
The problem is that this table is part of Microsoft Dynamics NAV, which doesn't know anything about computed columns.
I've managed to cheat NAV by changing the column type after NAV creates it, and I can read the data.
Now I'm stuck with inserts.
NAV doesn't use nullable columns, so it always tries to insert a default value, and SQL Server fails with an error on the computed column.
I've tried to write an INSTEAD OF INSERT trigger, but it seems that SQL Server does the check before it runs the trigger and still fails with the error.
Is there any way to force SQL Server to ignore the inserted value on a computed column?
Personally I wouldn't change the schema of a third-party application, especially a financial system. Instead of changing the tables you could create views - you can even create them in another database if you want - that include your computed column definition, then put your INSTEAD OF triggers on the views and do INSERTs through the views.
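A rough sketch of that approach, with illustrative object names and a placeholder expression standing in for whatever the per-user computed value should be:
-- View over the untouched NAV table, exposing the per-user value as a derived column
CREATE VIEW dbo.MyNavTable_V
AS
SELECT t.EntryNo,
       t.Amount,
       t.OwnerLogin,
       CASE WHEN t.OwnerLogin = SUSER_SNAME() THEN 1 ELSE 0 END AS CanView
FROM dbo.MyNavTable AS t;
GO
-- INSTEAD OF trigger: insert only the real columns; whatever value is supplied for CanView is ignored
CREATE TRIGGER dbo.MyNavTable_V_Insert
ON dbo.MyNavTable_V
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.MyNavTable (EntryNo, Amount, OwnerLogin)
    SELECT i.EntryNo, i.Amount, i.OwnerLogin
    FROM inserted AS i;
END;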

SSIS - How do I use a resultset as input in a SQL task and get data types right?

I am trying to merge records from an Oracle database table to my local SQL table.
I have a variable for the package that is an Object, called OWell.
I have a data flow task that gets the Oracle data as a SQL statement (select well_id, well_name from OWell order by Well_ID), and then a conversion task to convert well_id from a DT_STR of length 15 to a DT_WSTR, and well_name from a DT_STR of length 15 to a DT_WSTR of length 50. That is then stored in the recordset OWell.
The reason for the conversions is that the table I want to add records to has an identity field: SSIS shows well_id as a DT_WSTR of length 15 and well_name as a DT_WSTR of length 50.
I then have a SQL task that connects to the local database and attempts to add the records that are not there yet. I've tried various things, such as using OWell as a result set and referring to it in my SQL statement. Currently, I have the ResultSet set to None and the following SQL statement:
Insert into WELL (WELL_ID, WELL_NAME)
Select OWELL_ID, OWELL_NAME
from OWell
where OWELL_ID not in
(select WELL.WELL_ID from WELL)
For Parameter Mapping, I have Parameter 0, called OWell_ID, from my variable User::OWell, and Parameter 1, called OWell_Name, from the same variable. Both are set to VARCHAR, although I've also tried NVARCHAR. I do not have a Result Set.
I am getting the following error:
Error: 0xC002F210 at Insert records to FLEDG, Execute SQL Task: Executing the query "Insert into WELL (WELL_ID, WELL_NAME)
Select OWELL..." failed with the following error: "An error occurred while extracting the result into a variable of type (DBTYPE_STR)". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
I don't think it's a data type issue, but rather that I somehow am not using the resultset properly. How, exactly, am I supposed to refer to that recordset in my SQL task, so that I can use the two recordset fields and add records that are missing?
Your problem is that you are trying to read an object variable into a SQL task and refer to that variable in the SQL task.
To do what you are trying to do, you can use a Foreach Loop task. You can set the enumerator of a Foreach Loop to an object (recordset) variable and map its columns to variables that you can then pass as parameters into your SQL task. Your SQL code in the example above has another flaw in that you are trying to reference a variable in your package as if it were a table in your database. You need to change your SQL to a parameterized INSERT into WELL, as sketched below.
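Inside the Foreach Loop, the Execute SQL Task's statement then becomes a plain parameterized insert, for example:
-- Parameter 0 maps to the per-row Well_ID variable, Parameter 1 to the per-row Well_Name variable
INSERT INTO WELL (WELL_ID, WELL_NAME)
VALUES (?, ?)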
This approach, however, leaves out the step where you check whether the record exists before you insert it. A better overall approach would be to do this all in a data flow.
Do everything you are doing in your select-from-Oracle data flow. At the last step, instead of using a recordset destination pointing to the variable User::OWell, add a lookup against the local SQL table. Set the SQL statement there to select WELL.WELL_ID from WELL. On the Columns tab in your lookup, match Well_ID from your data flow (fields on the left) to Well_ID from your lookup (fields on the right) by dragging the well_id field from the left to the right to form a connector between the boxes. At the bottom of the dialog box, click Configure Error Output and set the error column value for the lookup output row to Redirect Row. Choose OK to save and close the lookup. Next, add an OLE DB destination to the data flow and connect it to the error output of the lookup (the red arrow). Point the destination to the SQL table and map the columns from the data flow to the appropriate columns in the output table. This will pass the rows from the Oracle data flow that do not exist in the SQL table into the bulk insert of the SQL table.
To infer missing rows, we either used a lookup task and then directed the unfound rows to an ordinary OLE DB destination (you just don't supply the identity column, obviously), or (where we were comparing a whole table) the SQLBI.com TableDifference component and routed the new rows to a similar OLE DB destination.
Individual INSERTs in a SQL Command task aren't terribly quick.

MS Access Database Check Box List Filters Missing On SQL Server back end

When I connect Access 2007 to SQL Server (whether by ADO recordset or by linked table) I no longer get check box lists (of available filter values) in the datasheet column filters.
Is this feature available only with MDB/ACCDB and/or DAO?
I think the check box in datasheet view of native Access tables is governed by the "Display Control" property in the table design. I don't recall what's available when the table is in SQL Server. If you provide a form in "datasheet view", you should be able to bind a check box control to the SQL Server column.
Edit: I think I misunderstood your question yesterday. If you click the Office Button, select Current Database, then put a check in the "ODBC fields" box under "Filter lookup options" ... does that do what you want?
I know we're breaking protocol by not opening a new question, but I'm going to answer this nevertheless so this thread will be complete. This is a more complete answer than the previous ones.
I think I have this topic nailed down now.
The lookup filters won't work with a recordsource that is not an Access object, and they don't work in linked tables directly.
You have to create a query of the linked table, for example: Select * from tblOrders, and use that query as the recordsource in order to get the lookup filters.
HOWEVER, I found a more flexible approach as well. I create passthrough queries to SQL Server and use those as my recordsource. Then, in code, I set the SQL of the passthrough queries like this:
Currentdb.QueryDefs("qpstOrders").SQL="Select * from Orders where OrderID =" & Me.OrderID
In the Current event of my subform, I change the query on the fly to pass the appropriate record, or it can just be a more generic query. The lookup filters work fine this way and the interaction with SQL Server is lightning fast.
1. Open the database that you want to optimize.
2. Click File > Options to open the Access Options dialog box.
3. In the left pane of the Access Options dialog box, click Current Database.
4. In the right pane, under Filter lookup options, mark the "ODBC Fields" check box.
