I am trying to create an ADF activity that truncates the target Snowflake table before a ForEach loop copies the blob files into it. I can't use the pre-copy script because it would clear the table on every iteration. Interestingly, when I use a Lookup activity to truncate the table, it throws an error that the ODBC query is not valid, yet it does the job at the database level and truncates the table. Has anyone encountered a similar error?
Yes, we had this same issue with ADF using the ODBC connector. It appears to be a known bug; the same thing happens for any UPDATE/INSERT/DELETE statement.
Solutions we used:
Use the newer Native Linked Service Connector. Recommended.
Do the truncate in a stored procedure and use a Lookup activity to just call the procedure (see the sketch after this list).
Prior to the native connector, in some places we would follow the truncate activity with an "On Completion" dependency and have that run a Lookup validating the table was actually empty, so the pipeline could throw an error and stop if the truncate didn't work. This was not ideal because it requires an extra round trip to the database and some wacky ADF flow management.
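For the stored procedure route, a minimal sketch of what the Snowflake side can look like (database, schema, table, and procedure names here are placeholders):

CREATE OR REPLACE PROCEDURE truncate_target()
RETURNS VARCHAR
LANGUAGE SQL
AS
$$
BEGIN
    -- clear the load target once, before the ForEach copy iterations start
    TRUNCATE TABLE my_db.my_schema.target_table;
    RETURN 'truncated';
END;
$$;

The Lookup activity then just runs CALL truncate_target(); and the RETURN value gives it the single-row result set a Lookup expects, so it no longer complains about the query.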
Today I am trying to figure out how to get rows with identity columns inserted into a Microsoft SQL 2016 database via an SSIS package that I am constructing with MVS 2015 (with SSDT 14.0.61709.290). I am pulling data from another data source (which works without issue) and inserting rows into a destination table that was previously defined on the destination SQL Server like so:
create table [DB_NAME].[dbo].[TableName]
(
key_value int IDENTITY(1,1) primary key,
...other values...
)
GO
When I move values from the old data source to the new data source I get the error:
[MSQL Deal [70]] Error: Open Database Connectivity (ODBC) error
occurred. state: '23000'. Native Error Code: 544. [Microsoft][SQL
Server Native Client 11.0][SQL Server]Cannot insert explicit value for
identity column in table 'TableName' when IDENTITY_INSERT is set to
OFF.
There are a tremendous number of forum threads and search results indicating that there should be a checkbox that permits identity inserts on the column mappings page of the destination. This option does not exist, and the "Advanced Editor" interface in MVS/SSDT 2015/2017 has column mappings only, with no options for handling inserts into identity columns.
I have also tried adding a step to my control flow that turns identity insert on, but for some reason enabling IDENTITY_INSERT at that level does not work and my package still fails on all insert attempts.
Now, to be completely honest, I am fully aware that I have alternative options to get this to work, but keep in mind that I am building dev, test, and production databases that I am trying to keep scripted, automated, and idiot-proof for when it gets further down the line toward deployment. I don't want to introduce an intermediate step that forces one of our DBAs to wait for the first SSIS package to finish, run a SQL query that enables identity inserts for a specific table, run the next package, then run a query to disable identity inserts. I would have to do this many times.
Did SSIS 2015 (and I tried this using MVS/SSDT 2017) completely drop support for identity inserts? Do I have to use a different interface with my DSN to get this to work (ODBC?)?
Or is this still an option that is just hidden somewhere really, really well?
The ODBC Destination has no option for identity inserts. You can use an OLE DB Destination instead; it contains a Keep Identity check box, which does exactly this.
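Part of the confusion here is that SET IDENTITY_INSERT is session-scoped, so turning it on from a separate Execute SQL Task (a different connection) has no effect on the data flow's inserts. If you ever need to script it manually, it has to happen in the same session as the insert, and the column list must be explicit. A minimal sketch, reusing the table from the question with a placeholder second column:

SET IDENTITY_INSERT [DB_NAME].[dbo].[TableName] ON;

-- an explicit column list is required while IDENTITY_INSERT is ON
INSERT INTO [DB_NAME].[dbo].[TableName] (key_value, some_other_column)
VALUES (42, 'example');

SET IDENTITY_INSERT [DB_NAME].[dbo].[TableName] OFF;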
I need a bit of advice on how to solve the following task:
I have a source system based on IBM DB2 (IBMDA400) with a lot of tables whose structure changes rapidly, often daily. I must load specified tables from the DB2 into an MSSQL 2008 R2 server, and I thought SSIS would be the best choice for this.
My first attempt was simply to add both data sources, drop all tables in MSSQL, and recreate them with a "Select * Into #Table From #Table". But I could not get this working because I could not connect the two OLEDB connections. I also tried it with an OPENROWSET statement, but the SQL Server does not allow that for security reasons and I am not allowed to change that.
My second try was to read the tables from the source manually, then drop and recreate the tables in a ForEach loop and load the data via the Data Flow Task. But I got stuck on getting the metadata out of the Execute SQL Task, so I did not get the column names and types.
I cannot believe that this is so hard to achieve. Why is there no "create table if not exists" checkbox on the Data Flow Task?
Of course I searched for the problem here before posting but could not find a solution.
Thanks in advance,
Pad
This is the solution I ended up with:
Create a file/table that is used to select the source tables.
Important: Create a linked server on your SQL instance, or a working connection string for OPENROWSET (I was not able to do the latter, so I chose the linked server).
Query the source file/table.
Loop through the result set.
Use variables and a Script Task to build your query.
Drop the destination table.
Build another query string that pulls the rows across via OPENROWSET (or, if you used a linked server, OPENQUERY); see the sketch below.
Execute this statement.
Done.
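A minimal sketch of the statement the loop ends up building, assuming a linked server named DB2SRV and placeholder library/table names; SELECT ... INTO is used here so the dropped table is recreated with the source's current structure:

IF OBJECT_ID('dbo.MyTable', 'U') IS NOT NULL
    DROP TABLE dbo.MyTable;

-- pull the rows across and let the destination inherit the current source schema
SELECT *
INTO dbo.MyTable
FROM OPENQUERY(DB2SRV, 'SELECT * FROM MYLIB.MYTABLE');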
As I said above, I am not quite happy with this, but for now it should be OK. I will update this if I find another solution.
I get this error when I do an OPENQUERY select against a linked server using a ProvideX ODBC driver. The database I am trying to connect to is built on Progress.
Cannot get the current row value of column "[MSDASQL].IVD_PRICE" from OLE DB provider "MSDASQL" for linked server "FCEU". Conversion failed because the data value overflowed the data type used by the provider.
Is there a work around for this? I do not have access to the server I am trying to query.
Thanks!
The Progress database implements all datatypes as variable length. The "format" is just a suggestion for default display purposes. Progress applications routinely ignore that suggestion and "over-stuff" fields.
This gives most SQL clients hissy fits.
The cure depends on the version of Progress/OpenEdge.
All versions of Progress starting with version 9 support a utility called "dbtool" which will scan the db and adjust the "SQL-WIDTH" attribute for any fields that have been over-stuffed. You must run this on the server. (Or convince the DBA to do it.)
http://knowledgebase.progress.com/articles/Article/P24496
This is a very common, routine procedure for Progress databases.
You can also use the -checkwidth parameter to keep these things from happening, but in your case the horse is already out of the barn, and enabling it now might break the application. So it probably isn't useful to you right now.
Starting with OpenEdge 11.5 there are new features to automatically handle width violations when a SQL client connects:
http://knowledgebase.progress.com/articles/Article/How-to-enable-Authorized-Data-Truncation-in-a-JDBC-or-ODBC-connection
I use TOS to transfer a SQL Server table to another SQL Server. That works, more or less, but I have one issue with truncating the table. In the properties of the output table I set "Truncate Table" as the table action and "Insert" as the data action. On the second run I get a lot of duplicate key errors. If I run the TRUNCATE TABLE manually in SQL Server Management Studio, the job works fine.
Are there any known issues with truncate table? The Talend version is 5.3.2.
Thanks in advance
I mimicked the scenario and it works fine in Talend Platform for Data Management version 5.6.1. I cannot test it on TOS, but perhaps you can upgrade to the newest TOS version and try again. To be thorough, I tried it using both separate connection components and built-in connections. The only difference is that using a separate connection component requires a commit component.
The workaround I recommend is this:
Create a procedure to truncate your table and call it from a tMSSqlSP component (a sketch of the procedure follows after these steps).
Connect this to your original subjob, which transfers the data between the two tables, with an OnSubjobOk flow.
In your tMSSqlOutput component (which performed the truncate/insert), set Action on table to Default (so it will not truncate the table).
Set Action on data to Insert.
I tried this method and it works. This workaround will save you the time and frustration of dealing with the TOS issue.
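For reference, a minimal sketch of the procedure the tMSSqlSP component would call (the procedure and table names are placeholders):

CREATE PROCEDURE dbo.usp_TruncateTarget
AS
BEGIN
    SET NOCOUNT ON;
    -- clear the destination once per job run, before the insert subjob starts
    TRUNCATE TABLE dbo.TargetTable;
END

In the component you would then reference usp_TruncateTarget with no parameters.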
I need a suggestion on the best approach from the options listed below. I need to validate Excel file data and load it into SQL Server.
Validations include:
No duplicate columns
Mandatory fields present
Fields not present in the database
In case of an error I would write to an errorlog table in the database.
Below is my approach:
Load the data into a temp table in the database
Run the validations
Log the errors
On success, load it into the main tables
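To make that concrete, a rough sketch of what the validation step could look like in T-SQL, assuming a staging table stg.ExcelData, an error table dbo.ErrorLog, and illustrative column names:

-- mandatory fields present
INSERT INTO dbo.ErrorLog (ErrorMessage)
SELECT 'Missing mandatory field CustomerName in row ' + CAST(RowId AS varchar(20))
FROM stg.ExcelData
WHERE CustomerName IS NULL;

-- no duplicates on the business key
INSERT INTO dbo.ErrorLog (ErrorMessage)
SELECT 'Duplicate key: ' + CustomerCode
FROM stg.ExcelData
GROUP BY CustomerCode
HAVING COUNT(*) > 1;

-- load to the main table only when nothing was logged
IF NOT EXISTS (SELECT 1 FROM dbo.ErrorLog)
    INSERT INTO dbo.Customers (CustomerCode, CustomerName)
    SELECT CustomerCode, CustomerName FROM stg.ExcelData;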
Please let me know if you have any better ideas for this scenario.
Here are a couple of approaches that are possible:
1. Using SSIS
Create an Excel connection manager, then use a Data Flow Task with an OLEDB source, a Lookup transform (to eliminate the records NOT needed), and an OLEDB destination loading directly into the main table.
You can also choose to redirect or ignore rows that do not satisfy the transformations.
(You can use the Bulk Insert Task instead if the Excel file is really large, rather than dealing with it RBAR.)
2. Using TSQL
Use BULK INSERT, BCP, or OPENROWSET to load into a staging table. Beware that you need the appropriate drivers installed (JET for x32 or ACE for x64 SQL Server).
Then do the error handling by logging to the error table (RAISERROR, TRY...CATCH) before loading into the main table.
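A minimal sketch of that T-SQL route, with placeholder file path, sheet name, and table names (the provider string assumes an x64 SQL Server with the ACE driver installed):

BEGIN TRY
    -- stage the spreadsheet contents
    INSERT INTO stg.ExcelData (CustomerCode, CustomerName)
    SELECT CustomerCode, CustomerName
    FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                    'Excel 12.0;Database=C:\loads\customers.xlsx;HDR=YES',
                    'SELECT * FROM [Sheet1$]');

    -- validations would run here, then promote to the main table
    INSERT INTO dbo.Customers (CustomerCode, CustomerName)
    SELECT CustomerCode, CustomerName FROM stg.ExcelData;
END TRY
BEGIN CATCH
    -- log the failure instead of letting the batch die silently
    INSERT INTO dbo.ErrorLog (ErrorMessage)
    VALUES (ERROR_MESSAGE());
END CATCH;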