I am a newbie to the Informatica tool.
I ran a workflow to insert data from database A, table A.A, into database B, table B.B. The session succeeded, but I met a problem in the log file:
Database errors occurred:
Execute -- Informatica' ODBC 20101 driver27376073
Execute -- ODBC Driver Manager Function sequence error
This happens at the last step, when inserting data into B.B.
Only one row is inserted per workflow run. For example, out of 7 source rows, only 1 row is inserted and 6 rows are rejected.
I searched for error code 27376073 but found nothing about it.
Can anyone help me solve this problem, please?
Are you using an Aggregator transformation? Check whether the Group By port is marked or not.
Also, if the source is a flat file (FF) with fixed width, some settings need to be configured in the session task.
Good day,
I've got the following basics down (I hope they're not too basic). Correct me where they're wrong.
For querying a table within the same database, I understand this can be used:
SELECT <field>
FROM <table>
WHERE <condition>
When querying a table in another DB on the same server, I understand this can be used:
SELECT <field>
FROM [database].<table>
WHERE <condition>
When querying a table in another DB on another server, I understand this can be used:
SELECT <field>
FROM [server\instance].[database].<table>
WHERE <condition>
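For instance, a concrete version of that cross-server pattern (server, database, and table names here are made up) might be the following. One small correction to the templates above: in SQL Server a schema name, commonly dbo, usually sits between the database and the table in the full four-part name:
SELECT CustomerName
FROM [SRV01\SQLEXPRESS].[SalesDB].[dbo].[Customers]
WHERE Country = 'UK'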
QUESTION: What's the correct syntax to successfully query data from an ODBC connection (an Excel file) within another connection (SQL)?
In VS2012 I've got the following DB (in which I opened a query):
[serverName\sqlexpress].[DBname].dbo
Also I've opened the following ODBC connection:
ODBC.EXCEL.[localPath]\[fileName].xls
My intention is to populate a DB table from Excel ODBC data.
I've attempted to copy the entire ODBC string as the server name, but an error was thrown.
If an entire query script to do so could be provided, in the form:
SELECT <fields>
INTO <sql db table>
FROM <ODBC table>
it would be very much appreciated.
Thanks for your attention.
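One common way to do this from T-SQL is OPENROWSET. This is only a sketch, and it makes some assumptions: the Microsoft ACE OLE DB provider is installed on the SQL Server machine, ad hoc distributed queries are enabled, and the worksheet is named Sheet1; it also goes through an OLE DB provider string rather than the ODBC DSN shown above. The target table name is hypothetical:
-- Enable ad hoc distributed queries once, as an administrator
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1; RECONFIGURE;

-- Create and populate a new SQL table straight from the worksheet
SELECT *
INTO dbo.ImportedFromExcel          -- hypothetical target table
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 8.0;Database=C:\localPath\fileName.xls;HDR=YES',
                'SELECT * FROM [Sheet1$]');   -- Sheet1 is an assumption
The SELECT ... INTO creates the SQL table on the fly; if the table already exists, use INSERT INTO ... SELECT instead.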
I am trying to do an incremental load from SQL Server 2008 to Netezza (NPS 6) using SSIS.
The Netezza 5.x OLE DB driver is used. I am using the Table or View - Fast Load option with Maximum insert commit size = 0.
Here I am trying to insert a few thousand records into a Netezza table. The destination table contains millions of records. This data flow task was taking hours to complete. When I looked into the Netezza Administrator Active Queries I could see that a query like the one below was the problem:
SELECT * FROM Destination_Table;
The next step is an external table load like the one below:
insert into "destination_table"(col1, col2, col3)
select c0, c1, c2 from external '/dev/null' (c0, c1, c2) using (
remotesource 'odbc' delimiter ' ' escapechar '\' ctrlchars 'yes' crinstring 'yes' timeroundnanos 'yes' encoding 'internal' maxerrors 1
) ;
Can anyone help me understand why a SELECT * FROM the destination table is required for the load, or how the Netezza OLE DB driver works with SSIS?
Appreciate your help.
Without looking at the details of your package: the behavior you have described occurs if you have not selected the Table or View - Fast Load option as the Data access mode in your OLE DB Destination component. The fast load option internally uses a BULK INSERT to upload data into the destination table.
Using plain Table or view behaves like a SELECT * and pulls all the columns. This access mode should be used only if you need all the columns of the table or view brought from the source to the destination.
The problem for you is that this option might not be appearing for you by default, since you are using Netezza.
See the issue discussed here, along with possible workarounds:
http://social.msdn.microsoft.com/Forums/en-US/sqlintegrationservices/thread/965b6d83-cf5e-405b-8784-7981e4386adc
The official bug report was raised here:
https://connect.microsoft.com/SQLServer/feedback/details/569087
After installing the OLE DB 6.x version, this "SELECT * FROM DESTINATION TABLE" issue no longer occurs. I could see a good performance improvement with the OLE DB 6 version. But if you are working with the OLE DB 5.x version, I believe it is better to load into a stage table and then load from there into the destination table.
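A rough sketch of that staging pattern on the Netezza side (table and column names are hypothetical; SSIS fast-loads into the empty stage table first):
-- After SSIS has fast-loaded the increment into STAGE_TABLE:
INSERT INTO DESTINATION_TABLE (COL1, COL2, COL3)
SELECT COL1, COL2, COL3
FROM STAGE_TABLE;

TRUNCATE TABLE STAGE_TABLE;   -- empty the stage for the next run
This keeps the slow, driver-dependent load against a small empty table, and the move into the large destination table is a single set-based statement.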
I have an XML column in a table, and it is defined by a schema. I am trying to insert values into this table using INSERT INTO tbl1 SELECT * FROM tbl FOR XML. But this is failing due to a schema validation failure on one of the records. I want to at least insert the records that passed validation, and I can capture the others later. Can someone help me with this?
SQL Server validates the whole dataset, not single rows. If you want to validate row by row using SQL Server tools, the methods are:
SQLCLR (fastest)
SSIS (easy to create) - using a FOREACH loop, you try to insert each row into the table; all failed rows are redirected to another table.
TSQL TRY/CATCH block - insert the XML from a single row into a schema-validated variable. The slowest one.
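A minimal sketch of the TRY/CATCH option (the table names, column names, and schema collection name are all hypothetical; it assumes the source column is untyped XML, so assigning it to a typed variable triggers validation for just that row):
DECLARE @id int, @x xml;

DECLARE row_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT id, xml_data FROM dbo.tbl_source;

OPEN row_cursor;
FETCH NEXT FROM row_cursor INTO @id, @x;

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        -- Assigning untyped XML to a typed variable validates only this row
        DECLARE @validated xml(dbo.MySchemaCollection);
        SET @validated = @x;

        INSERT INTO dbo.tbl1 (id, xml_col) VALUES (@id, @validated);
    END TRY
    BEGIN CATCH
        -- Capture the failures so they can be fixed and re-loaded later
        INSERT INTO dbo.tbl1_rejects (id, raw_xml, error_message)
        VALUES (@id, CONVERT(nvarchar(max), @x), ERROR_MESSAGE());
    END CATCH;

    FETCH NEXT FROM row_cursor INTO @id, @x;
END;

CLOSE row_cursor;
DEALLOCATE row_cursor;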
I want to insert some data from the local server into a remote server, and used the following SQL:
select * into linkservername.mydbname.dbo.test from localdbname.dbo.test
But it throws the following error:
The object name 'linkservername.mydbname.dbo.test' contains more than the maximum number of prefixes. The maximum is 2.
How can I do that?
I don't think the new table created with the INTO clause supports 4 part names.
You would need to create the table first, then use INSERT..SELECT to populate it.
(See the note in the Arguments section on MSDN.)
The SELECT...INTO [new_table_name] statement supports a maximum of 2 prefixes: [database].[schema].[table]
NOTE: it is more performant to pull the data across the link using SELECT INTO vs. pushing it across using INSERT INTO:
1. SELECT INTO is minimally logged.
2. SELECT INTO does not implicitly start a distributed transaction, typically.
I say typically, in point #2, because in most scenarios a distributed transaction is not created implicitly when using SELECT INTO. If a profiler trace tells you SQL Server is still implicitly creating a distributed transaction, you can SELECT INTO a temp table first, to prevent the implicit distributed transaction, then move the data into your target table from the temp table.
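For instance, that temp-table workaround might look like this (object names hypothetical):
SELECT *
INTO #staging                                   -- local temp table, no distributed transaction
FROM [server_a].[database].[schema].[table];    -- pull across the link

INSERT INTO [database].[schema].[table]         -- the real local target
SELECT * FROM #staging;

DROP TABLE #staging;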
Push vs. Pull Example
In this example we are copying data from [server_a] to [server_b] across a link. This example assumes query execution is possible from both servers:
Push
Instead of connecting to [server_a] and pushing the data to [server_b]:
INSERT INTO [server_b].[database].[schema].[table]
SELECT * FROM [database].[schema].[table]
Pull
Connect to [server_b] and pull the data from [server_a]:
SELECT * INTO [database].[schema].[table]
FROM [server_a].[database].[schema].[table]
I've been struggling with this for the last hour.
I now realise that using the syntax
SELECT orderid, orderdate, empid, custid
INTO [linkedserver].[database].[dbo].[table]
FROM Sales.Orders;
does not work with linked servers. You have to go onto your linked server and manually create the table first, then use the following syntax:
INSERT INTO [linkedserver].[database].[dbo].[table]
SELECT orderid, orderdate, empid, custid
FROM Sales.Orders
WHERE shipcountry = 'UK';
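For completeness, the manual creation step might look something like this, run while connected to the linked server itself (the column types here are guesses based on the column names):
CREATE TABLE [dbo].[table] (
    orderid   int,
    orderdate datetime,
    empid     int,
    custid    varchar(15)
);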
I've experienced the same issue, and I performed the following workaround:
If you are able to log on to the remote server where you want to insert the data (with MSSQL or sqlcmd), rebuild your query the other way around:
so from:
SELECT * INTO linkservername.mydbname.dbo.test
FROM localdbname.dbo.test
to the following:
SELECT * INTO localdbname.dbo.test
FROM linkservername.mydbname.dbo.test
In my situation it works well.
#2Toad: For sure INSERT INTO is better / more efficient. However, for small queries and quick operations SELECT * INTO is more flexible, because it creates the table on the fly and inserts your data immediately, whereas INSERT INTO requires creating the table (identity options and so on) before you carry out your insert operation.
I may be late to the party, but this was the first post I saw when I searched for the four-part table name insert issue with a linked server. After reading this and a few more posts, I was able to accomplish it by using EXEC with the "AT" argument (for SQL 2008+) so that the query is run on the linked server. For example, I had to insert 4M records into a pseudo-temp table on another server, and an INSERT-SELECT FROM statement took 10+ minutes. But changing it to the following SELECT-INTO statement, which allows the four-part table name in the FROM clause, does it in mere seconds (less than 10 seconds in my case).
EXEC ('USE MyDatabase;
BEGIN TRY DROP TABLE TempID3 END TRY BEGIN CATCH END CATCH;
SELECT Field1, Field2, Field3
INTO TempID3
FROM SourceServer.SourceDatabase.dbo.SourceTable;') AT [DestinationServer]
GO
The query is run on DestinationServer, changes to the right database, ensures the table does not already exist, and selects from SourceServer. Minimally logged, and no fuss. This information may already be out there somewhere, but I hope it helps anyone searching for similar issues.