I am editing this question because I was not clear enough.
I started maintaining an SSIS package, and I haven't worked on SQL Server for years. I came across this code and don't know how SSIS deals with it:
There is a table called "employees", and we are running this query:
UPDATE t SET IsProcessed = 1
OUTPUT INSERTED.RecordId, INSERTED.SiteId, INSERTED.EmployeeId, INSERTED.FirstName, INSERTED.MiddleName,
INSERTED.LastName, INSERTED.IsProcessed
FROM [mydatabase].[dbo].[employees] AS t
WHERE IsProcessed = 0
So, does SSIS only show the output of the command, or does it take that output as input to the next step?
Can you help please?
This will list all the row values that were updated. The "after" values (the new, updated values) of the listed columns are returned. If the user wanted the previous values, they could have listed DELETED.column-name instead. This is a form of auditing the changes.
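Because the OUTPUT clause makes the UPDATE return rows just like a SELECT would, an SSIS source that runs this command receives those rows as its result set and can pass them on to the next step in the data flow. For example, a minimal sketch of the same statement returning both the before and after values (the DELETED columns and the aliases are the only additions to the query from the question):
UPDATE t SET IsProcessed = 1
OUTPUT INSERTED.RecordId, INSERTED.SiteId, INSERTED.EmployeeId,
       DELETED.IsProcessed AS IsProcessedBefore,  -- previous value
       INSERTED.IsProcessed AS IsProcessedAfter   -- new, updated value
FROM [mydatabase].[dbo].[employees] AS t
WHERE IsProcessed = 0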
I am a newbie to the Informatica tool.
I ran a workflow to insert data from database A, table A.A into database B, table B.B. The session did succeed.
But I found a problem in the log file:
Database errors occurred:
Execute -- Informatica' ODBC 20101 driver27376073
Execute -- ODBC Driver Manager Function sequence error
This happens at the last step, when inserting data into B.B.
Only 1 row is inserted per workflow run. For example: I have 7 rows; only 1 row is inserted and 6 rows are rejected.
I searched for error code 27376073 but found nothing about it.
Can anyone help me solve this problem, please?
Are you using an Aggregator transformation? Check whether the group-by port is marked or not.
Also, if the source is a flat file (FF) with fixed width, some settings need to be adjusted in the session task.
I have created a package in SSIS in which I use some date variables inside my SQL statements (e.g. DECLARE @DateIn = '2018-02-22' and DECLARE @DateTo = '2018-03-22') in order to load the corresponding data into the tables of the data warehouse.
What I need to do is create a task, or a separate package, that lets me define the values of these variables externally every time I run it, so that the warehouse tables are filled with the data that corresponds to the dates I set each time.
From what I've read, I should maybe use a Script Task, an Execute SQL Task, or parameters.
Could you help me please? Or could you suggest me a good tutorial/link?
I have found plenty, but can't decide whether they meet the needs of what I am describing above.
Thank you
1. Create a DTSX package with variables DateStart and DateEnd.
2. Create a table containing 3 columns: DateStart, DateEnd, Active.
3. Create a stored procedure that reads DateStart and DateEnd where Active = 1 from your newly created table and alters the SQL Server job, updating the values of the variables inside your DTSX package to your desired values using sp_update_jobstep (see link).
Example of the command:
dtexec /f YourPackage.dtsx
/set \package.variables[DateStart].Value;myvalue
/set \package.variables[DateEnd].Value;myvalue
4. Add sp_start_job inside the stored procedure to start the job with the new variable values.
5. Create a job with 1 step containing the execution of the stored procedure from Step 3.
All you need to do is update the values in the table created in Step 2 and then execute the job, which runs the stored procedure to update the DTSX job's exec command and start it. You can trigger this from a website and control the table's values from textboxes.
Also, specific permissions are required, and the stored procedure that updates the SQL Agent job needs to be run by a sysadmin.
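A minimal sketch of the stored procedure from Step 3, assuming a SQL Agent job named LoadWarehouse whose first step runs the dtexec command shown above (the job name, control table name, and step id are placeholders, not from the original answer):
CREATE PROCEDURE dbo.UpdateAndRunLoadJob
AS
BEGIN
    DECLARE @DateStart date, @DateEnd date, @cmd nvarchar(4000);

    -- Read the active date range from the control table (Step 2)
    SELECT @DateStart = DateStart, @DateEnd = DateEnd
    FROM dbo.LoadDates
    WHERE Active = 1;

    -- Rebuild the dtexec command with the new variable values
    SET @cmd = N'dtexec /f YourPackage.dtsx'
             + N' /set \package.variables[DateStart].Value;' + CONVERT(nvarchar(10), @DateStart, 120)
             + N' /set \package.variables[DateEnd].Value;'   + CONVERT(nvarchar(10), @DateEnd, 120);

    -- Update the job step that runs the package (requires sysadmin)
    EXEC msdb.dbo.sp_update_jobstep
        @job_name = N'LoadWarehouse',
        @step_id  = 1,
        @command  = @cmd;

    -- Start the job with the new variable values (Step 4)
    EXEC msdb.dbo.sp_start_job @job_name = N'LoadWarehouse';
END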
Good question, by the way, for a new learner!
There are many ways to handle this scenario; I have mentioned a few of them below.
1. Create variables DateIn and DateTo in the Variables pane for storing the dates; their data type will be Date.
Then put 2 entries in an Excel, text, or XML file for these two variables, read them with a Foreach Loop container, and assign them to the variables.
2. Create a SQL table in which you can store those values, either manually on a daily basis or by loading the table from an Excel, text, XML, or CSV file. Query the table in an Execute SQL Task, capture the result set, and pass the result set values to the variables (see the sketch below).
I hope it will solve your problem.
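A minimal sketch of option 2, assuming a control table named dbo.LoadDates (the table and column names are placeholders). The Execute SQL Task would run the SELECT below with its ResultSet property set to "Single row", mapping the two result columns to the DateIn and DateTo package variables:
CREATE TABLE dbo.LoadDates
(
    DateIn date NOT NULL,
    DateTo date NOT NULL
);

-- Query used in the Execute SQL Task; map result column 0 to DateIn and column 1 to DateTo
SELECT TOP (1) DateIn, DateTo
FROM dbo.LoadDates
ORDER BY DateIn DESC;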
I'm developing a Pentaho job to get data from BigQuery and insert it into SQL Server. The job is quite simple, as you can see below, but during the insert into the SQL Server table it throws a 'Data truncation' error. I checked the max length for this column: it is just 64, while in the database the column is nvarchar(500). To see what the failing rows look like, I log the error records to a text file; you can see that below as well. I've spent 3 days on this problem but still don't have an answer. Please do guide me.
What I have done so far
String cut step to take a substring
String Operations step to trim
Put a LEFT function in the SELECT statement
Put REGEXP_REPLACE(uuid, ' ', ''), which removes spaces, in the SELECT statement
Everything I have tried gives the same error.
Pentaho job
Error records in text file
I have solved this problem. It was my own silly mistake. I just recreated the table with a larger length for that column.
My case
post_name nvarchar(50) -> nvarchar(150)
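For reference, the same change can usually be made without recreating the table; a minimal sketch, assuming the table is named dbo.posts (the table name is a placeholder):
-- Widen the column in place; state NULL/NOT NULL explicitly to preserve the original nullability
ALTER TABLE dbo.posts ALTER COLUMN post_name nvarchar(150) NOT NULL;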
So I have a stored procedure in SQL Server. I've simplified its code (for this question) to just this:
CREATE PROCEDURE dbo.DimensionLookup as
BEGIN
select DimensionID, DimensionField from DimensionTable
inner join Reference on Reference.ID = DimensionTable.ReferenceID
END
In SSIS on SQL Server 2012, I have a Lookup component with the following source command:
EXECUTE dbo.DimensionLookup WITH RESULT SETS (
(DimensionID int, DimensionField nvarchar(700) )
)
When I run this procedure in Preview mode in BIDS, it returns the two columns correctly. When I run the package in BIDS, it runs correctly.
But when I deploy it out to the SSIS catalog (the same server the database is on), point it to the same data sources, etc. - it fails with the message:
EXECUTE statement failed because its WITH RESULT SETS clause specified 2 column(s) for result set number 1, but the statement sent
3 column(s) at run time.
Steps Tried So Far:
Adding a third column to the result set - I get a different error, VS_NEEDSNEWMETADATA - which makes sense, kind of proof there's no third column.
SQL Profiler - I see this:
exec sp_prepare #p1 output,NULL,N'EXECUTE dbo.DimensionLookup WITH RESULT SETS ((
DimensionID int, DimensionField nvarchar(700)))',1
SET FMTONLY ON exec sp_execute 1 SET FMTONLY OFF
So it's trying to use FMTONLY to get the result set data ... needless to say, running SET FMTONLY ON and then running the command in SSMS myself yields .. just the two columns.
SET NOCOUNT ON - Nothing changed.
So, two other interesting things:
I deployed it out to my local SQL 2012 install and it worked fine, same connections, etc. So it may be a server / database configuration. Not sure what if anything it is, I didn't install the dev server and my own install was pretty much click through vanilla.
Perhaps the most interesting thing. If I remove the join from the procedure's statement so it just becomes
select DimensionID, DimensionField from DimensionTable
It goes back to just sending 2 columns in the result set! So adding a join, without adding any additional output columns, ups the result set to 3 columns. Even if I add 6 more joins, it's still just 3 columns. So one guess is it's some sort of metadata column that only gets activated when there's a join.
Anyway, as you can imagine, it's driving me kind of mad. I have a workaround to load the data into a temp table and just return that, but why won't this work? What extra column is being sent back? Why only when I add a join?
Gah!
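For reference, a sketch of the temp-table workaround mentioned above (only the temp table name #Results is made up here):
ALTER PROCEDURE dbo.DimensionLookup AS
BEGIN
    SET NOCOUNT ON;

    -- Materialize the join into a temp table first ...
    SELECT DimensionID, DimensionField
    INTO #Results
    FROM DimensionTable
    INNER JOIN Reference ON Reference.ID = DimensionTable.ReferenceID;

    -- ... then return it, so only these two columns are reported back to the Lookup
    SELECT DimensionID, DimensionField FROM #Results;
END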
So all credit to billinkc: The reason is because of a patch.
In Version 11.0.2100.60, SSIS Lookup SQL command metadata is gathered using the old SET FMTONLY method. Unfortunately, this doesn't work in 2012, as the Books Online entry on SET FMTONLY helpfully notes:
Do not use this feature. This feature has been replaced by sp_describe_first_result_set.
Too bad they didn't follow their own advice!
This has been patched as of version 11.0.2218.0. Metadata is correctly gathered using the sp_describe_first_result_set system stored procedure.
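If you want to see the metadata the server itself reports (which is what the patched Lookup relies on), a quick check along these lines can be run in SSMS; it simply wraps the same Lookup source command from the question:
EXEC sp_describe_first_result_set
    @tsql = N'EXECUTE dbo.DimensionLookup WITH RESULT SETS ((DimensionID int, DimensionField nvarchar(700)))';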
This can happen if the WITH RESULT SETS clause specified in SSIS declares more columns than the stored procedure being called actually returns. Check your stored procedure and ensure that its output columns match the WITH RESULT SETS definition.
We are using Ado library in MS Visual C++ to use MS SQL database in the following way:
_CommandPtr pCmd;
...
pCmd->CommandText = "update …";
pCmd->Execute( &lRowsAffected, 0, adExecuteNoRecords );
After executing the update command, the lRowsAffected variable gives us the number of affected rows, which is exactly what we want. However, if a trigger that starts with a SELECT INTO command is defined in MS SQL for the update, we get the number of rows selected by the SELECT INTO command as the value of lRowsAffected instead. How could we obtain the number of rows affected by the update command itself?
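To make the scenario concrete, here is a sketch of the kind of trigger that produces this behaviour (the table, trigger, and column names are made up for illustration only):
CREATE TRIGGER trg_MyTable_Update ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    -- The trigger starts with a SELECT INTO; as described above, the row count of
    -- this statement is what ends up in lRowsAffected on the ADO side.
    SELECT i.ID, i.SomeColumn
    INTO #ChangedRows
    FROM inserted AS i;

    -- ... rest of the trigger logic ...
END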