I am just wondering: can I find out whether somebody ran a query that updated a row in a specific table on a given date?
I tried this:
SELECT id, name
FROM sys.sysobjects
WHERE NAME = ''

SELECT TOP 1 *
FROM ::fn_dblog(NULL, NULL)
WHERE [Lock Information] LIKE '%TheOutput%'
It does not show me anything. Any suggestions?
No, row-level history/change stamps are not built into SQL Server. You need to add that in the table design. If you want an automatic update-date column, it would typically be set by a trigger on the table.
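For instance, a minimal sketch of such a trigger; the table, key, and column names here are made up for illustration:
-- Illustrative only: assumes a table dbo.MyTable with an Id primary key
-- and a LastModified datetime2 column to stamp on every update.
CREATE TRIGGER trg_MyTable_SetLastModified
ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET LastModified = SYSUTCDATETIME()
    FROM dbo.MyTable AS t
    INNER JOIN inserted AS i ON i.Id = t.Id;
END;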
There is, however, a way if you really need to find out what happened in a forensics scenario, but it is only available if you have the right backups. What you can do then is use the database transaction log to find when the modification was made. Note that this is not anything an application can or should do at runtime.
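As a rough sketch of that kind of log inspection (fn_dblog is undocumented, only covers the portion of the log still available, and the table name below is a placeholder):
-- Find row modifications for a given table and join back to each transaction's
-- begin record to see when the transaction started.
SELECT m.[Transaction ID], m.AllocUnitName, t.[Begin Time]
FROM fn_dblog(NULL, NULL) AS m
INNER JOIN fn_dblog(NULL, NULL) AS t
    ON t.[Transaction ID] = m.[Transaction ID]
   AND t.Operation = 'LOP_BEGIN_XACT'
WHERE m.Operation = 'LOP_MODIFY_ROW'
  AND m.AllocUnitName LIKE 'dbo.YourTable%';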
I am developing an SSIS package in BIDS and getting an intermittent error, "Insert bulk failed due to a schema change of the target table", for one of my data flows. It succeeds sometimes but also fails sometimes with the above-mentioned error.
I am not sure what is happening.
The following is the stored procedure that is called from the OLE DB source:
CREATE PROCEDURE [dbo].[getPartiesIpoData_SSIS]
AS
BEGIN
    SELECT
        c.companyId 'companyId',
        tpf.transactionPrimaryFeatureName 'transactionPrimaryFeatureName',
        os.statusdatetime 'statusdatetime',
        st.statusName 'statusName'
    FROM ciqCompany c
    INNER JOIN ciqTransOffering t
        ON t.companyId = c.companyId
    INNER JOIN ciqTransOfferToPrimaryFeat ttp
        ON ttp.transactionId = t.transactionId
    INNER JOIN ciqTransPrimaryFeatureType tpf
        ON tpf.transactionPrimaryFeatureId = ttp.transactionPrimaryFeatureId
    INNER JOIN ciqtransofferingstatustodate os
        ON os.transactionId = t.transactionId
    INNER JOIN ciqtransactionstatusType st
        ON st.statusId = os.statusId AND st.statusId = 2
    WHERE tpf.transactionPrimaryFeatureId = 5

    CREATE NONCLUSTERED INDEX IX_PartiesIpoData_companyId
        ON CoreReferenceStaging.dbo.PartiesIpoData(companyId) WITH (DROP_EXISTING = ON)
END
The destination table schema and the OLE DB destination settings in SSIS are shown in the attached screenshots.
Below is an action plan you can try; it helped in my case. A sketch of the first two items follows the list.
1) Drop the constraints before the run and recreate them after the run.
2) Disable auto-update statistics (to isolate the issue).
3) Check whether any parallel index rebuilds are happening.
4) Check with and without the "fast load" option.
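A rough sketch of items 1 and 2 (the destination table name is taken from the index statement in the stored procedure; item 1 talks about dropping constraints, but disabling them as below is a lighter-weight variation to try first):
-- 1) Disable check/FK constraints on the destination before the run, re-enable after.
ALTER TABLE CoreReferenceStaging.dbo.PartiesIpoData NOCHECK CONSTRAINT ALL;
-- ... run the data flow ...
ALTER TABLE CoreReferenceStaging.dbo.PartiesIpoData WITH CHECK CHECK CONSTRAINT ALL;

-- 2) Turn off auto update statistics while isolating the issue.
ALTER DATABASE CoreReferenceStaging SET AUTO_UPDATE_STATISTICS OFF;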
If the issue still persists after implementing the above changes, collect a Profiler trace to capture the activity when it fails, for further investigation.
Also check the settings for the SQL Server Destination Adapter.
We are using SQL Server 2012 and SSDT 2010 for developing and debugging SSIS packages.
I have a simple data flow task with only two components: an OLE DB source and an OLE DB destination. The destination table stores data for multiple dates and is loaded incrementally from the source table, i.e. whenever the source table receives data with a new date, it is loaded into the destination table. There is no transformation, calculation, or logic applied in the data flow.
The OLE DB source uses the following query to select data from the source:
SELECT * FROM source_table WHERE date_col NOT IN
(SELECT DISTINCT date_col FROM target_table);
In the OLE DB destination, Fast Load is set and the Table Lock option is unchecked.
Whenever we execute the package from SSDT, it shows only a few rows as processed in the data flow path and then hangs. The row count doesn't grow, and the package never ends until it is stopped forcefully.
Thanks in advance.
Try substituting the subquery with a left join, as below:
SELECT a.*
FROM source_table a
LEFT JOIN target_table b
    ON a.date_col = b.date_col
WHERE b.date_col IS NULL
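An equivalent alternative (not from the original answer, just a sketch) is NOT EXISTS, which also behaves correctly if date_col is nullable:
-- Returns source rows whose date does not yet exist in the target table.
SELECT a.*
FROM source_table a
WHERE NOT EXISTS (
    SELECT 1
    FROM target_table b
    WHERE b.date_col = a.date_col
);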
I have a table with almost 45 million rows. I was updating a field of it with the query:
update tableName set columnX = Right(columnX, 10)
I didn't use an explicit transaction or commit, but ran the query directly. During execution, after about an hour, a power failure unfortunately occurred, and now when I try to run a SELECT query it takes too much time and returns nothing. Even DROP TABLE doesn't work. I don't know what the problem is.
SQL Server is rolling back your update statement. You can monitor the status of the rollback in several ways:
1. KILL <session_id> WITH STATUSONLY (see the usage sketch after this list)
2. By using a DMV:
SELECT
    der.session_id,
    der.command,
    der.status,
    der.percent_complete
FROM sys.dm_exec_requests AS der
WHERE der.command IN ('KILLED/ROLLBACK', 'ROLLBACK')
Don't try to restart SQL Server, as this may prolong the rollback.
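For reference, a minimal usage sketch of option 1 (the session id 57 is just an example; it only works for a session that is already being rolled back):
-- Reports rollback progress and estimated completion time; it does not kill anything new.
KILL 57 WITH STATUSONLY;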
June 29, 2010 - I had an uncommitted action from a previous delete statement. I committed the action and got another error about conflicting primary IDs. I can fix that. So the moral of the story: commit your actions.
Original Question -
I'm trying to run this query:
with spd_data as (
select *
from openquery(IRPROD,'select * from budget_user.spd_data where fiscal_year = 2010')
)
insert into [IRPROD]..[BUDGET_USER].[SPD_DATA_BUD]
(REC_ID, FISCAL_YEAR, ENTITY_CODE, DIVISION_CODE, DEPTID, POSITION_NBR, EMPLID,
spd_data.NAME, JOB_CODE, PAY_GROUP_CODE, FUND_CODE, FUND_SOURCE, CLASS_CODE,
PROGRAM_CODE, FUNCTION_CODE, PROJECT_ID, ACCOUNT_CODE, SPD_ENC_AMT, SPD_EXP_AMT,
SPD_FB_ENC_AMT, SPD_FB_EXP_AMT, SPD_TUIT_ENC_AMT, SPD_TUIT_EXP_AMT,
spd_data.RUNDATE, HOME_DEPTID, BUD_ORIG_AMT, BUD_APPR_AMT)
SELECT REC_ID, FISCAL_YEAR, ENTITY_CODE, DIVISION_CODE, DEPTID, POSITION_NBR, EMPLID,
spd_data.NAME, JOB_CODE, PAY_GROUP_CODE, FUND_CODE, FUND_SOURCE, CLASS_CODE,
PROGRAM_CODE, FUNCTION_CODE, PROJECT_ID, ACCOUNT_CODE, SPD_ENC_AMT, SPD_EXP_AMT,
SPD_FB_ENC_AMT, SPD_FB_EXP_AMT, SPD_TUIT_ENC_AMT, SPD_TUIT_EXP_AMT,
spd_data.RUNDATE, HOME_DEPTID, lngOrig_amt, lngAppr_amt
from spd_data
left join Budgets.dbo.tblAllPosDep on project_id = projid
and job_code = jcc and position_nbr = psno
and emplid = empid
where OrgProjTest = 'EQUAL';
Basically I'm selecting from a table on IRPROD (an Oracle DB), joining it with a local table, and inserting the results back on IRPROD.
The problem I'm having is that while the query runs, it never stops. I've let it run for an hour and it keeps going until I cancel it. On a bandwidth monitor on the SQL Server I can see data going in and out. Also, if I just run the select part of the query, it returns the results in 4 seconds.
Any ideas why it's not finishing? I've got other queries set up in a similar manner that don't have any problems (granted, those insert from local tables and not a remote table).
You didn't include any volume metrics, but I would recommend using a temporary table to gather the results.
Then you should try to insert the first couple of rows. If this succeeds you'll have a strong indicator that everything is fine.
Try breaking down each insert task by project_id or emplid to avoid large transaction logs.
You should also think about crafting a bulk batch process.
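A rough sketch of that approach, reusing the names from the question (the temp table, the TOP size, and the abbreviated column list are assumptions; expand the column list to match the original insert):
-- Pull the remote Oracle data into a local temp table first.
SELECT *
INTO #spd_stage
FROM openquery(IRPROD, 'select * from budget_user.spd_data where fiscal_year = 2010');

-- Try a small slice first; if this succeeds, repeat in batches (e.g. per project_id or emplid).
INSERT INTO [IRPROD]..[BUDGET_USER].[SPD_DATA_BUD]
    (REC_ID, FISCAL_YEAR /* , ...same column list as the original insert... */)
SELECT TOP (100)
    s.REC_ID, s.FISCAL_YEAR /* , ...same column list as the original select... */
FROM #spd_stage AS s
LEFT JOIN Budgets.dbo.tblAllPosDep AS p
    ON s.PROJECT_ID = p.projid
   AND s.JOB_CODE = p.jcc
   AND s.POSITION_NBR = p.psno
   AND s.EMPLID = p.empid
WHERE p.OrgProjTest = 'EQUAL';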
If you run just the select without the insert, how many records are returned? Does the data look right or are there multiple records due to the join?
Are there triggers on the table you are inserting into? If you are returning many records and there are triggers on the table that are designed to run row-by-row, this could be slowing things down. You are also sending to another server, so the network pipeline could be what is slowing you down. Maybe it would be better to send the budget data to the Oracle server and do the insert from there rather than from SQL Server.