SQL script cancelled by user after a couple of seconds - sql-server

I ran a script manually that has some set values at the beginning of its code, like this:
DECLARE @blahblah int
SET @blahblah = 0
The above value plays a key role in the changes I would like to make: it creates new rows/entries in some table.
The table this script runs against is huge, and so is the database.
My question: let's say I had put a wrong value into @blahblah, ran the script, and cancelled it immediately after 2 seconds. Then I corrected the @blahblah value and ran it again.
Did the trigger manage to change some entities/rows (in those 2 seconds), or was it cancelled entirely?

It depends on whether you have wrapped your DML inside a transaction block or not. If you used a transaction, the changes have probably been rolled back and your DB state should be consistent; otherwise, there may be partial changes applied to your table(s).
You should check and verify this against your table(s).
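As a minimal sketch (the table and column names are hypothetical), wrapping the script in an explicit transaction with SET XACT_ABORT ON makes a client-side cancel roll everything back:

SET XACT_ABORT ON;  -- a cancel (attention) now rolls back the open transaction
BEGIN TRANSACTION;

DECLARE @blahblah int;
SET @blahblah = 0;

-- hypothetical DML driven by the variable
INSERT INTO dbo.SomeTable (SomeColumn)
VALUES (@blahblah);

COMMIT TRANSACTION;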

How to detect a delete in postgresql

I have a weird problem where data in a postgresql table gets cleared completely daily. I don't see any scripts or anything that runs the delete query. Is there a way I can find which script/process deletes the data from the particular table?
Have a look at the log_statement parameter. Set it to all, so that every executed query is logged and you can analyse what is happening with your database.
If you're in a Linux environment, the logs will (normally) be in the /var/log/postgresql/local0.* files, and you can see what's happening there.
You can also refine the logging (for example, capture only data-modifying statements), as described in the official documentation below.
More info at: https://www.postgresql.org/docs/9.5/runtime-config-logging.html
You will need to reload (or restart) your PostgreSQL server for the changes to take effect.
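If you prefer doing this from SQL rather than editing postgresql.conf, a sketch (requires superuser; 'mod' logs INSERT, UPDATE, DELETE and similar statements):

ALTER SYSTEM SET log_statement = 'mod';  -- or 'all' to capture everything
SELECT pg_reload_conf();                 -- picks up the change without a full restart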
Finally, I found a way to log the delete query even without restarting the server.
Write a trigger function as below:
CREATE OR REPLACE FUNCTION log_delete() RETURNS trigger AS $BODY$
BEGIN
    -- log the deleted row and the statement that caused the deletion
    RAISE LOG 'Deleting row % (statement is %)', OLD, current_query();
    RETURN NULL;  -- the return value is ignored for AFTER triggers
END;
$BODY$ LANGUAGE plpgsql VOLATILE COST 100;
Attach the trigger function with:
CREATE TRIGGER ondelete
    AFTER DELETE
    ON public.tablename
    FOR EACH ROW
    EXECUTE PROCEDURE public.log_delete();
And that's it: your log will contain the delete query that fired!
Ref: Postgresql - Determining what records are removed from a cascading delete

Change Data Capture (CDC) cleanup job only removes a few records at a time

I'm a beginner with SQL Server. For a project I need CDC to be turned on. I copy the CDC data to another (archive) database, and after that the CDC tables can be cleaned immediately. So the retention time doesn't need to be high; I just set it to 1 minute, and when the cleanup job runs (after the retention time has already been fulfilled) it appears that it only deletes a few records (the oldest ones). Why didn't it delete everything? Sometimes it doesn't delete anything at all. After running the job a few times, the other records get deleted. I find this strange because the retention time has long passed.
I set the retention time to 1 minute (I actually wanted 0, but that was not possible) and didn't change the threshold (= 5000). I disabled the schedule, since I want the cleanup job to run immediately after the CDC records are copied to my archive database, not at a particular time.
My reasoning was that, for example, there will be updates in the afternoon. The task that copies CDC records to the archive database runs at 2:00 AM, and after this task the cleanup job gets called. So, because of the minimal retention time, all the CDC records should be removed by the cleanup job. The retention time has passed, after all?
I also tried setting up a schedule in the job again, the way CDC is meant to be used in general. After the time had passed I checked the CDC table, and it turns out it also only deletes the oldest records. So what am I doing wrong?
I made a workaround: a new job whose only task is to delete all records in the CDC tables (with the entire default CDC cleanup job disabled). This works better, as it removes everything, but it bothers me because I want to work with the original cleanup job, and I think it should be able to work the way I want it to.
Thanks,
Kim
Rather than worrying about what's in the table, I'd use the helper functions that are created for each capture instance. Specifically, cdc.fn_cdc_get_all_changes_<capture_instance> and cdc.fn_cdc_get_net_changes_<capture_instance>. A typical workflow that I've used with these goes something like the below (do this for all of the capture instances). First, you'll need a table to keep processing status. I use something like:
create table dbo.ProcessingStatus (
    CaptureInstance sysname,
    LSN binary(10),  -- CDC LSNs are binary(10)
    IsProcessed bit
)

create unique index [UQ_ProcessingStatus]
    on dbo.ProcessingStatus (CaptureInstance)
    where IsProcessed = 0
1. Get the current max log sequence number (LSN) using fn_cdc_get_max_lsn.
2. Get the last processed LSN and increment it using fn_cdc_increment_lsn. If you don't have one (i.e. this is the first time you've processed), use fn_cdc_get_min_lsn for this capture instance and use that (but don't increment it!). Record whatever LSN you're using in the table, and set IsProcessed = 0.
3. Select from whichever of the cdc.fn_cdc_get… functions makes sense for your scenario and process the results however you're going to process them.
4. Update IsProcessed = 1 for this run.
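As a sketch of that workflow in T-SQL (the capture instance name dbo_MyTable is hypothetical, and the processing in step 3 is left as a plain SELECT):

DECLARE @from_lsn binary(10), @to_lsn binary(10);

-- Step 1: the current high-water mark
SET @to_lsn = sys.fn_cdc_get_max_lsn();

-- Step 2: increment the last processed LSN, or fall back to the minimum
-- LSN for the capture instance on the very first run
SELECT @from_lsn = sys.fn_cdc_increment_lsn(MAX(LSN))
FROM dbo.ProcessingStatus
WHERE CaptureInstance = 'dbo_MyTable' AND IsProcessed = 1;

IF @from_lsn IS NULL
    SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_MyTable');

INSERT INTO dbo.ProcessingStatus (CaptureInstance, LSN, IsProcessed)
VALUES ('dbo_MyTable', @to_lsn, 0);

-- Step 3: process the changes in the interval
SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_MyTable(@from_lsn, @to_lsn, N'all');

-- Step 4: mark the run as processed
UPDATE dbo.ProcessingStatus
SET IsProcessed = 1
WHERE CaptureInstance = 'dbo_MyTable' AND LSN = @to_lsn;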
As for monitoring your original issue, just make sure that the data in the capture table is generally within the retention period. That is, if you set it to 2 days, I wouldn't even think about it being a problem until it got to be over 4 days (assuming that your call to the cleanup job is scheduled at something like every hour). And when you process with the above scheme, you don't need to worry about "too much" data being there; you're always processing a specific interval rather than "everything".
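If you want a quick way to check that, you can map the oldest LSN still in a capture instance to a wall-clock time (again, dbo_MyTable is hypothetical):

-- how old is the oldest change still sitting in the capture table?
SELECT sys.fn_cdc_map_lsn_to_time(sys.fn_cdc_get_min_lsn('dbo_MyTable')) AS OldestCapturedChangeTime;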

SQL Server trigger to update data automatically

I'm designing a library management database schema, let's say there is a table "Borrow"
Borrow
id
user_id
book_id
borrow_date
due_date
isExpired
expired_day (number of days after the book is expired)
fine
Can a SQL trigger implement the following scenarios?
1. Compare due_date with today; if they are the same --> send an email --> mark isExpired as true.
2. If isExpired is marked true --> compute the difference between today and due_date, update expired_day --> update fine (expired_day * 5).
A trigger only fires when something happens on the table or row; it won't fire continuously (or daily). If nothing happens to the table, your trigger will never fire, so your checks can't be done.
So the trigger you describe would work when you first insert a record into the table, but there's no automatic way for a trigger to fire after the due date has passed to check for the expiry and fine.
You would most likely need to set up a stored procedure containing your code and find a way to run it on a scheduled basis.
The following link goes over how to set that up:
Scheduled run of stored procedure on SQL server
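As a sketch of what such a procedure might look like (the procedure name is made up; column names are from the question; sending the email is left out, but could be done with msdb.dbo.sp_send_dbmail if Database Mail is configured):

CREATE PROCEDURE dbo.ProcessOverdueBorrows AS
BEGIN
    -- scenario 1: mark loans whose due date has arrived
    UPDATE Borrow
    SET isExpired = 1
    WHERE isExpired = 0
      AND due_date <= CAST(GETDATE() AS date);

    -- scenario 2: keep the overdue-day count and the fine up to date
    UPDATE Borrow
    SET expired_day = DATEDIFF(day, due_date, GETDATE()),
        fine = DATEDIFF(day, due_date, GETDATE()) * 5
    WHERE isExpired = 1;
END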
Since you want to check all the records of the library daily and have them updated accordingly, it is better to create a daily job with SQL Server Agent, scheduled at a particular time, so that it is executed every day automatically.
Please note: choose a time at which you expect your application to be least used during the day.
Creating the Agent job: http://msdn.microsoft.com/en-us/library/ms181153(v=sql.105).aspx

SQL Server trigger an asynchronous update from trigger?

If a user inserts rows into a table, I would like SQL Server to perform some additional processing - but not in the context of the user's transaction.
e.g. The user gives read access to a folder:
UPDATE Folders SET ReadAccess = 1
WHERE FolderID = 7
As far as the user is concerned, I want that to be the end of the atomic operation. In reality I now have to go and find all child files and folders and give them ReadAccess.
EXECUTE SynchronizePermissions
This is a potentially lengthy operation (over 2s). I want this lengthy operation to happen "later". It can happen 0 seconds later, and before the carbon-unit has a chance to think about it the asynchronous update is done.
How can I run this required operation asynchronously when it's required (i.e. triggered)?
The ideal would be:
CREATE TRIGGER dbo.Folders_Changed ON dbo.Folders FOR INSERT, UPDATE, DELETE AS
EXECUTEASYNCHRONOUS SynchronizePermissions
or
CREATE TRIGGER dbo.Folders_Changed ON dbo.Folders FOR INSERT, UPDATE, DELETE AS
EXECUTE SynchronizePermissions WITH(ASYNCHRONOUS)
Right now this happens as a trigger:
CREATE TRIGGER dbo.Folders_Changed ON dbo.Folders FOR INSERT, UPDATE, DELETE AS
EXECUTE SynchronizePermissions
and the user is forced to wait the 3 seconds every time they make a change to the Folders table.
I've thought about creating a Windows Scheduled Task that runs every minute and checks for a PermissionsNeedsSynchronizing flag:
CREATE TRIGGER dbo.Folders_Changed ON dbo.Folders FOR INSERT, UPDATE, DELETE AS
UPDATE SystemState SET PermissionsNeedsSynchronizing = 1
The scheduled task binary can check for this flag and run if the flag is on:
DECLARE @FlagValue int
SET @FlagValue = 0;

-- atomically read and bump the flag, so only one runner wins
UPDATE SystemState
SET @FlagValue = PermissionsNeedsSynchronizing = PermissionsNeedsSynchronizing + 1
WHERE PermissionsNeedsSynchronizing = 1

IF @FlagValue = 2
BEGIN
    EXECUTE SynchronizePermissions
    UPDATE SystemState SET PermissionsNeedsSynchronizing = 0
    WHERE PermissionsNeedsSynchronizing = 2
END
The problem with a scheduled task is:
- the fastest it can run is every 60 seconds
- it suffers from being a polling solution
- it requires an executable
What I'd prefer is a way for SQL Server to trigger the scheduled task:
CREATE TRIGGER dbo.Folders_Changed ON dbo.Folders FOR INSERT, UPDATE, DELETE AS
EXECUTE SynchronizePermissionsAsynchronous

CREATE PROCEDURE dbo.SynchronizePermissionsAsynchronous AS
EXECUTE sp_ms_StartWindowsScheduledTask @taskName = 'SynchronousPermissions'
The problem with this is:
- there is no sp_ms_StartWindowsScheduledTask system stored procedure
So I'm looking for ideas for better solutions.
Update: The previous example is a problem that has had no good solution for five years now. A problem from 3 years ago that also has no good solution: a table where I need to update a metadata column after an insert/update. The metadata takes too long to calculate in online transaction processing, but I am OK with it appearing 3 or 5 seconds later:
CREATE TRIGGER dbo.UpdateFundsTransferValues ON dbo.FundsTransfers FOR INSERT, UPDATE AS
UPDATE FundsTransfers
SET TotalOrderValue = (SELECT ....[snip]....),
    TotalDropValue = (SELECT ....[snip]....)
WHERE FundsTransfers.FundsTransferID IN (
    SELECT i.FundsTransferID
    FROM INSERTED i
)
And the problem I'm having today is a way to asynchronously update some metadata after a row has been transactionally inserted or modified:
CREATE TRIGGER dbo.UpdateCDRValue ON dbo.LCDs FOR INSERT, UPDATE AS
UPDATE LCDs
SET CDRValue = (SELECT ....[snip]....)
WHERE LCDs.LCDGUID IN (
    SELECT i.LCDGUID
    FROM INSERTED i
)
Update 2: I've thought about creating a native, or managed, DLL and using it as an extended stored procedure. The problems with that are:
- you can't script a binary
- I'm not allowed to do it
Use a queue table, and have a different background process pick things up off the queue and process them. The trigger itself is by definition a part of the user's transaction - this is precisely why they are often discouraged (or at least people are warned to not use expensive techniques inside triggers).
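A minimal sketch of that queue-table approach (all names are made up; the background process drains the queue outside the user's transaction):

CREATE TABLE dbo.PermissionsSyncQueue (
    QueueID int IDENTITY(1,1) PRIMARY KEY,
    FolderID int NOT NULL,
    EnqueuedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    ProcessedAt datetime2 NULL
);
GO
CREATE TRIGGER dbo.Folders_Enqueue ON dbo.Folders FOR INSERT, UPDATE, DELETE AS
-- the trigger does nothing slow: it only records which folders changed
INSERT INTO dbo.PermissionsSyncQueue (FolderID)
SELECT FolderID FROM inserted
UNION
SELECT FolderID FROM deleted;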
Create a SQL Agent job and run it with sp_start_job; it doesn't wait for completion.
However, you need the proper permission to run jobs:
Members of SQLAgentUserRole and SQLAgentReaderRole can only start jobs that they own. Members of SQLAgentOperatorRole can start all local jobs, including those that are owned by other users. Members of sysadmin can start all local and multiserver jobs.
The problem with this approach is that if the job is already running, it can't be started again until it has finished.
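A sketch of the trigger under this approach (assuming an Agent job named SynchronizePermissions already exists and wraps the slow procedure):

CREATE TRIGGER dbo.Folders_StartJob ON dbo.Folders FOR INSERT, UPDATE, DELETE AS
-- sp_start_job returns as soon as the job has been requested to start;
-- it does not wait for the job to finish
EXECUTE msdb.dbo.sp_start_job @job_name = N'SynchronizePermissions';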
Otherwise, go with the queue table that Aaron suggested; it is cleaner and better.
We came across this problem some time ago, and I figured out a solution that works beautifully. I do have a process running in the background-- but just like you, I didn't want it to have to poll every 60 seconds.
Here are the steps:
(1) Our trigger doesn't run the db update itself. It merely throws a "flag file" into a folder that is monitored by the background process.
(2) The background process monitors that folder using Windows Change Notification (this is the really cool part, because you don't have to poll the folder-- your process sleeps until Windows notifies it that a file has appeared). Whenever the background process is awoken by Windows, it runs the db update. Then it deletes the flag file(s), goes to sleep again and tells Windows to wake it up when another file appears in the folder.
This is working exactly as you described: the triggered update runs shortly after the main database event, and voila, the user doesn't have to wait the extra few seconds. I just love it.
You don't necessarily need to compile your own executable to do this: many scripting languages can use Windows Change Notification. I wrote the background process in Perl, and it only took a few minutes to get it working.

Is there an automatic way to generate a rollback script when inserting data with LINQ2SQL?

Let's assume we have a bunch of LINQ2SQL InsertOnSubmit statements against a given DataContext. If the SubmitChanges call is successful, is there any way to automatically generate a list of SQL commands (or even LINQ2SQL statements) that could undo everything that was submitted at a later time? It's like executing a rollback even though everything worked as expected.
Note: The destination database will either be Oracle or SQL Server, so if there is specific functionality for both databases that will achieve this, I'm happy to use that as well.
Clarification:
I do not want the "rollback" to happen automatically as soon as the inserts have successfully completed. I want the ability to "undo" the INSERT statements via DELETE (or some other means) up to 24 hours (for example) after the original program finished inserting data. We can ignore any possible referential integrity issues that may come up.
Assume a Table A with two columns: Id (autogenerated unique id) and Value (string)
If the LINQ2SQL code performs two inserts
INSERT INTO A (Value) VALUES ('a') -- Creates new row with Id = 1
INSERT INTO A (Value) VALUES ('z') -- Creates new row with Id = 2
<< time passes>>
At some point later I would want to be able to "undo" this by executing:
DELETE FROM A WHERE Id = 1
DELETE FROM A WHERE Id = 2
or something similar. I want to be able to generate the DELETE statements to match the INSERT ones, or use some functionality that would let me capture a transaction and perform a rollback later.
We cannot just 'reset the database' to a certain point in time either as other changes not initiated by our program could have taken place since.
It is actually quite easy to do this, because you can pass a SqlConnection into the LINQ to SQL DataContext on construction. Just run this connection in a transaction and roll that transaction back as soon as you're done.
Here's an example:
string output;
using (var connection = new SqlConnection("your conn.string"))
{
connection.Open();
using (var transaction = connection.BeginTransaction())
{
using (var context = new YourDataContext(connection))
{
// This next line is needed in .NET 3.5.
context.Transaction = transaction;
var writer = new StringWriter();
context.Log = writer;
// *** Do your stuff here ***
context.SubmitChanges();
output = writer.ToString();
}
transaction.Rollback();
}
}
I am always required to provide a RollBack script to our QA team for testing before any change script can be executed in PROD.
Example: Files are sent externally with a bunch of mappings between us, the recipient and other third parties. One of these third parties wants to change, on an agreed date, the mappings between the three of us.
The exec script would maybe update some existing records, delete some now-redundant ones, and insert some new records - with scope_identity used in subsequent relational setup, etc.
If, for some reason, after we have all executed our changes and the file transport is fired up just like in UAT, we see some errors not encountered in UAT, we might multilaterally make the decision to roll back the changes. Hence the rollback script.
SQL has this info when you BEGIN TRAN until you COMMIT TRAN or ROLLBACK TRAN. I guess your question is the same as mine - can you output that info as a script.
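One way to get part of the way there in SQL Server (a sketch against the example table A above, independent of LINQ2SQL): capture the generated ids with an OUTPUT clause at insert time, then generate the matching DELETE statements from them to keep as the undo script.

DECLARE @NewIds TABLE (Id int);

INSERT INTO A (Value)
OUTPUT INSERTED.Id INTO @NewIds (Id)
VALUES ('a'), ('z');

-- build the compensating script to run later (up to 24 hours, per the question)
SELECT 'DELETE FROM A WHERE Id = ' + CAST(Id AS varchar(20)) + ';'
FROM @NewIds;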
Why do you need this?
Maybe you should explore the flashback capabilities of Oracle, which make it possible to travel back in time.
Flashback can reset the content of a table or a database to how it was at a specific moment in time (or at a specific system change number).
See: http://www.oracle.com/technology/deploy/availability/htdocs/Flashback_Overview.htm
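For example, a flashback query (standard Oracle syntax; the table name is from the example above) reads the data as it was at an earlier point, which you could use to reconstruct or undo changes:

-- Oracle: view table A as it was one hour ago
SELECT * FROM A AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);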
