Sybase trigger to find the deleted query

In my Sybase server, some rows of a table (TBL_RESOURCE) are being deleted by an unknown source at random intervals. I have tried a lot, but I am not able to locate which source/file/process is deleting this data. Is there any mechanism to locate this problem? I need to find out who is deleting these rows.
How can we find out who deleted it, and from which file?
Can we use a trigger to find the source of deletion?

OK, so you do not have stored procs or transactions (which would allow the normal security model: grant permissions on sprocs only, with no direct updates to tables by users). Therefore you have direct grants to users, which means they can insert/update/delete from any client-side program, including Excel. It is therefore quite possible that there is no code segment in the source code of the app that deletes from the table. Having rows deleted at random moments is the nature of an online database; protecting it from unauthorised deletes is the job of the DBA.
I presume you have given permissions to specific people, not the whole world, and you are not sure exactly who is doing the nasty. The easiest approach is simply to ask the group.
The next easiest is to turn on auditing for that table, or for the group (or role) of users permitted to access it. But if you have not already set up auditing, that can pose an obstacle.
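If you do go the auditing route, a minimal sketch (assuming the sybsecurity auditing facility is already installed on your ASE server) might look like this:
-- enable auditing server-wide (needs sso_role and the sybsecurity database installed)
sp_configure 'auditing', 1
go
-- run from the database that owns TBL_RESOURCE:
-- audit deletes against this one table, for all logins
sp_audit 'delete', 'all', 'TBL_RESOURCE', 'on'
go
-- the audit trail can then be read from the sybsecurity..sysaudits_01 (through _08) tables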
Third, the trigger.
There are other methods, but they have a substantial overhead (22%), require substantial implementation labour, and you will have to wade through massive amounts of data.
If your environment is as insecure and unstable as it sounds, and the table is not supposed to be deleted from, simply revoke permissions on that (one) table and wait until someone comes to you crying that their permissions have changed.
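For example (the group name here is hypothetical; substitute whichever group or users currently hold the grant):
revoke delete on TBL_RESOURCE from app_users
go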
"This is assuming you don't have every single user logging in as DBA or some other [privileged] account."
Which of course is a very silly thing to do, asking for, pleading for disaster. As silly as granting delete on all tables to all users. I see where you are coming from.

Something like this would do the trick:
create trigger deltrig
on TBL_RESOURCE
for delete
as
BEGIN
insert TBL_LOG (modifiedBy, modifiedDate)
select user_name(), getdate() from deleted
END
(you have to create the logging table TBL_LOG obviously)
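A minimal sketch of that logging table, matching the two columns the trigger inserts (add whatever key columns from TBL_RESOURCE you need to identify the deleted rows):
create table TBL_LOG (
    modifiedBy   varchar(30) not null,  -- user_name() of whoever issued the delete
    modifiedDate datetime    not null   -- when the delete happened
)
go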

Yes, you can use triggers. See the Sybase documentation on how to create delete triggers. In the trigger code, you can choose to log (insert) information such as the current user, user id, etc. into a table for auditing.

Related

Create audit table for a big table with a lot of columns in SQL Server

I know this question has been asked many times. My question here is: I have a table with around 8000 records but around 25 columns. I would like to monitor any changes we make in this table. My server is only SQL Server 2008.
We usually create an audit table for the specific table we monitor and record any changes into it using cursors, as we usually have a lot of columns to monitor. But I don't want that this time!
Do you think that, instead of cursors, I can use a trigger that writes to an audit table (say, audit_XYZ) and monitor changes through it, with columns like field name, old value, new value, update_date, username?
Many thanks!
Short answer
Yes, absolutely use triggers over cursors. Cursors have a bad reputation for being misused and performing terribly, so where possible, avoid using them.
Longer answer
If you have control over the application which is reading/writing to this table, consider having it build the queries for auditing instead. The thing to watch out for with an INSERT/UPDATE/DELETE trigger (which I assume is what you're going for) is that it's going to increase your write time for queries on that table, whereas writing the audit in its own query will avoid this (there is a caveat that I'll detail in the next paragraph). A consideration you also need to make is how much metadata the audit table needs to contain. For example, if your application requires users to log in, you may want to log their username to the audit table, which may not be available to a trigger. It all comes down to the purpose the audit table needs to serve for your application.
An advantage that triggers do have in this scenario is that they are bound to the same transaction as the underlying query. So if your INSERT/UPDATE/DELETE query fails and is rolled back, the audit rows which were created by the trigger will also be rolled back along with it, so you'll never end up with an audit entry for rows which never existed. If you favour writing your own audit queries over a trigger, you'll need to be careful to ensure that they are in the same transaction and get rolled back correctly in the event of an error.
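For what it's worth, here is a minimal sketch of the trigger-based approach, assuming a hypothetical source table dbo.XYZ with an int primary key ID and a monitored varchar column ColumnA (you would repeat the per-column block, or generate it from sys.columns, for each of the ~25 columns):
CREATE TABLE dbo.audit_XYZ (
    audit_id    int IDENTITY(1,1) PRIMARY KEY,
    xyz_id      int           NOT NULL,
    field_name  sysname       NOT NULL,
    old_value   nvarchar(max) NULL,
    new_value   nvarchar(max) NULL,
    update_date datetime      NOT NULL DEFAULT GETDATE(),
    username    sysname       NOT NULL DEFAULT SUSER_SNAME()
);
GO
CREATE TRIGGER dbo.trg_XYZ_audit
ON dbo.XYZ
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- one block per monitored column
    INSERT INTO dbo.audit_XYZ (xyz_id, field_name, old_value, new_value)
    SELECT d.ID, 'ColumnA', d.ColumnA, i.ColumnA
    FROM deleted d
    JOIN inserted i ON i.ID = d.ID
    WHERE ISNULL(d.ColumnA, '') <> ISNULL(i.ColumnA, '');
END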

SQL Server - Rolling back particular transaction only at a later date

I have SQL Server 2014, standard edition. We have several tables where we delete data from, then re-insert it under different primary keys (to merge records for two people in our system that are actually the same). All these changes are performed with a T-SQL transaction.
I understand how transactions and rollbacks work, but what I need is more of an audit/rollback since my users may need to rollback just this transaction only at a later date (not restoring the whole database or table). "Change Data Capture" is not an option since I only have standard edition.
My real question lies in how to store this auditing information. I imagine I'll need a unique key to keep track of this being one unit of work so all these table changes get tied to same group as far as the user is concerned. But if I have a DELETE WHERE ID = #ID query for example, how do I store all these deleted records before deleting so that I can re-insert them later if needed? I'm fine with even storing a large rollback T-SQL script of some kind, I'm just not sure how to generate INSERT scripts that I can store and run later for data that I'm about to delete.
I'm open to any ideas, I just need an architecture that's generic enough to handle multiple tables and the ability to rollback deletions and insertions. I care more about the rollback ability than keeping a pretty audit table.
You cannot do that out of the box: even with full logging, you can roll back an entire database to a point in time, but not specific transactions.
You will have to code something to undo transactions, but I believe simple audit triggers will give you the data you need to make it happen. Here is a good article to get you started:
https://www.mssqltips.com/sqlservertip/4055/create-a-simple-sql-server-trigger-to-build-an-audit-trail/
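As a rough sketch of the storage side (all table and column names here are hypothetical): tag every row you are about to delete with a unit-of-work id as you delete it, using the OUTPUT clause, so the rows can be re-inserted later as one group.
-- hypothetical tables: dbo.Person(ID, FirstName, LastName)
--                      dbo.Person_Deleted(ID, FirstName, LastName, BatchId)
DECLARE @BatchId uniqueidentifier = NEWID();   -- one id per unit of work
DECLARE @ID int = 12345;                       -- the record being merged away

BEGIN TRAN;

    DELETE FROM dbo.Person
    OUTPUT deleted.ID, deleted.FirstName, deleted.LastName, @BatchId
    INTO dbo.Person_Deleted (ID, FirstName, LastName, BatchId)
    WHERE ID = @ID;

    -- ... re-insert the merged data under the surviving primary key here ...

COMMIT;

-- later, to undo just this unit of work (use SET IDENTITY_INSERT if ID is an identity column):
INSERT INTO dbo.Person (ID, FirstName, LastName)
SELECT ID, FirstName, LastName
FROM dbo.Person_Deleted
WHERE BatchId = @BatchId;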

Test data inserted into the Prod database: how do I delete those records?

I mistakenly inserted a few test records using the PROD GUI, which got written to the PROD database. Is there a way to find which tables and columns those records touched?
Thanks
I suppose you don't have a running trace, CDC or other tracking mechanism enabled. So it seems like the following steps would be a reasonable solution:
Make sure that you can't find and drop that data from the application GUI.
Run SQL Profiler Trace using Tuning Template (it will give you enough information). Include ApplicationName and HostName columns to identify your connection.
Insert one more test record using UI (try to do the same operations as you did before)
Stop the trace and find the data you've inserted in it.
Identify other modifications which were made by your application, using ApplicationName, HostName, and SPID.
Create a SQL Script to delete those records.
Identify records which you had inserted before (probably they were inserted into the same tables)
Write a query to delete them too
Open transaction
Delete those records
Check that you have deleted only needed records
Commit transaction (see the sketch below)
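A minimal sketch of those last few steps (table names and IDs here are hypothetical):
BEGIN TRAN;

    DELETE FROM dbo.Orders    WHERE OrderId    IN (1001, 1002);  -- the test records you identified
    DELETE FROM dbo.Customers WHERE CustomerId = 501;

    -- sanity check: row counts, SELECTs against the affected tables, etc.
    -- if anything looks wrong, run ROLLBACK TRAN instead of the commit

COMMIT TRAN;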
UPD: according to the comment on this answer (with which I completely agree), if you have a DEV or TEST environment in which you can do the same operation, do it there and find the modified records. After that, find the modified records in the same tables on PROD.
P.S. I cannot guarantee that by following these steps you will be able to clean up the data you've inserted, but you probably will. I also recommend creating a full backup before deleting data.
If you have proper transaction logging enabled and are using SQL Server 2008 or above, you can try using the Change Data Capture stored procedures (Transact-SQL) to check the changes that happened to the tables. Hope it helps.
Well, you could track through the code to see what tables it touches. Run Profiler on dev to see what code it sends or which procs it calls when you enter a new record the same way that you did on prod.
If you have formal PK and FK relationships, you will likely find out by trial and error, because it won't let you delete the parent records until all the children are deleted. Also test with some other record in the dev environment to figure out what tables might be involved. Or you could script the FKs to see what other tables are related to the parent table.
If you have auditing (as every Enterprise solution should have, but I digress), you can often find out by looking in the audit tables for transactions at that time. Our audit tables have both the datetime of the transaction and the user which makes it easier to filter for these things.
Of course if you know your data model, you should have a pretty good idea before you start. Or if you have a particular id that is in all the child tables and you do not have nice convenient FKs, then you could check the system tables to find which tables have that column name. That assumes a fairly standard naming convention though. If you call the same column different things in different tables, you might miss some.
If you are using an ORM, there should be some way to check what tables are in the object related to the particular task you did. So if you inserted a test order, for instance, check what is contained in the order object.

Append only to table in SQL Server to record immutable events and improve overall performance?

I need to record immutable events in a SQL Server table. How can the following be achieved?
Mark a table as append only
Prevent edits on the table for everyone (similar to #1)
Allow deletes on the table for specific users
Not lock the table for appends
Calculate a hash for a varchar(255) to be used as a secondary index
Improve read, write and indexing performance
Are there performance benefits or any potential side effects to attempting to do this?
Note that the question is asked from a non-SQL-guru perspective, so some of the items might overlap.
Mark a table as append only
Prevent edits on the table for everyone (similar to #1)
Allow deletes on the table for specific users
If you don't want to use application-level security, or if it's not appropriate because you'll be connecting to the DB directly rather than through a service, use SQL Server's security to accomplish this.
Create a database role in the database for each type of user. Create an Append role, grant the role INSERT (and SELECT if it's suitable) permissions to the table. Create a Delete role, grant the role DELETE (and SELECT and INSERT if it's suitable) permissions to the table. Then, add the Logins to the server and the associated Users in the database, and assign the database roles to the created Users. The Logins should only be members of the public built-in role. Now the users are blocked by security.
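A minimal sketch of that setup (role, table, and user names here are hypothetical; on SQL Server 2008 R2 and earlier, use sp_addrolemember instead of ALTER ROLE ... ADD MEMBER):
CREATE ROLE EventAppend;
GRANT INSERT, SELECT ON dbo.Events TO EventAppend;

CREATE ROLE EventDelete;
GRANT INSERT, SELECT, DELETE ON dbo.Events TO EventDelete;

-- make the "no edits" intent explicit
DENY UPDATE ON dbo.Events TO EventAppend, EventDelete;

ALTER ROLE EventAppend ADD MEMBER AppWriterUser;
ALTER ROLE EventDelete ADD MEMBER MaintenanceUser;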
There is no method to make a table actually append-only. Users with the db_owner role will always be able to update or delete from the table. You can create an INSTEAD OF UPDATE trigger, but a user in db_owner can always disable the trigger. You can't stop sysadmin logins or db_owner users from being able to UPDATE the table if they're malicious. They can just take the permissions you denied and disable the security measures you put in place.
Not lock the table for appends
Ensure all indexes on the table are created with ALLOW_ROW_LOCKS = ON and/or ALLOW_PAGE_LOCKS = ON. That should eliminate almost all table locks since the query engine can use row level locking or page level locking instead.
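For example (the index and table names are hypothetical; ON is the default for both options, so this is mostly about making sure nobody has turned them off):
CREATE NONCLUSTERED INDEX IX_Events_EventDate
    ON dbo.Events (EventDate)
    WITH (ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);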
Beyond that, you cannot wholly eliminate locking on an INSERT. Locks are how the database ensures atomicity and concurrency. That said, I can't think of a situation where multiple INSERT statements would cause a deadlock on their own, but it can happen once you combine DELETE statements.
Calculate a hash for a varchar(255) to be used as a secondary index
You want to maintain your own index? Why does this matter? If it's a checksum generated by an external application, then I would expect the field to be just another data field in the table with an index in the database.
If you want the database to index your data in multiple ways, the correct way to do that is to create multiple indexes in the database based on the queries you will need to run.
If you want to duplicate the effect of an index by making a column and using a function in the RDBMS to populate it with values so you can search for it, then I suppose you can use CHECKSUM() or HASHBYTES(), but this strikes me as a questionable design that's likely to have performance issues.
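If you do go the HASHBYTES route, a minimal sketch would be a persisted computed column over the varchar(255) value, indexed as the secondary key (table and column names are hypothetical; the SHA2_256 algorithm needs SQL Server 2012 or later):
ALTER TABLE dbo.Events
    ADD PayloadHash AS CAST(HASHBYTES('SHA2_256', Payload) AS varbinary(32)) PERSISTED;

CREATE NONCLUSTERED INDEX IX_Events_PayloadHash
    ON dbo.Events (PayloadHash);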
Are you just saying you want to create a surrogate key of some kind?
Improve read, write and indexing performance
There are literally hundreds of technical manuals written on this topic. There are consultants and experts who get paid very large salaries just to answer this question all day every day. It is too broad. It depends on your server (memory, disk, CPU), your network, your application, the amount of data you generate, the amount of data you store, how time-sensitive the data needs to be, the number of concurrent users, how you will insert the data, how you will query the data, etc.
It's like asking a bridge engineer, "How can I be sure the bridge I want to build won't fall down?"
"The best way is to become a bridge engineer."
This site can help with specific instances of performance issues.

Migrate and Merge several databases into one

In an update project I have to do the following:
Move 3 databases from SQL2000 to SQL2005 and merge them at the same time. There are already quite a few cross database queries used in SP's and Views.
The current plan is to move each of the old databases into a separate schema in 1 database.
That means we will also have to change our current SP's and Views, we now have:
SELECT OrderId, OrderDate FROM Sales.dbo.Orders
and we expect we will have to change that into:
SELECT OrderId, OrderDate FROM Sales.Orders
The question is: how do we do that as automated as possible?
I know about SED and similar for changing the scripts. I would welcome tips about how to be 'smart' about this, like strategies for partitioning the scripts, performance (tons of INSERT INTO lines) etc.
Note: I did look at the Import/Export Wizard but apparently I would have to set the Schema manually on each output table and fix the SP's through ALTER scripts anyway.
I did this a couple of years ago, and I ran into a few problems that you want to be aware of.
Assumptions:
You've got a single SQL 2000 database server with 3 databases, A/B/C
You want all of the objects to end up in SQL 2005 in database A (we'll refer to that as the Target)
You want to get rid of databases B and C eventually (the old Sources)
You don't have a full-blown test environment where you can automatically restore your production databases every day, and script this again and again until it's right. (That's the best way, and I've taken that approach too, but it's labor-intensive.)
Here's my hard lessons learned:
Don't do the merge and the SQL 2005 change the same day. Either do the merge before you go to 2005, or after, but don't try to accomplish it all in a single outage. It'll be a finger-pointing mess. If it was me, I'd go to 2005 first just to get it out of the way. That way, I know anything that breaks isn't because of a schema change, and those types of breaks are easier to fix. You want at least a week of end user activity on the 2005 box before you declare victory and move on to the merge.
Build the new objects in Target ahead of time. Even if they're not being queried in your live production apps, go ahead and build 'em now. That way you can populate fake test data in there to test your applications ahead of time. Yes, this means mixing live and test data, but frankly, you're already out there working without a net. Be wary of identity fields, though, since you can end up with conflicting records with the same identity number but different data in the Target and Source databases.
Create views in Target ahead of time. You mentioned that you've got views that already do cross-database queries. Copy those from Source to Target now, and tell any other developers (report guys, power users) to start referring to the Target views instead. This isn't going to speed up your own work, but it speeds up THEIR work. If you can get to the point where you can verify that they're only hitting Target (even though the Target views still point to tables in Source) then it'll make troubleshooting easier on migration day. Then you can start denying permissions on the Source views ahead of time.
Sync tables ahead of time. Make a list of all of the tables that need to be moved out of the Sources, and for each one, analyze how it's being updated. If it's only being inserted into (not updated or deleted), like a log table, then write a T-SQL script to start keeping it in sync in Target. Run that script via a SQL Agent job during periods of low activity on your server, like nightly. This way, when it's go-live day, you won't have to push as many records around, meaning your go-live window will be smaller and your Target transaction logs can stay smaller. Tables that are being constantly updated or deleted aren't as easy, and it's up to you whether you decide to sync those as well. We did it for any tables over a million lines.
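A minimal sketch of that kind of nightly sync job for an insert-only table (database, table, and column names here are hypothetical; if the key is an IDENTITY column you would also need SET IDENTITY_INSERT ON/OFF around the insert):
INSERT INTO TargetDB.dbo.AuditLog (LogID, LoggedAt, Message)
SELECT s.LogID, s.LoggedAt, s.Message
FROM   SourceB.dbo.AuditLog AS s
WHERE  s.LogID > (SELECT ISNULL(MAX(LogID), 0) FROM TargetDB.dbo.AuditLog);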
Check for record conflicts between the Source databases. It sounds like this one doesn't apply to you specifically, but I'm noting it here in case anybody else does a merge and is reading it for tips. If you have more than one Source database, dump out the list of objects. If you've got two objects with the same name, check their schema. I've worked with instances where they had a State or Region table in each database, and they were supposed to be identical, but they had identity fields for their primary keys. Each child table (like Customers, which linked to a Region table) referred to the parent table (Region) by the primary key (identity field) - which didn't match from one database to the other. In that case, the smart thing to do is take an outage window ahead of time, before the migration day, to clean those records up with manual update scripts (a minimal sketch follows the list below):
Disable any constraints or foreign key relationships
Change the identity fields (if they're lookup tables, you may be able to turn off the identity stuff and just run with manually specified pk numbers)
Modify the Region table to add a NewID field, matching to what it's going to become, and an OldID field, showing what it used to be
Update all of the child tables (Customers) to use the NewID number instead of the original
Update the Region table so that the real ID field now has the NewID value, and the OldID field has what the Region used to be. (You're probably going to screw something up like miss a child table you didn't know about, and you're going to wonder what it used to be.)
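A minimal sketch of that remap, using hypothetical Region and Customers tables and an arbitrary offset as the new key mapping (constraints and the IDENTITY property on Region are assumed to be disabled/removed first, per the steps above):
ALTER TABLE dbo.Region ADD OldID int NULL, NewID int NULL;
GO
UPDATE dbo.Region
SET    OldID = RegionID,
       NewID = RegionID + 1000;   -- whatever the merged keys should become

UPDATE c
SET    c.RegionID = r.NewID
FROM   dbo.Customers AS c
JOIN   dbo.Region    AS r ON r.RegionID = c.RegionID;

UPDATE dbo.Region
SET    RegionID = NewID;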
Break the migration into pieces. List every stored proc in all of the databases. If any of them can be moved without moving data, do that first. For example, if you've got Source.dbo.usp_RunReport, and it only refers to tables in the Target database, then do that in a first phase. If you've got small system lookup tables that are only used internally in your app, not visible to customers or reports, then put that in the first phase too. It sounds like it's too small to bother with, but the idea is to reduce the amount of panic on migration day. The less you wonder about, the better you can troubleshoot. We moved every static lookup table (State, Region, Calendar, etc) over ahead of time. The amount of work required in Phase 1 - just moving those small, static tables - got management to understand how huge it was going to be to move the rest, and it bought us resources and time we wouldn't have gotten otherwise.
Pre-grow the data files for Target. If you're not using SQL 2005's new Instant File Initialization, data file growths take quite a while. Enable Instant File Initialization if you've got a choice, then grow the data files to make sure they're not fragmented. If they just grow naturally during your migration day, they can be fragmented. If you can't use Instant File Initialization, you still need to pre-grow the files, but you want to do that ahead of time during periods of low activity to speed up the maintenance window.
On migration day, run your inserts one table at a time, or smaller. You want to keep your insert transactions as tight as possible. The smaller your insert transactions, the less space you'll need in the transaction log. Remember that the transaction log will grow with insert statements even in simple mode. After every round of inserts, do a sanity check to make sure that they worked, and that you're not going to run out of drive space for data files or t-log files.
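One way to keep those transactions tight is to move rows in keyed batches rather than in one giant INSERT ... SELECT. A sketch (database, table, and column names are hypothetical; it also assumes the Target table only ever receives rows from this one source, so MAX(OrderID) is a safe watermark):
DECLARE @LastID int, @BatchSize int, @Rows int;
SELECT  @LastID = 0, @BatchSize = 50000, @Rows = 1;

WHILE @Rows > 0
BEGIN
    INSERT INTO TargetDB.dbo.Orders (OrderID, OrderDate, CustomerID)
    SELECT TOP (@BatchSize) OrderID, OrderDate, CustomerID
    FROM   SourceA.dbo.Orders
    WHERE  OrderID > @LastID
    ORDER BY OrderID;

    SET @Rows = @@ROWCOUNT;

    IF @Rows > 0
        SELECT @LastID = MAX(OrderID) FROM TargetDB.dbo.Orders;
END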
After the updates finish, change security on the Source databases. Put every non-SA login into the db_denydatareader and db_denydatawriter roles in the Source databases. That way they can still log in if they've hard-coded the database name in the connection string, but they won't be able to do anything. This makes your troubleshooting easier too: if an app or a query runs into problems, you could consider taking their login out of the deny roles and see if it works - if it does, it's borked. The risk with that is that they might run a transaction that uses the Source database data to update the Target database (get customers from Source, update them in Target) and it might cause issues.
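For example (the database and user names are hypothetical):
USE SourceB;
GO
EXEC sp_addrolemember 'db_denydatareader', 'SomeAppUser';
EXEC sp_addrolemember 'db_denydatawriter', 'SomeAppUser';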
Other options for the Source databases are:
Rename them, so you can still query 'em but the apps won't touch 'em
Detach them, but keep the files available in case you need to troubleshoot
Strip out all logins, and use new logins to access the existing databases just in case. Then if somebody's read-only report is totally borked, you can let it work temporarily by issuing them a new login and telling them it's referring to the wrong database.
After the updates finish, rebuild indexes & statistics on Target. If you're just doing continuous inserts, this isn't a big deal, but if you're merging multiple databases (like two Sales databases that had been broken up into regions of the country) then you'll want to clean things up.
IMHO, use one schema unless you can justify a gain from multiple schemas. This last one is just my two cents, but it sounds like you're going through an awful lot of work to go from 3 databases 1 schema each, to 1 database with 3 schemas. If you're not really sure about the 3 schema thing, you might consider using 1 schema - or else you'll be in another messy rework later on down the road. 3 schemas does make sense if you have specific security needs, but otherwise, just make sure you're getting the bang for the buck that you want. Now would be a great time to go to one schema.
You could give Redgate SQL Compare and Data Compare a shot. They have a schema-mapping feature that should let you map the dbo schema in one database to the Sales schema in another, and then move the tables and procs. It would save you from messing with the SQL export wizard. You would still have to refactor your other objects, though.
I love these two tools.
edit:
I think you can get a fully functional demo too.
edit:
Additionally, they offer SQL Refactor, which does a 'smart' rename. Score!
Could you have a dummy database called SALES that has a VIEW called [Orders]:
USE Sales
GO
CREATE VIEW dbo.Orders
AS
SELECT OrderId, OrderDate, ...
FROM CombinedDatabase.Sales.Orders
and then
SELECT ... FROM Sales.dbo.Orders
will still work.
You won't be able to INSERT / UPDATE that table without some further jiggery-pokery though.
If you could have such views log that they were used, that would enable you to fix the code that called them, but I can't think of a way to do that. However, you could disable each one in turn, run some tests, fix whatever is broken, then move on to the next one, and thus eradicate them by refactoring while keeping a largely working application during the process.
I've used SED for this type of thing, but we have unique names for all our tables and all our columns, and we use variable names within our application that match the database column names - so I would have high confidence that changing xxx_yyy_ID to aaa_bbb_ID in our application would work well, and not have accidental side effects.
If you have actual column/table names like "Sales" and "Orders", I think that something like SED would be risky.
Ok, so my basic understanding of your problem is something like this:
You have three different databases (i.e. Sales, Manu, Inventory)
They have distinct table & procedure names (no table/proc names in Sales exist in Manu or Inventory)
You want all the tables/procs from all three databases in a single database (i.e. SaleManInv)
Some stored procedures in each database explicitly refer to tables in the other databases (i.e. Sales.dbo.lookupItem() explicitly refers to Inventory.dbo.Items table)
Exporting and importing the tables doesn't seem like it will be a problem. For the procs, here is what I would do:
Export one proc from the SQL Server 2000 db to the SQL Server 2005 DB to determine if you need to get rid of the ".dbo." portion of the cross references.
Export all the procs to text files (same folder for all procs)
Use a text editor with a "Search and Replace in Files" feature (I use PSPAD) and replace all the "Sales.dbo." references with "SaleManInv.dbo.", then all the "Inventory.dbo." references with "SaleManInv.dbo.", etc., to convert all the references to the new db.
Then run the exported and modified procs into your new db.
Is that making any sense? :-)
I was in a similar position where I had several SQL Server 2008 databases that were merged into one. My solution was to use Integration Services' Transfer SQL Server Objects task to copy everything into a new target database. All data was copied over along with the tables. Afterwards, in what was a very complex query, I scripted out all stored procedures/functions/views/etc. to a file, changed all cross-database references, and re-created the stored procedures and other objects.
The trick with the stored procedures was to script them out in the order of sysconstraints, in order to ensure that stored procedures or functions that referenced other stored procedures/functions internally were created last.
If there was a tool that I felt could have handled this task in an automated fashion, I would have purchased it immediately.
I would like to know whether it's the same kind of data. Anyway, I would create a new column named 'SourceSystem', so that when the boss comes running and asks:
" - what was the sales diff between databasesystem1 and db2 in 2004?"
then you can answer that. Then, in a year or two, if those questions don't pop up, you can delete that column. Merging data removes the origin of the data.
