SQL Server 2008 R2: Truncate Permissions

Can SQL commands like truncate be restricted at the user level (for specific databases / servers)?
A member of my team truncated a production table thinking he was in his development database and I would like to prevent this from happening again (without completely locking down his permissions).

You might try creating a dedicated guard table (it can even stay empty) with a foreign key constraint that references the table you wish to protect: TRUNCATE TABLE fails on any table that is referenced by a foreign key, so that should protect your table.
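A rough sketch of that idea (all names here are hypothetical; dbo.ProtectedTable is assumed to have a primary key column Id):
CREATE TABLE dbo.TruncateGuard
(
    GuardId     int NOT NULL PRIMARY KEY,
    ProtectedId int NULL
        CONSTRAINT FK_TruncateGuard_ProtectedTable
        REFERENCES dbo.ProtectedTable (Id)
);
-- TRUNCATE TABLE dbo.ProtectedTable now fails because the table is referenced by a foreign key;
-- INSERT, UPDATE and DELETE keep working as before.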
That said, you should probably get rid of this guy.

As far as I remember, revoking ALTER permission on the table should do the trick (in all versions from 2005 through 2012).

TRUNCATE TABLE requires ALTER (DDL) permission on the table, since it is treated as a DDL operation and is only minimally logged. Even if a user has db_datawriter (or INSERT/UPDATE/DELETE), they cannot truncate the table without ALTER or DDL_ADMIN rights.
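A minimal sketch of that approach (table and user names are hypothetical): the developer keeps INSERT/UPDATE/DELETE via db_datawriter but is explicitly denied ALTER, so TRUNCATE TABLE fails with a permission error while normal DML keeps working.
DENY ALTER ON OBJECT::dbo.ImportantTable TO DevUser;
-- TRUNCATE TABLE dbo.ImportantTable;          -- now fails for DevUser
-- DELETE FROM dbo.ImportantTable WHERE ...;   -- still allowed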

Related

Searching for a way to copy table data from one SQL Server database to another (concerning Identity and Foreign Keys)

Initial question (solution comes afterwards):
I have the following challenge: there is an Oracle database in which a piece of software (Infor Supplier Exchange) once created tables and filled them with data. This db shall be migrated to SQL Server, and then an upgrade of the Infor software shall be executed against the migrated data.
A colleague of mine already used a script by Microsoft to migrate the Oracle db to SQL Server, and the result is now available to me. Even though the "Keep Identity" flag was set, no primary key in the new db has its Identity (autoincrement) property set - but that is needed by the Infor software to add data later.
I found a way via SSMS to change the Identity (as well as its seed) for each relevant db table: Right-click on the table, design, change the "Identity Specification" manually. But I have over 300 tables: The effort would cost hours (and sanity).
I also found out that I can use SSMS's "export data" task. You have to know that the Infor software provides a db installer which creates all necessary tables, keys, identity properties, etc. with an EMPTY database. So I can basically export the data from the "Oracle migrated old db" to the "Infor prepared new db" since they (should) have the same table names, keys etc. - except the Identity property and the user data.
In the export data task you can check "Enable identity insert". The problem is that this SSMS feature aborts when it processes a table with foreign keys whose referenced table does not exist yet. So I could go through the old db again, execute the "copy data" task for all tables without foreign keys first, and then try the remaining tables until all data is copied to the new db. But this is again a lot of effort, since I have to go back on every error or check all constraints beforehand.
Do you have a better approach? Is it possible to copy data from db A (with 300+ tables) to db B (with the same table structure), hoping that a tool resolves the correct order of tables based on their foreign key constraints?
If you have questions on the issue I can explain in more detail. Thanks in advance.
Solution:
I solved the task by disabling constraints and triggers temporarily. The steps are:
EXEC sp_MSForEachTable "ALTER TABLE ? NOCHECK CONSTRAINT all"   -- disable all foreign key and check constraints
EXEC sp_MSForEachTable "ALTER TABLE ? DISABLE TRIGGER all"      -- disable all triggers
EXEC sp_MSForEachTable "DELETE FROM ?"                          -- empty every table
I had to clear the target database's tables since they are filled with some sample data by the Infor installer. The data export task can append rows or can try to remove existing rows (with the same primary keys), but the latter uses TRUNCATE internally, which doesn't work with foreign key constraints, even when they are disabled by the above command.
Next: execute the SSMS database task "Export data". Ignore data type conversion errors (some types differ between the Oracle-migrated schema and the target SQL schema, such as varchar vs. nvarchar, which I checked and judged as not critical).
EXEC sp_MSForEachTable "ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all"                            -- re-enable and re-validate constraints
EXEC sp_MSForEachTable @command1 = "print '?'", @command2 = "ALTER TABLE ? ENABLE TRIGGER all"    -- re-enable triggers
Using the vendor's SQL Server database schema and loading the data yourself is typically the correct approach for migrating to SQL Server with packaged software. But there may be additional guidance available from the vendor.
Instead of trying to load the tables in an order that is compatible with the foreign key constraints (which is not always even possible), disable all the foreign keys before loading the database and re-enable them afterwards. See e.g. Temporarily disable all foreign key constraints.
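For a single constraint, the pattern looks roughly like this (table and constraint names are hypothetical; WITH CHECK makes SQL Server re-validate the existing rows when the constraint is re-enabled):
ALTER TABLE dbo.OrderLine NOCHECK CONSTRAINT FK_OrderLine_Order;              -- stop enforcing the FK
-- ... load the data in whatever order is convenient ...
ALTER TABLE dbo.OrderLine WITH CHECK CHECK CONSTRAINT FK_OrderLine_Order;     -- re-enable and re-validate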

Bypass SQL Server 2008 R2 Express 10 GB limit

The 10 GB limit has been reached, and various constraints force us to work around it until a set of patches can be put in place. An appropriate license is already in place on another server, but unfortunately the migration cannot be done in a reasonable time. To address the most pressing need, we must find a way to get around the limit imposed by SQL Server Express. Shrinking, aliasing, file splitting, and index changes have all been attempted without success. Suggestions?
Since the 10GB limit is per database you can use the following trick to split the data among several databases. Warning: people with strong DB beliefs please close your eyes now :-)
Move some tables to another database, choosing a set of tables that doesn't break foreign key constraints.
For each table create a view with the same name in the original database like this:
CREATE VIEW TableName AS
SELECT * FROM TheOtherDB..TableName
In this way you use the view as if it were the table and you don't have to change a single query; SQL Server allows INSERT, UPDATE and DELETE on that type of view as if it were a table, but the data is stored in the other DB.
Of course after you migrate to the new server you should move the data back to one database.
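A hedged sketch of the whole trick, with purely illustrative names (MainDB, OverflowDB, dbo.BigTable); note that SELECT INTO copies only the data, so any indexes and constraints on the moved table have to be recreated in the other database:
USE OverflowDB;
SELECT * INTO dbo.BigTable FROM MainDB.dbo.BigTable;   -- copy the data out of the full database
GO
USE MainDB;
DROP TABLE dbo.BigTable;                               -- remove the original copy
GO
CREATE VIEW dbo.BigTable AS                            -- same name, so existing queries keep working
SELECT * FROM OverflowDB.dbo.BigTable;
GO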

Collation change on MS SQL Server 2012

Dear all, currently I am researching how I could handle changing the collation on the database.
Somebody made the unusual decision to create an accent-sensitive database for global use... but I am on my way to handling this!
The REASON for changing the collation is that the database contains data collected from different countries, and as we all know, some cultures have their own letters.
Out of respect for the customers, our organization would like to have an accent-insensitive database. That will allow users to request data from the server without any limitations when using local characters.
As far as I have found out, one option may be to drop the constraints etc., change the collation, and then bring everything back. In this case I am afraid whether this would be enough and whether it would affect the already existing data (columns).
I have also found an article on collation change on 2005 and 2008 servers; however, it does not cover the 2012 server.
I am also taking the complexity of this example into consideration.
I believe that I am not in an easy situation, but I am hoping to get a few pieces of advice on what would be the best and safest way to handle this.
Thank you for your concerns and assistance.
UPDATE: Let me add what architecture we have: the complete system contains 4 databases and more than 1,000 tables in total, so my expectation is that not all of the possible approaches will work in an optimal way.
I too had to deal with a similar issue, for a different reason: ancient databases with an old SQL collation, installed ages ago on a SQL 6.5 server that had been in-place upgraded through each version from SQL 7 to SQL 2005 and now had to be upgraded to SQL 2012.
Why all these in-place upgrades? Because the actual collation was the server collation and was so old that it is not even available during the install process of a recent (2000+) version of SQL Server...
I decided to drop all that old rubbish, so I had to find a way that allowed me to move to a new installation with a Windows collation.
I had to rule out a data migration (creating a new database and importing the data) because of the lack of documentation and the huge number of customizations, triggers, hidden rules and so on.
The solution I used (the order matters):
Disable automatic statistics generation
Script the creation of all foreign keys and then drop them
Script unique and primary key indexes and then drop them
Script all remaining indexes and then drop them
Script custom statistics and then drop them
Script CHECK and DEFAULT constraints and then drop them
Now you can run the ALTER commands needed to change the collation of the columns and change the collation of the database itself (a sketch follows below).
When done, repeat the above in reverse order to rebuild all the needed objects.
If the database is as old as mine, you may run into something funny, like an existing foreign key that references columns with different data types.
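A minimal sketch of that ALTER step, with hypothetical names and target collation; note that changing the database collation only affects objects created afterwards, so every existing character column has to be altered explicitly, repeating its full data type and nullability:
ALTER DATABASE MyDb COLLATE Latin1_General_CI_AI;   -- requires exclusive access to the database

ALTER TABLE dbo.Customers
    ALTER COLUMN CustomerName nvarchar(100) COLLATE Latin1_General_CI_AI NOT NULL;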
Changing the collation of all existing columns is a real pain. I suggest a side-by-side migration rather than altering each column individually. Create a new database with the desired collation containing only empty tables, copy the data from the old db to the new one using INSERT...SELECT (or the ETL tool of your choice), and then create the constraints, indexes, and other database objects.
Consider upvoting the "Make it easy to change collation on a database" SQL Server feature request.
There are a number of complicated solutions on the internet for in-place collation changes, but the simplest (and safest) way we have found is to script out the database, alter the script to create a new db with the collation set at the start, and then import the data into the new database.
We achieve this using MS SQL Server 2012 Management Studio in the following way:
Script out all database objects with Tasks -> Generate Scripts -> Script entire Database and all Database objects
Alter the script with the following 2 changes and then run it to create a new database (see the sketch after this list):
a) Change the DB name to MY-NEW-DB
b) Under the CREATE DATABASE statement add: ALTER DATABASE [MY-NEW-DB] COLLATE Latin1_General_CS_AS
If desired, use a tool like RG SQL Compare to compare the old and new databases to verify that all indexes, constraints, types etc. are the same and that only the collation on the relevant columns was changed.
Run Tasks -> Import Data, ensuring 'Enable Identity Insert' is checked. All data is then transferred to the new case-sensitive database correctly.
Run DBCC CHECKDB if you wish to check consistency
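A hedged sketch of the edit from step 2 (the generated file specification is elided and the database name is purely illustrative); the collation can also be set directly in the CREATE DATABASE statement:
CREATE DATABASE [MY-NEW-DB]
    COLLATE Latin1_General_CS_AS;
GO
-- or, keeping the generated CREATE DATABASE untouched, add right after it:
ALTER DATABASE [MY-NEW-DB] COLLATE Latin1_General_CS_AS;
GO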

Checking what 'DROP' queries were run on SQL Server

Someone keeps dropping tables on one of our databases as soon as I gain access to the server. I don't know who this someone is. I nearly lost my job once because of this person.
So I was wondering: is there a way I can check which user ran a query like DROP TABLE my_table, so that I can prove to my boss that I am innocent?
I found this article which may help you.
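Independently of that article, the default trace (enabled by default on 2005 and newer) usually records object drops, so a query along these lines can show who issued a recent DROP TABLE. The EventClass/ObjectType filter values are my assumptions, and the default trace only keeps a limited, rolling history:
DECLARE @path nvarchar(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT t.StartTime, t.LoginName, t.HostName, t.ApplicationName,
       t.DatabaseName, t.ObjectName
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
WHERE t.EventClass = 47        -- Object:Deleted
  AND t.ObjectType = 8277      -- user table
ORDER BY t.StartTime DESC;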
On SQL Server 2005 or newer, you could also investigate DDL triggers, which would even allow you to prohibit certain DROP TABLE statements:
CREATE TRIGGER safety
ON DATABASE
FOR DROP_TABLE
AS
PRINT 'You must disable Trigger "safety" to drop tables!'
ROLLBACK
;
This would basically prevent anyone from dropping a table while the trigger is enabled.
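When a table legitimately needs to be dropped, the trigger can be switched off just for that change, for example:
DISABLE TRIGGER safety ON DATABASE;
-- DROP TABLE dbo.ObsoleteTable;   -- hypothetical table
ENABLE TRIGGER safety ON DATABASE;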

What built in mechanism does SQL Server have to do Flashback Queries?

Think that says it all?
None. SQL Server does not have an equivalent feature.
UPDATE: From SQL Server 2016 on, this information is outdated. See the comments and answers below.
I know this question is quite old, but with SQL Server 2016, Temporal Tables is a feature:
https://learn.microsoft.com/en-us/sql/relational-databases/tables/temporal-tables
Maybe it can help others who come to this topic searching for something similar to the Oracle Flashback feature.
With temporal tables enabled, you can query a table AS OF a specific timestamp and retrieve the rows as they were at that point in time, just like you were used to doing in Oracle:
SELECT * FROM EMPLOYEE AS OF TIMESTAMP TO_TIMESTAMP('13-SEP-04 08:50:58', 'DD-MON-YY HH24:MI:SS');
The equivalent query in SQL Server, for a table with SYSTEM_VERSIONING = ON, would be:
SELECT * FROM EMPLOYEE FOR SYSTEM_TIME AS OF '2004-09-01 08:50:58'
To enable SYSTEM_VERSIONING for an existing table with rows you may use the following script:
ALTER TABLE [dbo].[TABLE] ADD
    [SysStartTime] datetime2(0) GENERATED ALWAYS AS ROW START HIDDEN NOT NULL
        CONSTRAINT DF_Inventory_SysStartTime DEFAULT '1900-01-01 00:00:00',
    [SysEndTime] datetime2(0) GENERATED ALWAYS AS ROW END HIDDEN NOT NULL
        CONSTRAINT DF_Inventory_SysEndTime DEFAULT '9999-12-31 23:59:59',
    PERIOD FOR SYSTEM_TIME ([SysStartTime], [SysEndTime]);
ALTER TABLE [dbo].[TABLE] SET (SYSTEM_VERSIONING = ON);
After enabling SYSTEM_VERSIONING, the history table will show up under the table where you enabled versioning.
To Remove SYSTEM_VERSIONING from a table:
ALTER TABLE [dbo].[TABLE] SET (SYSTEM_VERSIONING = OFF);
ALTER TABLE [dbo].[TABLE] DROP PERIOD FOR SYSTEM_TIME;
ALTER TABLE [dbo].[TABLE] DROP COLUMN [SysStartTime], [SysEndTime];
For more info you can visit the following link (or official Microsoft documentation referenced before):
http://www.sqlservercentral.com/articles/SQL+Server+2016/147087/
The closest equivalent is probably Database Snapshots. You can create a database snapshot at the moment of interest and then report against the snapshot. Unlike flashbacks, the moments at which SQL Server snapshots are taken have to be pre-determined.
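A minimal sketch of creating and querying a snapshot (database name, logical file name and path are hypothetical; NAME must match the logical data file name of the source database):
CREATE DATABASE MyDb_Snapshot_0800
ON (NAME = MyDb_Data, FILENAME = 'C:\Snapshots\MyDb_Snapshot_0800.ss')
AS SNAPSHOT OF MyDb;

SELECT * FROM MyDb_Snapshot_0800.dbo.Employee;   -- the data as it was when the snapshot was taken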
On SQL Server 2008 you can use Change Data Capture; with this feature you can do a lot more than Oracle Flashback. (There is a stored procedure to revert the database on SQL Server 2008; if you want, I can provide that for you.)
Yes, we can use the Change Data Capture and Change Tracking features, which are built-in mechanisms in SQL Server and very similar to Flashback in Oracle.
When you apply the Change Data Capture feature to a database table, a mirror of the tracked table is created with the same column structure as the original table, but with additional columns that include the metadata used to summarize the nature of the change in each database table row. The SQL Server DBA can then easily monitor the activity for the logged table using these new audit tables.
Change tracking is a lightweight solution that provides an efficient change tracking mechanism for applications. Typically, to enable applications to query for changes to data in a database and access information that is related to the changes, application developers had to implement custom change tracking mechanisms. Creating these mechanisms usually involved a lot of work and frequently involved using a combination of triggers, timestamp columns, new tables to store tracking information, and custom cleanup processes.
Different types of applications have different requirements for how much information they need about the changes. Applications can use change tracking to answer the following questions about the changes that have been made to a user table:
What rows have changed for a user table?
Only the fact that a row has changed is required, not how many times the row has changed or the values of any intermediate changes.
The latest data can be obtained directly from the table that is being tracked.
Has a row changed?
The fact that a row has changed and information about the change must be available and recorded at the time that the change was made in the same transaction.
For more information on how to use Change Data Capture (CDC) and Change Tracking in SQL Server, please check out Pinal Dave's post.
See also: Change Tracking
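A minimal sketch of enabling CDC on a database and one table (names are hypothetical; SQL Server Agent must be running for the capture job, and the feature requires an edition that supports CDC):
USE MyDb;
EXEC sys.sp_cdc_enable_db;                      -- enable CDC at the database level
EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Employee',
     @role_name     = NULL;                     -- no gating role
-- Changes are then exposed through functions such as cdc.fn_cdc_get_all_changes_dbo_Employee().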
SQL Server 2016 introduced temporal tables (aka history tables), which enable developers to query the data stored in a database table as it was in the past.
I mean developers can build applications that let users display a table's historical data, or the view of a table at a certain time in the past.

Resources