How to delete all tables from db? Cannot delete from sys.tables - sql-server

How can I perform this query, one way or another:
delete from sys.tables where is_ms_shipped = 0
What happened is, I executed a very large query and forgot to put a USE directive at the top of it. Now I have a zillion tables in my master db, and I don't want to delete them one by one.
UPDATE: It's a brand new database, so I don't have to care about any previous data; the final result I want to achieve is to reset the master db to its factory state.

If this is a one-time issue, use SQL Server Management Studio to delete the tables.
If you must run a script, then very, very carefully use this:
EXEC sp_msforeachtable 'DROP TABLE ?'

One method I've used in the past, which is pretty simple and relatively foolproof, is to query the system tables / info schema (depending on exact requirements) and have it output the list of commands I want to execute as the result set. Review that, copy & paste, run: quick & easy for a one-time job, and because you're still manually hitting the button on the destructive bit, it's (IMHO) harder to trash stuff by mistake.
For example:
select 'drop table ' + name + ';', * from sys.tables where is_ms_shipped = 0
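If some of the stray tables landed in a non-dbo schema, or have names that need bracket-quoting, a variant of the same idea (again: review the output before you run it):
select 'drop table ' + quotename(schema_name(schema_id)) + '.' + quotename(name) + ';'
from sys.tables
where is_ms_shipped = 0;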

No backups? :-)
One approach may be to create a Database Project in Visual Studio with an initial Database Import. Then delete the tables and synchronize the project back to the database. You can do the deletes en masse with this approach while being "buffered" with a commit phase and UI.
I am fairly certain the above approach can be used to take care of the table relationships as well (although I have not tried in the "master" space). I would also recommend using a VS DB project (or other database management tool that allows schema comparing and synchronization) to make life easier in the future as well as allowing version-able (e.g. with SCM) schema change-tracking.
Oh, and whatever is done, please create a backup first. If nothing else, it is good training :-)

Simplest and shortest way I did was this:
How to Rebuild System Databases in SQL Server 2008
The problem with all the other answers here is that they don't work: there are related tables, and SQL Server refuses to execute the drops.
This one not only works but is actually what I am looking for: a "reset to factory defaults", as stated in the question.
Also, this one will delete everything, not only tables.
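For reference, the rebuild described in that article is run from the SQL Server setup media rather than from T-SQL. From memory of the documented procedure (so verify the switches against the article before running it, and note that it resets all system databases, logins included), it looks roughly like:
setup.exe /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=MSSQLSERVER /SQLSYSADMINACCOUNTS=DOMAIN\AdminGroup /SAPWD=StrongPasswordHere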

This code could be better, but I was trying to be cautious as I wrote it. I think it is easy to follow and easy to tweak for testing before you commit to deleting your tables.
DECLARE
    @Prefix VARCHAR(50),
    @TableName NVARCHAR(255),
    @SQLToFire NVARCHAR(350)

SET @Prefix = 'upgrade_%'

WHILE EXISTS (
    SELECT name
    FROM sys.tables
    WHERE name LIKE @Prefix
)
BEGIN
    SELECT TOP 1 --This query only iterates if you are dropping tables
        @TableName = name
    FROM sys.tables
    WHERE name LIKE @Prefix

    SET @SQLToFire = 'DROP TABLE ' + @TableName
    EXEC sp_executesql @SQLToFire;
END

I did something really similar, and what I wound up doing was using Tasks --> Generate Scripts to script only the drops for all the database objects of the originally intended database, meaning the database I was supposed to run the giant script on (and did). Be sure to include IF EXISTS in the advanced options, then run that script against master and BAM: it deletes everything that exists in the original target database that also exists in master, leaving the differences, which should be the original master items.

Not very elegant, but this is a one-time task.
WHILE EXISTS(SELECT * FROM sys.tables where is_ms_shipped = 0)
EXEC sp_MSforeachtable 'DROP TABLE ?'
Works fine on this simple test (it fails to drop a on the first pass, proceeds onwards to drop b, then drops a on the second pass):
create table a
(
    a int primary key
)
go
create table b
(
    a int references a (a)
)
insert into a values (1)
insert into b values (1)

Related

Exec tSQLt.Faketable crashing our original table constraint and data

While debugging the tSQLt code, I directly ran the statements below without wrapping them in a stored procedure, and my original tables' constraints got deleted and some data went missing from the original tables.
Exec tSQLt.FakeTable @TableName = N'DBO.Employee', @Identity = 1;
Exec tSQLt.FakeTable @TableName = N'DBO.Salary', @Identity = 1;
How do I prevent running a FakeTable statement outside of a tSQLt test from impacting the original table?
There is no way to prevent executing tSQLt.FakeTable outside of the framework. There are also good reasons to not prevent that, so I do not think that adding that functionality is the right approach.
However, if you’re using the newest version of tSQLt, you can use tSQLt.UndoTestDoubles to get the original object(s) back.
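If it is available in your version, the call itself is trivial; run it in the database where the fakes were created (I believe there is also a @Force parameter for stubborn cases, but check your version's documentation):
EXEC tSQLt.UndoTestDoubles;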
Ugh, been there... You can't prevent it, short of contributing to the project and putting a pull request in to add the functionality.
FakeTable creates a backup of your original table so you should be able to get the original table back. These backup table names start with tSQLt.tempobject and end in an identifier. You can delete the new "fake" table (which now has the name of your original table) and rename the tempobject table if/when you find it.
Something I've done in the past is to query for a column that I know is in the table to find the name of the tSQLt table:
SELECT t.name
FROM sys.columns c
INNER JOIN sys.tables t ON t.object_id = c.object_id
WHERE c.name = 'SomeCol';

Need to identify if a T-SQL query I run is modifying any records without access to logs

I'm looking to modify a stored procedure that has a long chain of stored procedures within it. I'm not sure which parts of this proc will cause updates to live tables, though. What I want to do is keep all of the temp tables it creates and select from them, but prevent any record changes via update, insert, delete, etc. Ideally I want to be able to see this info from directly inside SSMS without DBA-level permissions. I'm running this on a test DB, so it would also be appropriate if something could tell me what tables were changed after the fact. I could then find the update, prevent it, roll back to a snapshot, and run it again until it shows 0 changes.
I've tried going through by hand and making the changes by searching for keywords like UPDATE, INTO, and EXEC. However, this involves a lot of human judgment and adds a lot of room for human error. I've also considered wrapping this in a begin..rollback transaction to undo any unintended changes, but this proc can take upwards of 10 minutes to run and I don't want an open transaction that long. I'm also not entirely certain that there isn't a commit tran hiding in one of the stored procedures called by this one.
Any help provided would be greatly appreciated, thanks.
As long as the stored procedures don't have dynamic SQL and such, you could use the built-in utilities to recursively find any referenced tables and stored procedures. This code will show referenced columns and the type of action. I have never used this on a large scale, so definitely spot-check as you go.
MSDN Documentation
CREATE TABLE dbo.someData
(
    id INT,
    name VARCHAR(100)
)
GO
CREATE OR ALTER PROC dbo.doSomething
AS
SELECT name FROM dbo.someData
UPDATE d
SET d.id = 2
FROM dbo.someData d
GO
SELECT
--SP, View, or Function
ReferencingName = o.name,
ReferencingType = o.type_desc,
--Referenced Field
ref.referenced_database_name, --will be null if the DB is not explicitly called out
ref.referenced_schema_name, --will be null or blank if the schema is not explicitly called out
ref.referenced_entity_name,
ref.referenced_minor_name,
--these will tell you what it's doing
ref.is_updated,
ref.is_selected
FROM
sys.objects o
CROSS APPLY
sys.dm_sql_referenced_entities('dbo.' + o.name, 'Object') ref
WHERE
o.type IN ('P')
AND o.name LIKE '%something%'

Granularity of "Block incremental deployment if data loss might occur"

In SQL Server Data Tools you have the deployment option "Block incremental deployment if data loss might occur", which I'd wager is a best practice to keep checked.
Let's say we have a table foo and a column bar which is now redundant: no dependencies, foreign keys, etc., and we have already removed references to this column in our data layer and stored procedures, as it's simply not used. In other words, we are satisfied that dropping this column will have no adverse effects.
There are a couple of flies in the ointment:
- The column has data in it
- The database is published to hundreds of distributed clients, and it could take months for the change to ripple out to all clients
As the column is populated, publishing will fail unless we change the "Block incremental deployment if data loss might occur" option. This option is at the database level, not table level however, and so due to the distributed nature of the clients, we'd have to turn off the "data loss" option for months before all databases were updated, and turn it back on once all clients have updated (our databases have version numbers set by our build).
You may think we could solve this with a pre-deployment script such as
if exists (select * from information_schema.columns where table_name = 'foo' and column_name = 'bar')
begin
    alter table foo drop constraint DF_foo_bar
    alter table foo drop column bar
end
But again this fails unless we turn the "data loss could occur" option off.
I'm simply interested as to what others have done in this scenario as I'd like to have granularity which doesn't currently seem possible.
So I've been accomplishing this task via the following steps:
1) Since we are going to make table #Foo, make sure to drop that table before moving forward if it exists.
2) In a pre-deployment script: If the column exists, create a temporary table #Foo and select all rows from Foo into #Foo.
3) Remove the column from #Foo
4) Delete all rows in Foo (now there will be no data loss since no data exists)
5) In a post-deployment script: If #Foo exists, select all rows from #Foo into Foo
6) Drop table #Foo
And code:
pre-deployment script
if (Object_ID('TempDB..#Foo') is not null)
begin
    drop table #Foo
end

if exists (
    select *
    from sys.columns
    where Name = 'Bar'
        and Object_ID = Object_ID('Foo')
)
begin
    select * into #Foo
    from Foo

    alter table #Foo drop column Bar

    -- Now that we've made a complete backup of Foo, we can delete all its data
    delete Foo
end
post-deployment script
if (Object_ID('TempDB..#Foo') is not null)
begin
    insert into Foo
    select * from #Foo

    drop table #Foo
end
Caveat: Depending on your environment, it might be wiser to depend on versions rather than column & temp table existence in your conditionals
The PreDeployment script doesn't work the way you are hoping to use it because of the order of operations for SSDT:
1. Schema comparison
2. Script generation for the schema differences
3. Execute pre-deployment script
4. Execute generated script
5. Execute post-deployment script
So of course, the schema difference is identified as part of #2 and appropriate SQL is generated to drop the column (including the check to block on data loss), before your manual pre-deployment script can 'get rid of it'.
If you take a look at the script generated behind the scenes to detect (and therefore block) on possible data loss, it checks to see if there are any rows by running something along the lines of this:
IF EXISTS (select top 1 1 from [dbo].[Table]) RAISERROR ('Rows were detected. The schema update is terminating because data loss might occur.', 16, 127)
This means the simple existence of rows will stop the column being dropped. We haven't found any way around this other than manually dealing with the problem outside (and before) the SSDT deployment, using conditional deployment steps based on version numbers.
You mention distributed clients, which implies you have some sort of automated publication/update mechanism. You also mention version numbers as part of the database - could you include in your deploy (before the sqlpackage.exe command I assume you are running) a manual SQL script? This is akin to what we do (ours is in Powershell, but you get the gist):
IF VersionNumber < 2.8
BEGIN
ALTER TABLE X DROP COLUMN Y
END
Disclaimer: in no way is that valid SQL, it's simply pseudo code to imply an idea!
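To make that concrete in T-SQL, here is a rough sketch; the dbo.VersionInfo table and the version value are stand-ins for whatever your build's versioning scheme actually uses:
-- assumes a one-row dbo.VersionInfo table holding the schema version
IF EXISTS (SELECT 1 FROM dbo.VersionInfo WHERE VersionNumber < 2.8)
   AND COL_LENGTH('dbo.X', 'Y') IS NOT NULL -- column still exists
BEGIN
    ALTER TABLE dbo.X DROP COLUMN Y;
END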

Can I create a VIEW that has a dynamic name in another DB with MS SQL?

I found strange rules in MS SQL CREATE VIEW syntax.
It must be the first statement in a query batch, and the view must be created in the current database.
I need to create VIEWs whose names are described by string variables (type: VARCHAR or NVARCHAR), and those VIEWs should be created in other databases.
Because of the rule that CREATE VIEW must be the first statement in a batch, it cannot come after a USE statement.
So I tried to change databases with USE and GO statements, but GO seems to clear all the variables, so the variables holding the view names are no longer available after the GO.
Do you have any advice for me?
And if you know the reasons for the CREATE VIEW syntax rules, please tell me.
Oh, sorry, I missed one thing: the names of the databases are also dynamic.
And the VIEWs I want to make should not only access tables in other databases but also be created in other databases.
Though I don't know OLAP well, I think this situation involves OLAP.
You can dynamically build a SQL string and execute it.
DECLARE @ViewName VARCHAR(100)
SET @ViewName = 'MyView'
USE MyDB;
EXEC ('CREATE VIEW dbo.' + @ViewName + ' '
    + 'AS SELECT * FROM dbo.MyTable')
CREATE SYNONYM Resource1 FOR LinkedServer.Database.Schema.Table
GO
CREATE VIEW Resource1View
AS
SELECT *
FROM Resource1
GO
Now you can repoint the synonym as much as you like and all your views referencing it will refer to the correct thing. If this doesn't solve the problem, then I would suggest that the way you're designing your system is not best. Please describe more about what you are doing and why, so we can advise you better.
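One note on repointing: T-SQL has no ALTER SYNONYM, so changing where a synonym points means dropping and recreating it (the names below are placeholders):
DROP SYNONYM Resource1;
GO
CREATE SYNONYM Resource1 FOR OtherServer.OtherDatabase.dbo.SomeTable;
GO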
As for "GO", is it actually not a SQL statement. It is never submitted to the server. The client sees the line with GO on it, and separates the submitted query into separate batches. A trace will prove this, as will EXEC 'SELECT 1' + CHAR(13) + CHAR(10) + 'GO' + CHAR(13) + CHAR(10) + 'SELECT 2'.
If you're using OLAP as in Analysis Services, then I'm not experienced enough with that to help you, but I would think there'd be ways to choose the database to connect to just like in SSRS, and that queries don't have to live in the database but could live in the SSAS application.
I found it. It's the nested EXEC.
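For anyone else who hits this, a rough sketch of the nested EXEC (database, view, and table names are placeholders): the outer EXEC switches database context for the duration of its own batch, and the inner EXEC gives CREATE VIEW the fresh batch it demands, inside that context.
DECLARE @DbName SYSNAME, @ViewName SYSNAME, @inner NVARCHAR(MAX)
SET @DbName = N'OtherDb'
SET @ViewName = N'MyView'
-- the inner EXEC runs CREATE VIEW as the first statement of its own batch
SET @inner = N'EXEC(''CREATE VIEW dbo.' + @ViewName
           + N' AS SELECT * FROM dbo.MyTable'')'
-- the outer EXEC changes context to the target database first
EXEC (N'USE ' + QUOTENAME(@DbName) + N'; ' + @inner)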

Help with sp_msforeachdb-like queries

Where I'm at, we have a software package running on a mainframe system. The mainframe makes a nightly dump into SQL Server, such that each of our clients has its own database in the server. There are a few other databases in the server instance as well, plus some older client dbs with no data.
We often need to run reports or check data across all clients. I would like to be able to run queries using sp_msforeachdb or something similar, but I'm not sure how I can go about filtering unwanted dbs from the list. Any thoughts on how this could work?
We're still on SQL Server 2000, but should be moving to 2005 in a few months.
Update:
I think I did a poor job asking this question, so I'm gonna clarify my goals and then post the solution I ended up using.
What I want to accomplish here is to make it easy for programmers working on queries for use in their programs to write the query using one client database, and then pretty much instantly run (test) code designed and built on one client's db on all 50 or so client dbs, with little to no modification.
With that in mind, here's my code as it currently sits in Management Studio (partially obfuscated):
use [master]
declare @sql varchar(3900)
set @sql = 'complicated sql command added here'
-----------------------------------
declare @cmd1 varchar(100)
declare @cmd2 varchar(4000)
declare @cmd3 varchar(100)
set @cmd1 = 'if ''?'' like ''commonprefix_%'' raiserror (''Starting ?'', 0, 1) with nowait'
set @cmd3 = 'if ''?'' like ''commonprefix_%'' print ''Finished ?'''
set @cmd2 =
    replace('if ''?'' like ''commonprefix_%''
    begin
        use [?]
        {0}
    end', '{0}', @sql)
exec sp_msforeachdb @command1 = @cmd1, @command2 = @cmd2, @command3 = @cmd3
The nice thing about this is that all you have to do is set the @sql variable to your query text. Very easy to turn into a stored procedure. It's dynamic sql, but again: it's only used for development (famous last words ;) ). The downside is that you still need to escape single quotes used in the query, and much of the time you'll end up putting an extra ''?'' as ClientDB column in the select list, but otherwise it works well enough.
Unless I get another really good idea today, I want to turn this into a stored procedure and also put together a version that uses a temp table to put all the results in one resultset (for select queries only).
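As a rough sketch of the temp-table idea (dbo.Orders and the count are placeholders for the real query): a temp table created in the session is visible inside sp_msforeachdb's dynamic SQL, so the inserts all land in one place.
CREATE TABLE #Results (ClientDb SYSNAME, Cnt INT)
EXEC sp_msforeachdb 'if ''?'' like ''commonprefix_%''
    insert into #Results (ClientDb, Cnt)
    select ''?'', count(*) from [?].dbo.Orders'
SELECT * FROM #Results ORDER BY ClientDb
DROP TABLE #Results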
Just wrap the statement you want to execute in an IF NOT IN:
EXEC sp_msforeachdb "
IF '?' NOT IN ('DBs','to','exclude') BEGIN
EXEC sp_whatever_you_want_to
END
"
Each of our database servers contains a "DBA" database that contains tables full of meta-data like this.
A "databases" table would keep a list of all databases on the server, and you could put flag columns to indicate database status (live, archive, system, etc).
Then the first thing your SCRIPT does is to go to your DBA database to get the list of all databases it should be running against.
We even have a nightly maintenance script that makes sure all databases physically on the server are also entered into our "DBA.databases" table, and alerts us if they are not. (Because adding a row to this table should be a manual process)
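A rough sketch of driving a script from that table (the DBA.dbo.databases columns here are assumptions about your own schema, per the flag columns mentioned above):
DECLARE @db SYSNAME, @sql NVARCHAR(MAX)
DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM DBA.dbo.databases WHERE is_live = 1
OPEN dbs
FETCH NEXT FROM dbs INTO @db
WHILE @@FETCH_STATUS = 0
BEGIN
    -- per-database work goes here; this query is a placeholder
    SET @sql = N'SELECT ''' + @db + N''' AS ClientDb, COUNT(*) AS Cnt FROM '
             + QUOTENAME(@db) + N'.dbo.SomeTable'
    EXEC sp_executesql @sql
    FETCH NEXT FROM dbs INTO @db
END
CLOSE dbs
DEALLOCATE dbs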
How about taking the definition of sp_msforeachdb, and tweaking it to fit your purpose? To get the definition you can run this (hit ctrl-T first to put the results pane into Text mode):
sp_helptext sp_msforeachdb
Obviously you would want to create your own version of this sproc rather than overwriting the original ;o)
Doing this type of thing is pretty simple in 2005 SSIS packages. Perhaps you could get an instance set up on a server somewhere.
We have multiple servers set up, so we have a table that denotes what servers will be surveyed. We then pull back, among other things, a list of all databases. This is used for backup scripts.
You could maintain this list of databases and add a few fields for your own purposes. You could have another package or step, depending on how you decide which databases to report on and if it could be done programmatically.
You can get code here for free: http://www.sqlmag.com/Articles/ArticleID/97840/97840.html?Ad=1
We based our system on this code.
