I am currently trying to script a job on SQL Server 2005 that will automate the DBCC CHECKDB process. Basically, I am using a cursor to loop through every database on an instance and run DBCC CHECKDB on each one. Sometimes it works, running through every database and logging the errors in a table I have designed for that purpose, and sometimes it only runs through a few of the databases and stops. Does anyone have any idea what is going on? I have included the code that I use for the cursor.
DECLARE @DbName varchar(100)

DECLARE GetDbName CURSOR
    LOCAL FORWARD_ONLY OPTIMISTIC
FOR
    SELECT name
    FROM sys.databases
    ORDER BY name

OPEN GetDbName
FETCH NEXT FROM GetDbName INTO @DbName

WHILE (@@FETCH_STATUS = 0)
BEGIN
    PRINT @DbName

    INSERT INTO TempLog
    EXEC('DBCC CHECKDB (' + @DbName + ') WITH NO_INFOMSGS, TABLERESULTS')

    FETCH NEXT FROM GetDbName INTO @DbName
END

CLOSE GetDbName
DEALLOCATE GetDbName
Errors of sufficiently high severity (20+) -- in my experience these are most often corruptions such as invalid text pointers -- will stop whatever SQL job is currently running with extreme prejudice, ignoring TRY/CATCH constructs and killing the connection. I would suggest moving this task out to an external process that opens a new connection for each DBCC CHECKDB command, so that the run can continue in the face of high-severity errors.
Use the undocumented sp_MSforeachdb stored proc:
exec sp_msforeachdb 'use ?; dbcc checkdb'
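If you also want the TABLERESULTS output captured, as in the question's cursor version, you could combine this with the same INSERT ... EXEC pattern. A minimal sketch, assuming the logging table lives in an admin database (MyAdminDb.dbo.TempLog is a placeholder for wherever your TempLog table actually is, and its columns must match the TABLERESULTS output):

-- Run CHECKDB against every database and capture the rowset output.
-- MyAdminDb.dbo.TempLog is a placeholder; point it at your own logging table.
EXEC sp_msforeachdb 'INSERT INTO MyAdminDb.dbo.TempLog
EXEC (''DBCC CHECKDB ([?]) WITH NO_INFOMSGS, TABLERESULTS'')'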
Currently I have a few databases that are no longer active (no new entries), but my company does not want to drop them because they want to keep them for reference purposes.
Due to the huge database size, I have a difficult time backing them up and restoring them. (Company policy: from time to time we have to back up and restore to test that the database is still workable.)
So I'm thinking of shrinking the transaction log to reduce the time needed for the process. Is that advisable, given that there are no new entries?
FYI, the transaction log file (.ldf) is much bigger than the actual data file (.mdf).
Whether it is advisable or not, or if there is a better way to do this, please let me know.
I use this script
DECLARE @DBName varchar(255)
DECLARE @LogName varchar(255)
DECLARE @DATABASES_Fetch int

DECLARE DATABASES_CURSOR CURSOR FOR
    SELECT DISTINCT
        name, db_name(s_mf.database_id) dbName
    FROM
        sys.master_files s_mf
    WHERE
        s_mf.state = 0 -- ONLINE
        AND has_dbaccess(db_name(s_mf.database_id)) = 1 -- only databases to which we have access
        AND db_name(s_mf.database_id) NOT IN ('master', 'tempdb', 'model', 'msdb', 'distribution')
        AND db_name(s_mf.database_id) NOT LIKE 'MSDB%'
        AND db_name(s_mf.database_id) NOT LIKE 'Report%'
        AND type = 1 -- log files only
    ORDER BY
        db_name(s_mf.database_id)

OPEN DATABASES_CURSOR
FETCH NEXT FROM DATABASES_CURSOR INTO @LogName, @DBName

WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC ('USE [' + @DBName + '] ; DBCC SHRINKFILE (N''' + @LogName + ''' , 0, TRUNCATEONLY)')
    FETCH NEXT FROM DATABASES_CURSOR INTO @LogName, @DBName
END

CLOSE DATABASES_CURSOR
DEALLOCATE DATABASES_CURSOR
Yes, this is a good idea. The easiest way to do this is to first change the recovery model to Simple (in Management Studio: right-click the database; Properties; Options; Recovery model).
Then you can use Tasks; Shrink; Files to reduce the transaction log space to zero. There are many resources to show you how to shrink a file, but this article explains it nicely:
How to shrink the transaction log
Shrinking a file is not normally recommended, but it is advisable if you do not expect more transactions. This will save you time when you back up, and the database will take up less disk space.
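If you prefer to script it rather than clicking through the GUI, a minimal T-SQL equivalent would look like the sketch below (YourDb and YourDb_log are placeholders; find the log file's logical name with SELECT name FROM sys.database_files WHERE type = 1 in the target database):

-- YourDb / YourDb_log are placeholders for your database and its log file's logical name.
-- Switch to the SIMPLE recovery model so the log space can be reused after each checkpoint.
ALTER DATABASE [YourDb] SET RECOVERY SIMPLE;

-- Shrink the log file down to its minimum size.
USE [YourDb];
DBCC SHRINKFILE (N'YourDb_log', 0, TRUNCATEONLY);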
Every night, I back up my production server and push the backup to my dev server. My dev server then has a job that runs which first checks whether the backup file exists; if so, it checks whether the database exists in dev, and if so drops the database, then restores from the file.
This all works fine unless the file is not yet complete due to a slow transfer, etc. If the file is not completely downloaded when the job runs, the first step sees that it exists and drops the database. The next step tries to restore and of course fails. The next day when the job runs, I would expect that when it checks whether the database exists, it would see that it does not, skip the drop, and just restore. However, what's happening is that the job is unable to drop the database and just fails at that point. This requires manual intervention to get the database restored, which is my problem.
I'm less concerned with the fact of having no database on the server for a day (in theory), as I can tweak the schedule further to restore sooner. What I am concerned with is: why is the IF statement not working to check whether the database exists, so that the job attempts to drop the database regardless? Here's the T-SQL code that I am using:
DECLARE @output INT
DECLARE @SqlPath varchar(500) = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Backup\PROD-01_prod_backup.bak'

EXECUTE master.dbo.xp_fileexist @SqlPath, @output OUT

IF @output = 1
BEGIN
    IF EXISTS (SELECT name FROM master.dbo.sysdatabases WHERE '[' + name + ']' = '[PROD-01]')
    BEGIN
        ALTER DATABASE [PROD-01] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
        DROP DATABASE [PROD-01]
    END

    RESTORE DATABASE [PROD-01] FROM DISK = @SqlPath
END
I'm not sure how this is happening, as I am unable to reproduce it, but a TRY/CATCH block is an ideal solution for this case:
SET XACT_ABORT ON
GO

DECLARE @output INT
DECLARE @SqlPath varchar(500) = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Backup\PROD-01_prod_backup.bak'

EXECUTE master.dbo.xp_fileexist @SqlPath, @output OUT

IF @output = 1
BEGIN
    IF EXISTS (SELECT name FROM master.dbo.sysdatabases WHERE name = 'PROD-01')
    BEGIN TRY
        ALTER DATABASE [PROD-01] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
        DROP DATABASE [PROD-01]
    END TRY
    BEGIN CATCH
        SELECT ERROR_MESSAGE();
    END CATCH

    RESTORE DATABASE [PROD-01] FROM DISK = @SqlPath
END
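Since the root cause is a half-copied backup file, another option worth considering is to verify the file before touching the existing database at all; RESTORE VERIFYONLY checks that the backup set is complete and readable. A minimal sketch reusing the same path (treat it as an outline, not a drop-in replacement for your job step):

DECLARE @SqlPath varchar(500) = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Backup\PROD-01_prod_backup.bak'

BEGIN TRY
    -- Fails if the file is still being copied or is truncated,
    -- so we bail out before dropping the existing database.
    RESTORE VERIFYONLY FROM DISK = @SqlPath
END TRY
BEGIN CATCH
    RAISERROR('Backup file is missing or incomplete; aborting restore.', 16, 1)
    RETURN
END CATCH

-- Safe to proceed with the drop and restore from here.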
With the following script, I've tried to recompile all our stored procedures on the database, because we're upgrading to SQL Server 2012 (from 2005) and I want to make sure beforehand that they don't break.
DECLARE @schema VARCHAR(20),
        @spName VARCHAR(MAX),
        @fullName VARCHAR(MAX)

DECLARE storedProcedureCursor CURSOR FOR
    SELECT SPECIFIC_SCHEMA, SPECIFIC_NAME
    FROM ASZ_GWZ.information_schema.routines
    WHERE routine_type = 'PROCEDURE'

OPEN storedProcedureCursor
FETCH NEXT FROM storedProcedureCursor INTO @schema, @spName

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @fullName = @schema + '.' + @spName
    EXEC sp_recompile @objname = @fullName
    FETCH NEXT FROM storedProcedureCursor INTO @schema, @spName
END

CLOSE storedProcedureCursor
DEALLOCATE storedProcedureCursor
Unfortunately, the result was a little bit different from what I expected, since the stored procedures were only marked for recompilation and didn't actually get recompiled.
Object 'dbo.xyz' was successfully marked for recompilation.
How can I actually recompile all of the stored procedures (or at least all of those now marked for recompilation)?
Thanks in advance
You have to run a stored procedure for it to be recompiled. Compiling a stored proc basically means that its execution plan gets cached. Whether you use sp_recompile or create a stored proc WITH RECOMPILE, you're just telling SQL Server that the execution plan needs to be re-cached; sp_recompile only removes the existing execution plan.
The sp_recompile system stored procedure forces a recompile of a stored procedure the next time that it is run. It does this by deleting the existing plan from the procedure cache, forcing a new plan to be created the next time that the procedure is run.
More Info: https://msdn.microsoft.com/en-us/library/ms190439(v=sql.110).aspx
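If the real goal is to find procedures that will break, rather than to warm the plan cache, another option is to re-parse each module: sys.sp_refreshsqlmodule re-validates a module's definition and raises an error if it no longer compiles (note that it does not work on schema-bound modules, and deferred name resolution means some missing-object errors still only surface at execution time). A sketch along the lines of the question's cursor, printing errors instead of failing on the first one:

USE ASZ_GWZ;

DECLARE @schema sysname,
        @spName sysname,
        @fullName nvarchar(MAX)

DECLARE spCursor CURSOR FOR
    SELECT SPECIFIC_SCHEMA, SPECIFIC_NAME
    FROM information_schema.routines
    WHERE routine_type = 'PROCEDURE'

OPEN spCursor
FETCH NEXT FROM spCursor INTO @schema, @spName

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @fullName = QUOTENAME(@schema) + '.' + QUOTENAME(@spName)
    BEGIN TRY
        -- Re-parses and re-binds the module definition.
        EXEC sys.sp_refreshsqlmodule @name = @fullName
    END TRY
    BEGIN CATCH
        PRINT @fullName + ': ' + ERROR_MESSAGE()
    END CATCH
    FETCH NEXT FROM spCursor INTO @schema, @spName
END

CLOSE spCursor
DEALLOCATE spCursor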
I would like to know how I can switch from one database to another within the same script. I have a script that reads the header information from a SQL Server .BAK file and loads the information into a test database. Once the information is in the temp table (in the test database), I run the following script to get the database name.
This part works fine.
INSERT INTO #HeaderInfo EXEC('RESTORE HEADERONLY
FROM DISK = N''I:\TEST\database.bak''
WITH NOUNLOAD')

DECLARE @databasename varchar(128);
SET @databasename = (SELECT DatabaseName FROM #HeaderInfo);
The problem is that when I try to run the following script, nothing happens. The new database is never selected and the script is still running against the test database.
EXEC ('USE ' + @databasename)
The goal is to switch to the new database (USE NewDatabase) so that the other part of my script (DBCC CHECKDB) can run. That script checks the integrity of the database and saves the results to a temp table.
What am I doing wrong?
You can't expect a USE statement to work in this fashion with dynamic SQL. Dynamic SQL runs in its own context, so as soon as it has executed, you're back in your original context. This means that you'd have to include your SQL statements in the same dynamic SQL execution, such as:
declare #db sysname = 'tempdb';
exec ('use ' + #db + '; dbcc checkdb;')
You can alternatively use fully qualified names for your DB objects and specify the database name in your dbcc command, even with a variable, as in:
declare #db sysname = 'tempdb';
dbcc checkdb (#db);
You can't do this because the scope of EXEC is limited to the dynamic query itself. When the EXEC ends, the context is returned to its original state; the context change only applies inside the EXEC. So you should do your work in one big dynamic statement, like:
DECLARE @str NVARCHAR(MAX)
SET @str = 'select * from table1
USE DatabaseName
select * from table2'
EXEC (@str)
Hi, I am using this script as part of weekly maintenance; please suggest the best approach/scripts for shrinking the log. Currently I am getting an error with the script below:
declare @s nvarchar(4000)
set @s= '
if ''?'' not in (''tempdb'',''master'',''model'',''msdb'')
begin
    use [?]
    Alter database [?] SET Recovery simple
end '
exec sp_msforeachdb @s

set @s= '
if ''?'' not in (''tempdb'',''master'',''model'',''msdb'')
begin
    use [?]
    Declare @LogFileLogicalName sysname
    select @LogFileLogicalName = Name from sys.database_files where Type = 1
    DBCC Shrinkfile(@LogFileLogicalName, 1)
end'
exec sp_msforeachdb @s
Error Description:
ShrinkLog Execute SQL Task Description: Executing the query "declare #s nvarchar(4000) set #s= ' ..." failed with the following error: "Option 'RECOVERY' cannot be set in database 'tempdb'. Cannot shrink log file 2 (DBServices_Log) because total number of logical log files cannot be fewer than 2. DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Note: I am avoiding tempdb (all system databases) in my script, but the error message still mentions tempdb?
This is probably the worst maintenance script I've seen in the past year. A maintenance script that every week breaks the log chain and makes the database unrecoverable?? OMG. Not to mention that the simple premise of shrinking the log as a maintenance task is wrong: if the database log has grown to a certain size, then that size is needed. Schedule log backups more frequently to prevent the growth, but don't schedule log shrink operations.
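If log growth is the underlying problem, more frequent log backups are the fix this answer is pointing at. A minimal sketch of such a backup step (the database name and path are placeholders; a real job would also generate unique file names, e.g. with a timestamp):

-- Back up the log so its inactive portion can be reused instead of growing the file.
-- Schedule this as a frequent SQL Agent job (e.g. every 15-30 minutes).
-- YourDb and the disk path are placeholders.
BACKUP LOG [YourDb]
TO DISK = N'E:\LogBackups\YourDb_log.trn';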
You can avoid the 5058 error by using an exec statement to alter the database recovery model. The following script will set each user database to a simple recovery model. (It uses Jimmy's answer to detect system databases.)
exec sp_msforeachdb 'if exists (select name from sys.databases d where case when d.name in (''master'',''model'',''msdb'',''tempdb'') then 1 else d.is_distributor end = 0
    and name = ''?'' and recovery_model_desc != ''SIMPLE'')
begin
    declare @previousRecoveryModel varchar(100) = (select recovery_model_desc from sys.databases where name = ''?'');
    print ''Changing recovery model for ? from '' + @previousRecoveryModel + '' to SIMPLE.'';

    use master;

    -- Using exec avoids a compile-time 5058 error about tempdb, in a branch that will never be executed in this code.
    exec (''alter database [?] set recovery simple'');
end';
Then you won't get the compile-time error about tempdb, because the exec statement compiles its command only after the ? placeholder has already been substituted with the database name.