FATAL: the database system is in recovery mode - postgresql-13

I am getting the errors below every two or three days (during periods of high database usage), and the postmaster.pid file is also changed automatically. My database version is PostgreSQL 13.
Could not get JDBC Connection; nested exception is org.postgresql.util.PSQLException: FATAL: the database system is in recovery mode
Out of memory: Kill process 2591 (postgres)
LOG: terminating any other active server processes
FATAL: the database system is in recovery mode
Below are my configuration parameters. My server capacity is 4 CPUs and 16 GB RAM. Please help me.
ALTER SYSTEM SET max_connections = '400';
ALTER SYSTEM SET effective_io_concurrency = '100';
ALTER SYSTEM SET log_min_duration_statement = '3000';
ALTER SYSTEM SET log_destination = 'stderr,csvlog';
ALTER SYSTEM SET effective_cache_size = '8GB';
ALTER SYSTEM SET work_mem = '6MB';
ALTER SYSTEM SET max_wal_size = '2GB';
ALTER SYSTEM SET max_worker_processes = '4';
ALTER SYSTEM SET max_parallel_workers = '4';
ALTER SYSTEM SET max_parallel_workers_per_gather = '4';
ALTER SYSTEM SET wal_keep_size = '500MB';
Is this a big issue for my application, and which parameters do I need to change to overcome it?
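For context on the OOM kill: with max_connections = 400 and work_mem = 6MB, each backend can allocate work_mem once per sort or hash operation in a query, so worst-case memory use under heavy load can exceed 16 GB; the Linux OOM killer then terminates a backend, and PostgreSQL restarts all server processes and runs crash recovery, which is the "recovery mode" error the application sees. A minimal sketch to confirm which values are actually in effect (ALTER SYSTEM settings apply only after a reload or restart):
-- Check the settings currently in effect and where they came from
SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('max_connections', 'work_mem', 'shared_buffers',
               'effective_cache_size', 'maintenance_work_mem');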

Related

Cannot enable Query Store on SQL Azure database

One of our Azure SQL databases ran out of space recently which I believe resulted in Query Store switching over to "READ_ONLY".
I increased the size of the database; however, this has not resulted in the status changing, even though running this query:
SELECT desired_state_desc, actual_state_desc, readonly_reason, current_storage_size_mb, max_storage_size_mb
FROM sys.database_query_store_options
Suggests that there is enough space available:
desired_state_desc  actual_state_desc  readonly_reason  current_storage_size_mb  max_storage_size_mb
READ_WRITE          READ_ONLY          524288           522                      1024
I tried to alter the Query Store status to Read_Write by running this statement (as database server admin user):
ALTER DATABASE [QueryStoreDB]
SET QUERY_STORE (OPERATION_MODE = READ_WRITE)
However, the statement failed with the following error:
User does not have permission to alter database 'QueryStoreDB', the database does not exist, or the database is not in a state that allows access checks.
Has anybody managed to switch the SQL Azure Query Store to READ_WRITE so performance statistics start being collected again?
First, let’s try to clear the query store:
ALTER DATABASE [QueryStoreDB]
SET QUERY_STORE CLEAR;
GO
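After clearing, you may also need to switch the operation mode back to READ_WRITE explicitly, using the same statement as in the question:
ALTER DATABASE [QueryStoreDB]
SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
GO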
If that does not work, run a consistency check:
ALTER DATABASE [DatabaseOne] SET QUERY_STORE = OFF;
GO
EXEC sp_query_store_consistency_check
GO
ALTER DATABASE [DatabaseOne] SET QUERY_STORE = ON;
GO
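You can then re-check the state with the same query from the question:
SELECT desired_state_desc, actual_state_desc, readonly_reason
FROM sys.database_query_store_options;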
You can find more options for troubleshooting this issue in the following article:

ORACLE Database | remove tablespace with missing datafile

I mistakenly removed the datafiles before I dropped the tablespace, but the tablespace still occupies a large amount of space. I need to remove it; is there any method?
The drop fails with:
DROP TABLESPACE abc;
*
ERROR at line 1:
ORA-01116: error in opening database file 8
ORA-01110: data file 8: '/data/oradata/oracle/abc.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
If your datafile is held inside a PDB, you will have to run the following commands:
SHUTDOWN ABORT
STARTUP
ALTER PLUGGABLE DATABASE $MyPDB OPEN;
The last command should fail with an ORA-01110 error.
And if you try ALTER DATABASE DATAFILE $datafileNumber OFFLINE DROP;
you will encounter an ORA-01516.
This is because you are trying to DROP a datafile on the CDB instead of the PDB.
To do this properly, you have to modify the session to target the PDB:
ALTER SESSION SET CONTAINER=$MyPDB;
Now you can drop the datafile and open the database :
ALTER DATABASE DATAFILE $datafileNumber OFFLINE DROP;
ALTER PLUGGABLE DATABASE $MyPDB OPEN;
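To confirm the PDB opened, you can check its state afterwards (a quick sketch using the standard v$pdbs view):
SELECT name, open_mode FROM v$pdbs;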
References
https://blogs.oracle.com/robertgfreeman/pdb-recovery-your-pdb-wont-open-because-a-datafile-is-missing
You can follow the steps given in this Oracle forum:
Follow the steps below:
1) Shutdown abort
2) sqlplus sys/xxx as sysdba
3) Alter database mount
4) Alter database datafile '' offline drop;
5) Alter database open
Try to recover the datafile. First, identify the name of the tablespace:
select tablespace_name from dba_data_files where file_id = 8;
Then change the status of the tablespace to offline so you can run RMAN (Recovery Manager):
alter tablespace test offline immediate;
After that, you have to run RMAN to recover the file. For more information on how to do that, see the Burleson article on RMAN.
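A minimal RMAN sketch of that recovery, assuming a usable backup of datafile 8 exists (the file number and tablespace name are taken from the example above):
RMAN> RESTORE DATAFILE 8;
RMAN> RECOVER DATAFILE 8;
RMAN> SQL 'ALTER TABLESPACE test ONLINE';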

How to manage environment-specific values in build process?

I want to automate the build process of server instance that I maintain. In version control I have a script containing every single command and configuration I used to build the instance in production.
Now I want to write a master build script that applies all these scripts to a target instance.
While I try to keep my development environment as production-like as possible, there are some values that will always be different. To handle this, the build script should accept environment-specific values and pass the values to the relevant build steps.
The server instance has one user database. In production, the user database files are created on a drive that does not exist in my development environment, and files are larger than I have free space for in development.
When I set up the instance in production, I used this script. This is what I currently have in version control:
USE [master]
GO
CREATE DATABASE [QuoteProcessor] ON PRIMARY (
NAME = N'System_Data',
FILENAME = N'G:\SQLData\QuoteProcessor\System_Data.mdf',
SIZE = 500 MB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 10%
),
FILEGROUP [DATA] DEFAULT (
NAME = N'QuoteProcessor_Data',
FILENAME = N'G:\SQLData\QuoteProcessor\QuoteProcessor_Data.ndf',
SIZE = 600 GB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 10%
)
LOG ON (
NAME = N'QuoteProcessor_Log',
FILENAME = N'G:\SQLLogs\QuoteProcessor\QuoteProcessor_Log.ldf',
SIZE = 100 GB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 10%
);
ALTER DATABASE [QuoteProcessor] SET COMPATIBILITY_LEVEL = 100
GO
ALTER DATABASE [QuoteProcessor] SET ANSI_NULL_DEFAULT OFF
GO
ALTER DATABASE [QuoteProcessor] SET ANSI_NULLS OFF
GO
ALTER DATABASE [QuoteProcessor] SET ANSI_PADDING OFF
GO
ALTER DATABASE [QuoteProcessor] SET ANSI_WARNINGS OFF
GO
ALTER DATABASE [QuoteProcessor] SET ARITHABORT OFF
GO
ALTER DATABASE [QuoteProcessor] SET AUTO_CLOSE OFF
GO
ALTER DATABASE [QuoteProcessor] SET AUTO_CREATE_STATISTICS ON
GO
ALTER DATABASE [QuoteProcessor] SET AUTO_SHRINK OFF
GO
ALTER DATABASE [QuoteProcessor] SET AUTO_UPDATE_STATISTICS ON
GO
ALTER DATABASE [QuoteProcessor] SET CURSOR_CLOSE_ON_COMMIT OFF
GO
ALTER DATABASE [QuoteProcessor] SET CURSOR_DEFAULT GLOBAL
GO
ALTER DATABASE [QuoteProcessor] SET CONCAT_NULL_YIELDS_NULL OFF
GO
ALTER DATABASE [QuoteProcessor] SET NUMERIC_ROUNDABORT OFF
GO
ALTER DATABASE [QuoteProcessor] SET QUOTED_IDENTIFIER OFF
GO
ALTER DATABASE [QuoteProcessor] SET RECURSIVE_TRIGGERS OFF
GO
ALTER DATABASE [QuoteProcessor] SET DISABLE_BROKER
GO
ALTER DATABASE [QuoteProcessor] SET AUTO_UPDATE_STATISTICS_ASYNC OFF
GO
ALTER DATABASE [QuoteProcessor] SET DATE_CORRELATION_OPTIMIZATION OFF
GO
ALTER DATABASE [QuoteProcessor] SET TRUSTWORTHY OFF
GO
ALTER DATABASE [QuoteProcessor] SET ALLOW_SNAPSHOT_ISOLATION OFF
GO
ALTER DATABASE [QuoteProcessor] SET PARAMETERIZATION SIMPLE
GO
ALTER DATABASE [QuoteProcessor] SET READ_COMMITTED_SNAPSHOT ON
GO
ALTER DATABASE [QuoteProcessor] SET HONOR_BROKER_PRIORITY OFF
GO
ALTER DATABASE [QuoteProcessor] SET RECOVERY SIMPLE
GO
ALTER DATABASE [QuoteProcessor] SET MULTI_USER
GO
ALTER DATABASE [QuoteProcessor] SET PAGE_VERIFY CHECKSUM
GO
ALTER DATABASE [QuoteProcessor] SET DB_CHAINING OFF
GO
USE [master]
GO
ALTER DATABASE [QuoteProcessor] SET READ_WRITE
GO
In the development environment, I can use the same filegroups, but I have to use different paths and different sizes for the database files.
I see several solutions:
Edit the script by hand for every environment. I can't really automate this, or use it to track changes to environment-specific values.
Make one copy of the script for each environment. I could automate the selection of script depending on environment. This would duplicate the specification of things that should never change independently, like all the ALTER DATABASE statements.
Abstract away environment-specific values using scripting variables and define those values in another place, like an environment configuration file.
I think option 3 is the cleanest solution. It's the one I explore here.
For example, I could use sqlcmd scripting variables to replace the CREATE DATABASE statement with this:
CREATE DATABASE [QuoteProcessor] ON PRIMARY (
NAME = N'System_Data',
FILENAME = N'$(PrimaryDataFileFullPath)',
SIZE = $(PrimaryDataFileSize),
MAXSIZE = UNLIMITED,
FILEGROWTH = 10%
),
FILEGROUP [DATA] DEFAULT (
NAME = N'QuoteProcessor_Data',
FILENAME = N'$(UserDataFileFullPath)',
SIZE = $(UserDataFileSize),
MAXSIZE = UNLIMITED,
FILEGROWTH = 10%
)
LOG ON (
NAME = N'QuoteProcessor_Log',
FILENAME = N'$(LogFileFullPath)',
SIZE = $(LogFileSize),
MAXSIZE = UNLIMITED,
FILEGROWTH = 10%
);
And to create the database in production, I could invoke the script like this:
sqlcmd -i QuoteProcessor.sql -v PrimaryDataFileFullPath="G:\SQLData\QuoteProcessor\System_Data.mdf" -v PrimaryDataFileSize="500 MB" -v UserDataFileFullPath="G:\SQLData\QuoteProcessor\QuoteProcessor_Data.ndf" -v UserDataFileSize="600 GB" -v LogFileFullPath="G:\SQLLogs\QuoteProcessor\QuoteProcessor_Log.ldf" -v LogFileSize="100 GB"
The master build script would read the values from a configuration file and pass them to sqlcmd.
There would be one configuration file for production, one for development; one for every distinct environment in my organization.
I haven't decided how to store the environment-specific values yet, but I was thinking that an INI file or an XML file would make it easy.
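For example, a development invocation of the same script might look like this (the local paths and smaller sizes here are illustrative, not real values from any environment):
sqlcmd -i QuoteProcessor.sql -v PrimaryDataFileFullPath="C:\Dev\SQLData\QuoteProcessor\System_Data.mdf" -v PrimaryDataFileSize="500 MB" -v UserDataFileFullPath="C:\Dev\SQLData\QuoteProcessor\QuoteProcessor_Data.ndf" -v UserDataFileSize="10 GB" -v LogFileFullPath="C:\Dev\SQLLogs\QuoteProcessor\QuoteProcessor_Log.ldf" -v LogFileSize="1 GB"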
Can anyone else offer advice on solving a similar problem? I'm not sure if this is the best way to do what I want. Is there an easier or better-supported way of managing environment-specific values for this problem? Should I be using some tool that manages this kind of thing for me?
This is just my take on these options:
1. Edit the script by hand for every environment. I can't really automate this, or use it to track changes to environment-specific values.
I would recommend against this. It allows people to accidentally make changes to code that you didn't intend for them to touch. Not that the others prevent it, but this welcomes the most risk.
2. Make one copy of the script for each environment. I could automate the selection of script depending on environment. This would duplicate the specification of things that should never change independently, like all the ALTER DATABASE statements.
This works, but you run into a problem when servers change: depending on your criteria for determining what is a dev server versus a prod server, the script may be outdated.
3. Abstract away environment-specific values using scripting variables and define those values in another place, like an environment configuration file.
This is how SSDT (Microsoft SQL Server Data Tools) projects do it.
There's also a hybrid approach where you can abstract away the environment-specific values without an environment configuration file, by using template parameters (again, in SQL Server at least):
http://msdn.microsoft.com/en-us/library/hh230912.aspx

Error restoring database backup

I am getting an error using SQL Server 2012 when restoring a backup made with a previous version (SQL Server 2008). I actually have several backup files of the same database (taken at different times in the past). The newest ones are restored without any problems; however, one of them gives the following error:
System.Data.SqlClient.SqlError: Directory lookup for the file
"C:\PROGRAM FILES\MICROSOFT SQL
SERVER\MSSQL.1\MSSQL\DATA\MYDB_ABC.MDF" failed with the operating
system error 3(The system cannot find the path specified.).
(Microsoft.SqlServer.SmoExtended)
This is an x64 machine, and my database file(s) are in this location: c:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL.
I do not understand why it tries to restore on MSSQL.1 and not MSSQL11.MSSQLSERVER.
Sounds like the backup was taken on a machine whose paths do not match yours. Try performing the restore using T-SQL instead of the UI. Also make sure that the paths you're specifying actually exist and that there isn't already a copy of these mdf/ldf files in there.
RESTORE DATABASE MYDB_ABC FROM DISK = 'C:\path\file.bak'
WITH MOVE 'mydb' TO 'c:\valid_data_path\MYDB_ABC.mdf',
MOVE 'mydb_log' TO 'c:\valid_log_path\MYDB_ABC.ldf';
When restoring, under Files, check 'Relocate all files to folder'
The backup stores the original location of the database files and, by default, attempts to restore to the same location. Since your new server installation is in new directories and, presumably, the old directories no longer exist, you need to alter the directories from the defaults to match the location you wish it to use.
Depending on how you are restoring the database, the way to do this will differ. If you're using SSMS, look through the tabs and lists until you find the list of files and their associated disk locations - you can then edit those locations before restoring.
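If you are not sure which logical file names to use with MOVE, you can list them from the backup first (a minimal sketch; the path is illustrative):
RESTORE FILELISTONLY
FROM DISK = N'C:\path\file.bak';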
I have managed to do this from code. This alone was not enough:
Restore bkp = new Restore();
bkp.PercentCompleteNotification = 1;
bkp.Action = RestoreActionType.Database;
bkp.Database = sDatabase;
bkp.ReplaceDatabase = true;
The RelocateFiles property must be filled with the names and paths of the files to be relocated. For each file you must specify the logical name of the file and the new physical path. So what I did was look at the PrimaryFilePath of the database I was restoring to and use that as the physical location. Something like this:
if (!string.IsNullOrEmpty(sDataFileName) && !File.Exists(sDataFileName))
{
if (originaldb != null)
{
if (string.Compare(Path.GetDirectoryName(sDataFileName), originaldb.PrimaryFilePath, true) != 0)
{
string sPhysicalDataFileName = Path.Combine(originaldb.PrimaryFilePath, sDatabase + ".MDF");
bkp.RelocateFiles.Add(new RelocateFile(sLogicalDataFileName, sPhysicalDataFileName));
}
}
}
Same for the log file.
I had the same problem, and this fixed it without any C# code:
USE [master]
ALTER DATABASE [MyDb]
SET SINGLE_USER WITH ROLLBACK IMMEDIATE
RESTORE DATABASE [MyDb]
FROM DISK = N'D:\backups\mydb.bak'
WITH FILE = 1,
MOVE N'MyDb' TO N'c:\valid_data_path\MyDb.mdf',
MOVE N'MyDb_log' TO N'c:\valid_log_path\MyDb.ldf',
NOUNLOAD,
REPLACE,
STATS = 5
ALTER DATABASE [MyDb] SET MULTI_USER
GO
As has already been said a few times, restoring a backup where the new and old paths for the mdf and ldf files don't match can cause this error. There are several good examples here already of how to deal with that in SQL; none of them worked for me, however, until I realised that in my case I needed to include the '.mdf' and '.ldf' extensions in the 'from' part of the MOVE clause, e.g.:
RESTORE DATABASE [SomeDB]
FROM DISK = N'D:\SomeDB.bak'
WITH MOVE N'SomeDB.mdf' TO N'D:\SQL Server\MSSQL12.MyInstance\MSSQL\DATA\SomeDB.mdf',
MOVE N'SomeDb_log.ldf' TO N'D:\SQL Server\MSSQL12.MyInstance\MSSQL\DATA\SomeDB_log.ldf'
Hope that saves someone some pain, I could not understand why SQL was suggesting I needed to use the WITH MOVE option when I already was doing so.
Please try to uncheck the “Tail-Log Backup” option on the Options page of the Restore Database dialog
There is some version issue in this. You can migrate your database to 2012 by two other methods:
1) Take the database offline, copy the .mdf and .ldf files to the target server's data folder, and attach the database (a T-SQL sketch follows below). Refer to this:
https://dba.stackexchange.com/questions/30440/how-do-i-attach-a-database-in-sql-server
2) Create a script of the whole database with schema and data and run it on the target server (a very slow process that takes time). Refer to this:
Generate script in SQL Server Management Studio
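For method 1, a minimal T-SQL sketch (the database name and file paths are illustrative):
-- On the source server:
ALTER DATABASE MyDb SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- Copy the .mdf/.ldf files, then on the target server:
CREATE DATABASE MyDb
ON (FILENAME = N'D:\Data\MyDb.mdf'),
   (FILENAME = N'D:\Data\MyDb_log.ldf')
FOR ATTACH;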
Try restarting the SQL Service. Worked for me.
Just in case this is useful for someone working directly with Powershell (using the SMO library), in this particular case there were secondary data files as well. I enhanced the script a little by killing any open processes and then doing the restore.
Import-module SQLPS
$svr = New-Object ("Microsoft.SqlServer.Management.Smo.Server") "server name";
$svr.KillAllProcesses("database_name");
$RelocateData1 = New-Object "Microsoft.SqlServer.Management.Smo.RelocateFile, Microsoft.SqlServer.SmoExtended, Version=13.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" ("primary_logical_name","C:\...\SQLDATA\DATA\database_name.mdf")
$RelocateData2 = New-Object "Microsoft.SqlServer.Management.Smo.RelocateFile, Microsoft.SqlServer.SmoExtended, Version=13.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" ("secondary_logical_name_2","C:\...\SQLDATA\DATA\secondary_file_2.mdf")
$RelocateData3 = New-Object "Microsoft.SqlServer.Management.Smo.RelocateFile, Microsoft.SqlServer.SmoExtended, Version=13.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" ("secondary_logical_name_3","C:\...\SQLDATA\DATA\secondary_file_3.mdf")
$RelocateLog = New-Object "Microsoft.SqlServer.Management.Smo.RelocateFile, Microsoft.SqlServer.SmoExtended, Version=13.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" ("database_name_log","C:\...\SQLDATA\LOGS\database_name_log.ldf")
Restore-SqlDatabase -ServerInstance "server-name" -Database "database_name" -BackupFile "\\BACKUPS\\database_name.bak" -RelocateFile @($RelocateData1, $RelocateData2, $RelocateData3, $RelocateLog) -ReplaceDatabase
You should remove these lines from your script.
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'StudentManagement', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.SQLEXPRESS\MSSQL\DATA\StudentManagement.mdf' , SIZE = 10240KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
LOG ON
( NAME = N'StudentManagement_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.SQLEXPRESS\MSSQL\DATA\StudentManagement_log.ldf' , SIZE = 5696KB , MAXSIZE = 2048GB , FILEGROWTH = 10%)
GO
ALTER DATABASE [StudentManagement] SET COMPATIBILITY_LEVEL = 110
GO
IF (1 = FULLTEXTSERVICEPROPERTY('IsFullTextInstalled'))
begin
EXEC [StudentManagement].[dbo].[sp_fulltext_database] @action = 'enable'
end
GO
ALTER DATABASE [StudentManagement] SET ANSI_NULL_DEFAULT OFF
GO
ALTER DATABASE [StudentManagement] SET ANSI_NULLS OFF
GO
ALTER DATABASE [StudentManagement] SET ANSI_PADDING OFF
GO
ALTER DATABASE [StudentManagement] SET ANSI_WARNINGS OFF
GO
ALTER DATABASE [StudentManagement] SET ARITHABORT OFF
GO
ALTER DATABASE [StudentManagement] SET AUTO_CLOSE OFF
GO
ALTER DATABASE [StudentManagement] SET AUTO_CREATE_STATISTICS ON
GO
ALTER DATABASE [StudentManagement] SET AUTO_SHRINK OFF
GO
ALTER DATABASE [StudentManagement] SET AUTO_UPDATE_STATISTICS ON
GO
ALTER DATABASE [StudentManagement] SET CURSOR_CLOSE_ON_COMMIT OFF
GO
ALTER DATABASE [StudentManagement] SET CURSOR_DEFAULT GLOBAL
GO
ALTER DATABASE [StudentManagement] SET CONCAT_NULL_YIELDS_NULL OFF
GO
ALTER DATABASE [StudentManagement] SET NUMERIC_ROUNDABORT OFF
GO
ALTER DATABASE [StudentManagement] SET QUOTED_IDENTIFIER OFF
GO
ALTER DATABASE [StudentManagement] SET RECURSIVE_TRIGGERS OFF
GO
ALTER DATABASE [StudentManagement] SET DISABLE_BROKER
GO
ALTER DATABASE [StudentManagement] SET AUTO_UPDATE_STATISTICS_ASYNC OFF
GO
ALTER DATABASE [StudentManagement] SET DATE_CORRELATION_OPTIMIZATION OFF
GO
ALTER DATABASE [StudentManagement] SET TRUSTWORTHY OFF
GO
ALTER DATABASE [StudentManagement] SET ALLOW_SNAPSHOT_ISOLATION OFF
GO
ALTER DATABASE [StudentManagement] SET PARAMETERIZATION SIMPLE
GO
ALTER DATABASE [StudentManagement] SET READ_COMMITTED_SNAPSHOT OFF
GO
ALTER DATABASE [StudentManagement] SET HONOR_BROKER_PRIORITY OFF
GO
ALTER DATABASE [StudentManagement] SET RECOVERY SIMPLE
GO
ALTER DATABASE [StudentManagement] SET MULTI_USER
GO
ALTER DATABASE [StudentManagement] SET PAGE_VERIFY CHECKSUM
GO
ALTER DATABASE [StudentManagement] SET DB_CHAINING OFF
GO
ALTER DATABASE [StudentManagement] SET FILESTREAM( NON_TRANSACTED_ACCESS = OFF )
GO
ALTER DATABASE [StudentManagement] SET TARGET_RECOVERY_TIME = 0 SECONDS
This usually happens when you are using one SQL Server Management Studio instance for the backup (connected to the old server) and the restore (connected to the new one). Just make sure you are executing the restore on the correct server: either check the server name and IP in the left pane of the UI, or double-check the connection before running the restore.
If you're doing this with C#, and the physical paths are not the same, you need to use RelocateFiles, as one answer here also mentioned.
For most cases, the below code will work, assuming:
You're just restoring a backup of a database from elsewhere, otherwise meant to be identical. For example, a copy of production to a local Db.
You aren't using an atypical database layout, for example one where the rows files are spread across multiple files on multiple disks.
In addition, the below is only necessary on the first restore. Once a single successful restore occurs, the file mapping below will already be set up for you in SQL Server. But the first time - restoring a .bak file to a blank db - you basically have to say, "Yes, use the Db files in their default, local locations, instead of freaking out," and you need to tell it to keep things in the same place by, oddly enough, telling it to relocate them:
var dbDataFile = db.FileGroups[0].Files[0];
restore.RelocateFiles.Add(new RelocateFile(dbDataFile.Name, dbDataFile.FileName));
var dbLogFile = db.LogFiles[0];
restore.RelocateFiles.Add(new RelocateFile(dbLogFile.Name, dbLogFile.FileName));
To better clarify what a typical case would be, and how you'd do the restore, here's the full code for a typical restore of a .bak file to a local machine:
var smoServer = new Microsoft.SqlServer.Management.Smo.Server(
new Microsoft.SqlServer.Management.Common.ServerConnection(sqlServerInstanceName));
var db = smoServer.Databases[dbName];
if (db == null)
{
db = new Microsoft.SqlServer.Management.Smo.Database(smoServer, dbName);
db.Create();
}
// Set up the restore from the backup file
var restore = new Microsoft.SqlServer.Management.Smo.Restore();
restore.Devices.AddDevice(backupFileName, DeviceType.File);
restore.Database = dbName;
restore.FileNumber = 0;
restore.Action = RestoreActionType.Database;
restore.ReplaceDatabase = true;
var dbDataFile = db.FileGroups[0].Files[0];
restore.RelocateFiles.Add(new RelocateFile(dbDataFile.Name, dbDataFile.FileName));
var dbLogFile = db.LogFiles[0];
restore.RelocateFiles.Add(new RelocateFile(dbLogFile.Name, dbLogFile.FileName));
restore.SqlRestore(smoServer);
db.SetOnline();
smoServer.Refresh();
db.Refresh();
This code will work whether you've manually restored this Db before, created one manually with just the name and no data, or done nothing - started with a totally blank machine, with just Sql Server installed and no databases whatsoever.
Please change the .mdf file path. Just create a folder on any drive, i.e., on the D: drive, create a folder with a custom name (dbase) and point the path to the new folder; MSSQL will automatically create the files. So change
"C:\PROGRAM FILES\MICROSOFT SQL SERVER\MSSQL.1\MSSQL\DATA\MYDB_ABC.MDF"
to
"D:\dbase\MYDB_ABC.MDF"

How long should SET READ_COMMITTED_SNAPSHOT ON take?

How long should it take to run
ALTER DATABASE [MySite] SET READ_COMMITTED_SNAPSHOT ON
I just ran it and it's taken 10 minutes.
How can I check if it is applied?
You can check the status of the READ_COMMITTED_SNAPSHOT setting using the sys.databases view. Check the value of the is_read_committed_snapshot_on column. Already asked and answered.
As for the duration, Books Online states that there can't be any other connections to the database when this takes place, but it doesn't require single-user mode. So you may be blocked by other active connections. Run sp_who (or sp_who2) to see what else is connected to that database.
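For example, using the database name from the question:
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'MySite';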
Try this:
ALTER DATABASE generic SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE
OK (I am the original questioner), so it turns out this whole time I didn't even have the darn thing enabled.
Here's the ultimate code to run to enable snapshot mode and make sure it is enabled.
SELECT is_read_committed_snapshot_on, snapshot_isolation_state_desc, snapshot_isolation_state FROM sys.databases WHERE name = 'shipperdb'
ALTER DATABASE shipperdb SET allow_snapshot_isolation ON
ALTER DATABASE shipperdb SET SINGLE_USER WITH ROLLBACK IMMEDIATE
ALTER DATABASE shipperdb SET read_committed_snapshot ON
ALTER DATABASE shipperdb SET MULTI_USER
SELECT is_read_committed_snapshot_on, snapshot_isolation_state_desc, snapshot_isolation_state FROM sys.databases WHERE name = 'shipperdb'
This works even with connections active (presumably you're fine with them getting kicked out).
You can see the before and after state and this should run almost immediately.
IMPORTANT:
The option READ_COMMITTED_SNAPSHOT above corresponds to IsolationLevel.ReadCommitted in .NET
The option ALLOW_SNAPSHOT_ISOLATION above corresponds to IsolationLevel.Snapshot in .NET
There's a great article about the different versioning models:
.NET Tips:
Looks like IsolationLevel.ReadCommitted is allowed in code even if not enabled by the database. No warning is thrown. So do yourself a favor and be sure it is turned on before you assume it is, like I did for 3 years!!!
If you're using C# you probably want the ReadCommitted IsolationLevel and not Snapshot - unless you are doing writes in this transaction.
READ COMMITTED SNAPSHOT does optimistic reads and pessimistic writes. In contrast, SNAPSHOT does optimistic reads and optimistic writes. (from here)
bool snapshotEnabled = true;
using (var t = new TransactionScope(TransactionScopeOption.Required,
new TransactionOptions
{
IsolationLevel = IsolationLevel.ReadCommitted
}))
{
using (var shipDB = new ShipperDBDataContext())
{
}
}
In addition, you may get an error about being 'unable to promote' a transaction. Search for 'promotion' in Introducing System.Transactions in the .NET Framework 2.0.
Unless you're doing something special like connecting to an external (or second) database, something as simple as creating a new DataContext can cause this. I had a cache that 'spun up' its own DataContext at initialization, and this was trying to escalate the transaction to a full distributed one.
The solution was simple:
using (var tran = new TransactionScope(TransactionScopeOption.Suppress))
{
using (var shipDB = new ShipperDBDataContext())
{
// initialize cache
}
}
See also the Deadlocked article by @CodingHorror.
Try this code:
if (charindex('Microsoft SQL Server 2005', @@version) > 0)
begin
declare @sql varchar(8000)
select @sql = '
ALTER DATABASE ' + DB_NAME() + ' SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE ' + DB_NAME() + ' SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE ' + DB_NAME() + ' SET MULTI_USER;'
Exec(@sql)
end
I tried the command:
ALTER DATABASE MyDB SET READ_COMMITTED_SNAPSHOT ON
GO
against a dev box, but it took 10+ minutes, so I killed it.
I then found this:
https://willwarren.com/2015/10/12/sql-server-read-committed-snapshot/
and used his code block (which took about 1:26 to run):
USE master
GO
/**
* Cut off live connections
* This will roll back any open transactions after 30 seconds and
* restricts access to the DB to logins with sysadmin, dbcreator or
* db_owner roles
*/
ALTER DATABASE MyDB SET RESTRICTED_USER WITH ROLLBACK AFTER 30 SECONDS
GO
-- Enable RCSI for MyDB
ALTER DATABASE MyDB SET READ_COMMITTED_SNAPSHOT ON
GO
-- Allow connections to be established once again
ALTER DATABASE MyDB SET MULTI_USER
GO
-- Check the status afterwards to make sure it worked
SELECT is_read_committed_snapshot_on
FROM sys.databases
WHERE [name] = 'MyDB'
Try using the master database before altering the current database.
USE Master
GO
ALTER DATABASE [YourDatabase] SET READ_COMMITTED_SNAPSHOT ON
GO
It didn't take a second for me when I changed my DB to single user.
All you need to do is this:
ALTER DATABASE xyz SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
No need to put the database into single user mode.
You will rollback uncommitted transactions though.
Try shutting off the other SQL services so that only the SQL Server service is running.
Mine ran for 5 minutes, then I cancelled it because it was obvious nothing was happening. It's a brand new server, so there are no other users connected. I shut off SQL Reporting Services and then ran it again; it took less than a second to complete.
With "ROLLBACK IMMEDIATE" it took about 20-30 seconds on my db which is 300GB.
ALTER DATABASE DBNAME SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE
