SQL Server - single-user database vs offline database

I am trying to restore a database, but it gives me an error about needing exclusive rights.
I can either set my database to single-user mode or take it offline in order to restore it.
So my question is: what is the difference between these two?
Which is the better way to get around the error?

Database single-user mode:
Single-user mode specifies that only one user at a time can access the database; it is generally used for maintenance actions.
Limitations and restrictions:
If other users are connected to the database at the time that you set the database to single-user mode, their connections to the database will be closed without warning.
The database remains in single-user mode even if the user that set the option logs off. At that point, a different user, but only one, can connect to the database.
Database offline mode:
The database is unavailable. A database becomes offline by explicit user action and remains offline until additional user action is taken.

Offline means that nobody can access the database. Single-user means that only one person can, presumably you. I don't think it matters which way you go, to be honest.
A better way to get around these two options? It would take a bit of extra work to allow only reads while restoring, and it gets really complicated if you want the database to allow writes while you run a restore operation.
If there are no open connections to the database when you do the restore, you can leave it online, but that is probably not a good practice, depending on your specific situation.
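For reference, a minimal T-SQL sketch of both approaches; the database name and backup path here are hypothetical placeholders. Run it from a connection whose current database is master, not the database being restored:

-- Option 1: single-user mode, restore, then back to multi-user.
ALTER DATABASE [MyDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- kills other connections
RESTORE DATABASE [MyDb] FROM DISK = N'C:\Backups\MyDb.bak' WITH REPLACE;
ALTER DATABASE [MyDb] SET MULTI_USER;

-- Option 2: take the database offline first; the restore brings it back online.
ALTER DATABASE [MyDb] SET OFFLINE WITH ROLLBACK IMMEDIATE;
RESTORE DATABASE [MyDb] FROM DISK = N'C:\Backups\MyDb.bak' WITH REPLACE;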

Related

SQL Server database goes into single-user mode while dropping it

Our application uses multiple databases, and these databases can be created by the user through the UI. Basically, these databases are created after loading data from data files (an ETL process).
We delete these databases when they are no longer required.
We use the following statement to delete them:
ALTER DATABASE [{0}] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [{0}];
Recently we started facing an issue where the database goes into single-user mode but is not deleted, and the application stops working because only one connection can be active at a time in this mode. The issue occurs very rarely and is not consistent at all.
We don't have any clue about what is happening here.
If anybody has faced such an issue, or knows what the possible cause might be, please let me know.
We are using SQL Server 2008 R2.
Regards,
Varun
You're probably running into a race condition. Between the time the database is set to single-user and the time the DROP DATABASE command is issued, if any other connection gets in first, you will be unable to drop the database. In a high-volume situation this can be difficult to resolve.
The best bet is to set the database OFFLINE instead of putting it into SINGLE_USER. Of course, in that case you'll have to delete the database files manually, because dropping an offline database does not remove its files from disk.
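A minimal sketch of that approach, with a hypothetical database name standing in for the {0} placeholder above:

ALTER DATABASE [MyDb] SET OFFLINE WITH ROLLBACK IMMEDIATE;  -- no new connection can sneak in
DROP DATABASE [MyDb];  -- removes the catalog entry; the .mdf/.ldf files stay on disk
-- Then delete the physical files out-of-band (e.g. from the application or a cleanup job).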
I work in a particularly large environment where, at any given time, 50+ machines are hammering the database with connection attempts. Even with delays set between connections, the number of attempts being made is extremely large.
In our case, we handled this race condition by disabling the service account that was attempting to access the database before performing the single-user and drop commands, which eliminated the issue for us.
If your system uses a common service account like this, you may be able to do something similar.
The Set a Database to Single-user Mode documentation says:
Prerequisites
Before you set the database to SINGLE_USER, verify that the AUTO_UPDATE_STATISTICS_ASYNC option is set to OFF. When this option is set to ON, the background thread that is used to update statistics takes a connection against the database, and you will be unable to access the database in single-user mode.
Otherwise, the connection that sets the database to SINGLE_USER is supposed to immediately become the current and single user of that database, until you close that connection. (Note that the connection can remain "in use" if your system uses connection pools that hold connections open, but that should be unrelated to your problem.)
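A quick sketch of checking and turning off that option before switching to single-user mode (the database name MyDb is a hypothetical placeholder):

-- Check whether async statistics updates are enabled for the database
SELECT name, is_auto_update_stats_async_on
FROM sys.databases
WHERE name = N'MyDb';

-- Turn the option off before going to SINGLE_USER
ALTER DATABASE [MyDb] SET AUTO_UPDATE_STATISTICS_ASYNC OFF;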

Sybase: Kick all connections before load

We have an automated script to restore a Sybase database and then run our automated tests against it. Quite often we have a web server or an interactive query tool connected to this database. These connections prevent the Sybase load, which fails with "...must have exclusive use of database to run load".
How do I kick/kill/terminate all connections?
I'd like something similar to SQL Server's ALTER DATABASE ... SET SINGLE_USER WITH ROLLBACK IMMEDIATE. This is a local Sybase instance, so we have full admin rights.
Without knowing exactly what condition the script checks for, there are two things you need to do to guarantee exclusive use of a database: (i) kill all existing users with the "kill" command, and (ii) run "master..sp_dboption 'your-db-name', 'single user', true" to put the database in single-user mode.
This is not difficult to put in a stored procedure: kill all connections using your database as their current database or holding a lock in that database, and then try to set the database to single-user mode. Then check whether setting single-user mode succeeded; you should allow for a retry, since it is possible that a new user connected again just as you were setting it to single-user mode.
None of this is hard to implement, but you will need some understanding of the ASE system tables. Primarily, though, I think you need to figure out exactly what your load script assumes to be the case and what it checks for.
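A rough sketch of that idea in Sybase ASE, assuming a database named your_db; master..sysprocesses, sp_dboption, and checkpoint are real, but the overall procedure is only an outline of the approach described above:

use master
go
-- Generate a KILL statement for every connection whose current database is
-- your_db (run the generated statements, then proceed).
select 'kill ' + convert(varchar(10), spid)
from master..sysprocesses
where dbid = db_id('your_db')
  and spid <> @@spid
go
-- Put the database in single-user mode; in ASE the option takes effect
-- after a CHECKPOINT in the target database.
exec sp_dboption 'your_db', 'single user', true
go
use your_db
go
checkpoint
go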
There may be other solutions as well: if you can just lock the tables affected by your load script, for example, that may also be a solution (and a simpler one). But this may or may not be possible, depending on what the load script does exactly and what it expects. So that would be question #1 to answer.
HTH,
Rob V.

Is it possible to have secondary server available read-only in a log shipping scenario?

I am looking into using log shipping in a SQL Server 2005 environment. The idea was to set up frequent log shipping to a secondary server. The intent: Use the secondary server to serve report queries, thereby offloading the primary db server.
I came across this on a sqlservercentral forum thread:
When you create the log shipping you have 2 choices. You can configure the restore log operation to be done with NORECOVERY or with the STANDBY option. If you use the NORECOVERY option, you cannot issue SELECT statements against the database. If instead of NORECOVERY you use the STANDBY option, you can run SELECT queries on the database.
Bear in mind that with the STANDBY option, when log file restores occur, users will be kicked out without warning by the restore process. Actually, when you configure log shipping with the STANDBY option, you can also select between 2 choices: kill all processes in the secondary database and perform the log restore, or don't perform the log restore if the database is being used. Of course, if you select the second option, the restore operation might never run if someone opens a connection to the database and doesn't close it, so it is better to use the first option.
So my questions are:
Is the above true? Can you really not use log shipping in the way I intend?
If it is true, could someone explain why you cannot execute SELECT statements to a database while the transaction log is being restored?
EDIT:
The first question is a duplicate of this serverfault question. But I would still like the second question answered: why is it not possible to execute SELECT statements while the transaction log is being restored?
could someone explain why you cannot execute SELECT statements to a database while the transaction log is being restored?
The short answer is that the RESTORE statement takes an exclusive lock on the database being restored.
For writes, I hope there is no need for me to explain why they are incompatible with a restore. Why does it not allow reads either? First of all, there is no way to know whether a session that has a lock on a database is going to do a read or a write. But even if that were possible, a restore (log or full backup) is an operation that updates the data pages in the database directly. Since these updates go straight to the physical location (the page) and do not follow the logical hierarchy (metadata-partition-page-row), they would not honor possible intent locks taken by other data readers, and thus could change structures as they are being read. A SELECT table scan following the page next/prev pointers would be thrown into disarray, resulting in a corrupted read.
Well yes and no.
You can do exactly what you wish to do, in that you may offload reporting workloads to a secondary server by configuring log shipping to a read-only copy of the database. I have set this type of architecture up on a number of occasions and it works very well indeed.
The caveat is that in order to restore a transaction log backup file there must be no other connections to the database in question. Hence the two choices: when the restore process runs, it will either fail, thereby prioritising user connections, or it will succeed by disconnecting all user connections in order to perform the restore.
Depending on your restore frequency, this is not necessarily a problem. You simply educate your users to the fact that, say, every hour at 10 past the hour, there is a possibility that a report may fail. If that happens, they simply re-run the report.
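For illustration, one restore cycle in standby mode might look like the sketch below; the database name, backup share, and undo file path are hypothetical placeholders. The undo file is what keeps the database readable between restores:

-- Kick out current readers, restore the next log backup in standby mode,
-- then let readers back in.
ALTER DATABASE [ReportDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
RESTORE LOG [ReportDb]
    FROM DISK = N'\\backupshare\logs\ReportDb.trn'
    WITH STANDBY = N'D:\Standby\ReportDb_undo.dat';
ALTER DATABASE [ReportDb] SET MULTI_USER;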
EDIT: You may also want to evaluate alternative architecture solutions to your business need, for example transactional replication, or database mirroring with a database snapshot.
If you have the Enterprise edition, you can use database mirroring plus a database snapshot to create a read-only copy of the database, available for reporting, etc. Mirroring uses "continuous" log shipping under the hood. It is frequently used in the scenario you have described.
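A sketch of creating such a snapshot on the mirror; the snapshot name, logical file name, and path are hypothetical placeholders:

-- Run on the mirror server; the snapshot is a static, read-only,
-- point-in-time view of the mirrored database, suitable for reporting.
CREATE DATABASE ReportSnapshot
ON ( NAME = MyDb_Data, FILENAME = N'D:\Snapshots\MyDb_snap.ss' )
AS SNAPSHOT OF MyDb;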
Yes it's true.
I think the following happens:
While the transaction log is being restored, the database is locked, as large portions of it are being updated.
This is for performance reasons more than anything else.
I can see two options:
Use database mirroring.
Schedule the log shipping to only occur when the reporting system is not in use.
There is some slight confusion here: the NORECOVERY flag on the restore means your database is not going to be brought out of a recovery state and into an online state; that is why the SELECT statements will not work, since the database is effectively offline. The NORECOVERY flag is there to allow you to restore multiple log files in a row (in a DR-type scenario) without bringing the database back online.
If you do not want to log ship, or want to avoid its disadvantages, you could switch to one-way transactional replication, but the overhead and set-up will be more complex overall.
Would peer-to-peer replication work? Then you could run queries on one instance and so save the load on the original instance.

Oracle - is it possible to back up a database with sqlplus?

I need to make some structural changes to a database (alter tables, add new columns, change some rows, etc.), but I need to make sure that if something goes wrong I can roll back to the initial state:
All the needed changes are inside a SQL script file.
I don't have administrative access to the database.
I really need to ensure the backup is done on the server side, since the DB has more than 30 GB of data.
I need to use sqlplus (under a dedicated ssh session over a VPN).
It's not possible to use "flashback database"! It's off, and I can't stop the database.
Am I really in deep $#$%?
Any ideas how to back up the database using sqlplus, leaving the backup on the DB server?
Better than exp/imp, you should use RMAN. It's built specifically for this purpose; it can do hot backup/restore, and if you completely screw up, you're still OK.
One 'gotcha' is that you have to back up the $ORACLE_HOME directory too (in my experience), because you need that locally stored information to recover the control files.
A search for RMAN on Google gives some VERY good information on the first page.
An alternate approach might be to create a new schema that contains your modified structures and data, and actually test with that. That presumes you have enough space on your DB server to hold all the test data. You really should have a pretty good idea that your changes are going to work before dumping them on a production environment.
I wouldn't use sqlplus to do this. Take a look at export/import. The export utility will grab the definitions and data for your database (and can be run in read-consistent mode). The import utility will read this file and recreate the database structures from it. However, access to these utilities does require permissions to be granted, particularly if you need to back up the whole database and not just a schema.
That said, it's somewhat troubling that you're expected to perform the tasks of a DBA (alter tables, backup database, etc) without administrative rights. I think I would be at least asking for the help of a DBA to oversee your approach before you start, if not insisting that the DBA (or someone with appropriate privileges and knowledge) actually perform the modifications to the database and help recover if necessary.
Trying to back up 30 GB of data through sqlplus is insane. It will take several hours, require 3x to 5x as much disk space, and may not be restorable without more testing.
You need to use exp and imp. These are command-line tools designed to back up and restore a database. Since you have access to sqlplus via your ssh session, you also have access to imp/exp. You don't need administrator access to use them. They will dump the database (with all tables, triggers, views, and procedures) for the user(s) you have access to.
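For example, an export run on the server over the ssh session might look like this; the credentials, connect string, schema, and file paths are hypothetical placeholders:

# Dump the scott schema to a file on the server; CONSISTENT=y makes the
# export read-consistent as of the time it started.
exp scott/tiger@ORCL OWNER=scott FILE=/u01/backup/scott.dmp LOG=/u01/backup/scott_exp.log CONSISTENT=y

Since exp writes the dump file wherever it runs, running it inside the ssh session leaves the backup on the DB server, as required.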

Is there a way to make transactions or connections read only in SQL Server?

I need a quick "no" for DELETE/UPDATE/INSERT, since a third-party (3P) reporting tool allows users to write their own SQL.
I know that I should probably add a new user, set permissions on tables/sprocs/views/etc., and then create a new connection as the restricted user.
Is there a quicker way to force a transaction or connection in SQL Server into read-only mode?
I don't know. If the 3P tool is that crazy, I would be completely paranoid about what I exposed to it. I think that setting up a new user is the best thing. Maybe even just give them certain views and/or stored procs and call it a day.
Why are you worried about your users' ability to put arbitrary SQL in their reporting queries? If they have the rights to change data in your database, surely they can just connect to it with any ODBC client and execute the SQL directly.
I'm not sure it's 3P that's the issue here; it sounds more like you need to restrict your users but haven't.
If you have a class of users who shouldn't be allowed to change your data, then set their accounts up that way. Relying on the fact that they'll only use a reporting tool that doesn't let them change data is a security hole I could drive a truck through.
If they are allowed to change the data, restricting sessions from 3P won't help secure your system.
Unless I've misunderstood your set-up. I've been wrong before, just ask my wife. In which case, feel free to educate me.
Does it have to be with named users? I have a "report" user and a "browser" user that just have SELECT rights on most tables. Anyone who needs data uses those accounts, and since they are SELECT-only I don't have to worry about them.
See Kern's link.
Change the permissions for the user (the one used in the connection string) on the SQL Server.
If you have control over when the connection is created and closed, then you could perform a BEGIN TRAN at the start and do a ROLLBACK at the end. That way, anything this reporting tool does will be rolled back at the end. However, if the tool has the ability to manage its own transactions or open new connections, or if the user base is unknown and potentially malicious, then it is not foolproof. In addition, any large transaction may result in your database being locked by your users' actions.
I have to say, though, the real answer is that security is allocated to users. The "quicker" way you're after is a new user with read-only permissions only.
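A minimal sketch of that read-only user; the login name, password, and database name are hypothetical placeholders:

-- Create a login and map it to a database user with read-only access.
CREATE LOGIN report_reader WITH PASSWORD = 'Str0ng!Placeholder';
USE MyDb;
CREATE USER report_reader FOR LOGIN report_reader;
-- db_datareader grants SELECT on all tables and views; no writes are possible.
EXEC sp_addrolemember 'db_datareader', 'report_reader';

Point the reporting tool's connection string at report_reader and any DELETE/UPDATE/INSERT a user writes will fail with a permissions error.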
