Sybase: Kick all connections before load

We have an automated script to restore a Sybase database and then run our automated tests against it. Quite often we have a web server or interactive query tool connected to this database. These connections block the Sybase load, which fails with "...must have exclusive use of database to run load".
How do I kick/kill/terminate all connections?
I'd like something similar to SQL Server's ALTER DATABASE ... SET SINGLE_USER WITH ROLLBACK IMMEDIATE. This is a local Sybase instance, so we have full admin rights.

Without knowing exactly what condition the script checks for, there are two things you need to do to guarantee exclusive use of a database: (i) run "master..sp_dboption 'your-db-name', 'single user', true" to put it in single-user mode, and (ii) kill all existing users first with the "kill" command.
This is not difficult to put in a stored procedure: kill all connections that have your database as their current database or hold a lock in it, then try to set the database to single-user mode. Then check whether single-user mode actually took effect; allow for a retry, since a new user may have connected just as you were setting the option.
None of this is hard to implement, but it does require some understanding of the ASE system tables. Primarily, though, I think you need to figure out exactly what your load script assumes to be the case and what it checks for.
There may be other solutions as well: if you can just lock the tables affected by your load script, for example, that may also be a solution (and a simpler one). But this may or may not be possible, depending on what the load script exactly does and what it expects. So that would be question #1 to answer. A rough sketch of the kill-and-set approach follows below.
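As a minimal illustration of that approach on ASE, assuming sa_role and a database named your_db (the name, the missing retry loop, and the error handling are all assumptions, not from the original script):

    -- Sketch only: kill every session whose current database is your_db,
    -- then try to take the database single-user.
    declare @spid int, @cmd varchar(40)

    declare c cursor for
        select spid from master..sysprocesses
        where dbid = db_id('your_db')
          and spid != @@spid        -- never kill our own session

    open c
    fetch c into @spid
    while @@sqlstatus = 0           -- 0 = fetch succeeded
    begin
        select @cmd = 'kill ' + convert(varchar, @spid)
        execute(@cmd)               -- KILL wants a literal, so use dynamic SQL
        fetch c into @spid
    end
    close c
    deallocate cursor c

    -- attempt single-user mode (sp_dboption must be run from master,
    -- and a CHECKPOINT in your_db is needed afterwards)
    exec master..sp_dboption 'your_db', 'single user', true

In a real procedure you would then verify that the option actually took effect and loop back to the kill step if another login slipped in.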
HTH,
Rob V.

Related

SQL - single user database vs offline database

I am trying to restore a database, but it gives me an error about exclusive rights.
I can either set my database to single-user mode or take it offline to restore it.
So, my question is: what is the difference between these two?
And which is the better way to get around this?
Database Single-user mode:
Single-user mode specifies that only one user at a time can access the database; it is generally used for maintenance actions.
Limitations and Restrictions:
If other users are connected to the database at the time that you set the database to single-user mode, their connections to the database will be closed without warning.
The database remains in single-user mode even if the user that set the option logs off. At that point, a different user, but only one, can connect to the database.
Database Offline Mode:
The database is unavailable. A database goes offline by explicit user action and remains offline until additional user action is taken.
Offline means that nobody can access the database. Single-user means that only one person can, presumably you. I don't think it matters which way you go, to be honest.
A better way to get around these two options? It would take a bit of extra work to allow only reads while restoring, and it gets really complicated if you want the database to allow writes while you do a restore operation.
If there are no open connections to the database when you do the restore, you can leave it online, but that is probably not good practice, depending on your specific situation.
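For reference, the two options look like this side by side in T-SQL (the database name is a placeholder):

    -- single-user: exactly one connection allowed; existing work is rolled back
    ALTER DATABASE [MyDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    -- offline: no connections at all
    ALTER DATABASE [MyDb] SET OFFLINE WITH ROLLBACK IMMEDIATE;

    -- and back to normal afterwards
    ALTER DATABASE [MyDb] SET MULTI_USER;
    ALTER DATABASE [MyDb] SET ONLINE;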

SQL Server database goes into single-user mode while dropping it

Our application uses multiple databases, and these databases can be created by users through the UI. Basically these databases are created after loading data from data files (an ETL process).
We delete these databases when they are not required.
We use the following statement to delete it:
ALTER DATABASE [{0}] SET SINGLE_USER WITH ROLLBACK IMMEDIATE; DROP DATABASE [{0}]
Recently we started facing an issue where the database goes into single-user mode but is not deleted, and the application stops working because only one connection can be active at a time in this mode. The issue occurs very rarely and is not consistent at all.
We don't have any clue about what is happening here.
If anybody has faced such an issue, or knows what the possible cause might be, please let me know.
We are using SQL Server 2008 R2.
Regards,
Varun
You're probably running into a race condition. Between the time the database is set to single-user and the time the DROP DATABASE command is issued, if any other connection succeeds, you will be unable to drop the database. In a high-volume situation this can be difficult to resolve.
The best bet is to set the database OFFLINE instead of putting it into SINGLE_USER. Of course, in that case, you'll have to delete the database files manually.
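A minimal sketch of that approach (the database name is a placeholder, not from the poster's system):

    -- take the database offline, killing in-flight work; nobody can sneak a
    -- connection in between, since OFFLINE allows none at all
    ALTER DATABASE [MyDb] SET OFFLINE WITH ROLLBACK IMMEDIATE;
    DROP DATABASE [MyDb];
    -- dropping an offline database removes it from the catalog but leaves
    -- the .mdf/.ldf files on disk, so delete those by hand afterwards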
I work with a particularly large environment where there are at any given time 50+ machines which are hammering the database for connections. Even with delays set between connections the number of attempts being made is extremely large.
In our case, we handled this race condition by disabling the service account which was attempting to access the database before performing the single-user and drop commands which eliminated the issue for us.
If your system features a similar common service account, you may be able to implement something similar.
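A hedged sketch of that pattern, assuming the service account logs in as a SQL login named svc_etl (both names are illustrative):

    -- block the chatty account first so it cannot win the race
    ALTER LOGIN [svc_etl] DISABLE;

    ALTER DATABASE [MyDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE [MyDb];

    -- re-enable the account once the drop has succeeded
    ALTER LOGIN [svc_etl] ENABLE;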
The Set a Database to Single-user Mode documentation says:
Prerequisites
Before you set the database to SINGLE_USER, verify that the
AUTO_UPDATE_STATISTICS_ASYNC option is set to OFF. When this option is
set to ON, the background thread that is used to update statistics
takes a connection against the database, and you will be unable to
access the database in single-user mode.
Otherwise, the connection that sets the database to SINGLE_USER is supposed to immediately become the current and only user of that database, until you close that connection (note that it can remain "in use" if your system uses connection pools that hold connections open, but that should be unrelated to your problem).
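You can verify and, if needed, turn that option off before going single-user; a small illustration (the database name is a placeholder):

    -- is async stats update on for this database?
    SELECT name, is_auto_update_stats_async_on
    FROM sys.databases
    WHERE name = N'MyDb';

    -- turn it off before attempting SINGLE_USER
    ALTER DATABASE [MyDb] SET AUTO_UPDATE_STATISTICS_ASYNC OFF;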

Audit on Oracle schema for DML statements

I need to secure an Oracle user against inserts/updates/deletes coming from outside the programs written by me.
I googled around a bit to find what I need. I know you can use hand-written database triggers.
And I know there are two major systems from Oracle (at least, that is what I found):
you can use fine-grained auditing, and you can use the audit trail.
I think in my case the audit trail comes close but just isn't what I am looking for, because I need to know from which program the connection to the DB is coming. For example, I need to register all connections doing inserts/updates/deletes, together with the statements they executed, that come from SQL Developer or Toad. All other connections may pass without audit.
On a daily basis I have lots of connections, so registering everything is too much overhead.
I hope one of you has a good idea on how to set this up.
Regards
You can use an Oracle product: Oracle Audit Vault and Database Firewall. Because you also want to know from which program the connection comes, you need the Database Firewall. It can monitor all traffic to the database, recording the IP address and the client program from which the connection was started. You can also specify whether you want to audit DML, DDL, or other statements. Data is stored locally in the product's database, not in the secured target (your database). Have a look; it is just what you need: http://www.oracle.com/technetwork/products/audit-vault-and-database-firewall/overview/overview-1877404.html
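Since the question also mentions hand-written triggers: if the product route is too heavy, a plain DML trigger that filters on the client program is a low-tech alternative. A minimal sketch, where the table, trigger, and program names are all illustrative assumptions:

    -- log table for the audit records
    CREATE TABLE dml_audit_log (
        logged_at  TIMESTAMP    DEFAULT SYSTIMESTAMP,
        db_user    VARCHAR2(30),
        os_user    VARCHAR2(30),
        program    VARCHAR2(64),
        action     VARCHAR2(6)
    );

    -- statement-level trigger on one table; repeat per table to be audited
    CREATE OR REPLACE TRIGGER trg_audit_orders
    AFTER INSERT OR UPDATE OR DELETE ON orders
    BEGIN
        -- only record ad-hoc tools; everything else passes without audit
        IF SYS_CONTEXT('USERENV', 'MODULE') IN ('SQL Developer', 'TOAD.EXE') THEN
            INSERT INTO dml_audit_log (db_user, os_user, program, action)
            VALUES (SYS_CONTEXT('USERENV', 'SESSION_USER'),
                    SYS_CONTEXT('USERENV', 'OS_USER'),
                    SYS_CONTEXT('USERENV', 'MODULE'),
                    CASE WHEN INSERTING THEN 'INSERT'
                         WHEN UPDATING  THEN 'UPDATE'
                         ELSE 'DELETE' END);
        END IF;
    END;
    /

Note that MODULE is set by the client and can be spoofed, so this is a convenience filter, not a security boundary; capturing the executed statements themselves would still need the audit trail or fine-grained auditing.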

Is it possible to have secondary server available read-only in a log shipping scenario?

I am looking into using log shipping in a SQL Server 2005 environment. The idea was to set up frequent log shipping to a secondary server. The intent: Use the secondary server to serve report queries, thereby offloading the primary db server.
I came across this on a sqlservercentral forum thread:
When you create the log shipping you have two choices: you can configure the restore log operation to be done with the norecovery or with the standby option. If you use the norecovery option, you cannot issue select statements against the database. If instead of norecovery you use the standby option, you can run select queries on the database.
Bear in mind that with the standby option, users will be kicked out without warning by the restore process when log file restores occur. Actually, when you configure log shipping with the standby option, you can also select between two choices: kill all processes in the secondary database and perform the log restore, or don't perform the log restore if the database is in use. Of course, if you select the second option, the restore operation might never run if someone opens a connection to the database and doesn't close it, so it is better to use the first option.
So my questions are:
Is the above true? Can you really not use log shipping in the way I intend?
If it is true, could someone explain why you cannot execute SELECT statements to a database while the transaction log is being restored?
EDIT:
First question is duplicate of this serverfault question. But I still would like the second question answered: Why is it not possible to execute SELECT statements while the transaction log is being restored?
could someone explain why you cannot execute SELECT statements to a database while the transaction log is being restored?
The short answer is that the RESTORE statement takes an exclusive lock on the database being restored.
For writes, I hope there is no need to explain why they are incompatible with a restore. Why does it not allow reads either? First of all, there is no way to know whether a session that holds a lock on a database is going to do a read or a write. But even if that were possible, a restore (log or backup) is an operation that updates the data pages of the database directly. Since these updates go straight to the physical location (the page) and do not follow the logical hierarchy (metadata-partition-page-row), they would not honor possible intent locks held by other readers, and could therefore change structures as they are being read. A SELECT table scan following the page next/prev pointers would be thrown into disarray, resulting in a corrupted read.
Well yes and no.
You can do exactly what you wish to do, in that you can offload reporting workloads to a secondary server by configuring log shipping to a read-only copy of a database. I have set up this type of architecture on a number of occasions previously, and it works very well indeed.
The caveat is that in order to restore a transaction log backup file, there must be no other connections to the database in question. Hence the two choices: when the restore process runs, it will either fail, thereby prioritising user connections, or it will succeed by disconnecting all user connections in order to perform the restore.
Depending on your restore frequency, this is not necessarily a problem. You simply educate your users to the fact that, say, every hour at ten past the hour, there is a possibility that their report may fail. If that happens, they simply re-run the report. A rough sketch of the restore step is shown below.
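For illustration, the "kick users and restore" step on the secondary looks roughly like this (the database name, backup path, and undo file location are placeholders):

    -- disconnect readers, apply the next log, leave the database readable again
    ALTER DATABASE [ReportDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

    RESTORE LOG [ReportDb]
        FROM DISK = N'\\backupshare\logs\ReportDb_1010.trn'
        WITH STANDBY = N'D:\MSSQL\Backup\ReportDb_undo.dat';

    ALTER DATABASE [ReportDb] SET MULTI_USER;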
EDIT: You may also want to evaluate alternative architecture solutions to your business need, for example transactional replication, or database mirroring with a database snapshot.
If you have the Enterprise edition, you can use database mirroring plus a database snapshot to create a read-only copy of the database, available for reporting, etc. Mirroring uses "continuous" log shipping under the hood. It is frequently used in the scenario you have described.
Yes, it's true.
I think the following happens:
While the transaction log is being restored, the database is locked, as large portions of it are being updated.
This is for performance reasons more than anything else.
I can see two options:
Use database mirroring.
Schedule the log shipping to only occur when the reporting system is not in use.
There is slight confusion here: the norecovery flag on the restore means your database is not going to be brought out of the recovery state and into an online state. That is why the SELECT statements will not work: the database is effectively offline. The norecovery flag is there to allow you to restore multiple log files in a row (in a DR-type scenario) without bringing the database back online. The contrast with standby is shown below.
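The two flags side by side, for clarity (file paths are placeholders):

    -- NORECOVERY: database stays in the RESTORING state; no SELECTs possible
    RESTORE LOG [ReportDb]
        FROM DISK = N'D:\logs\ReportDb_1010.trn'
        WITH NORECOVERY;

    -- STANDBY: database is read-only between restores; SELECTs work
    RESTORE LOG [ReportDb]
        FROM DISK = N'D:\logs\ReportDb_1010.trn'
        WITH STANDBY = N'D:\logs\ReportDb_undo.dat';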
If you do not want to log ship / accept its disadvantages, you could swap to one-way transactional replication, but the overhead and set-up will be more complex overall.
Would peer-to-peer replication work? Then you could run queries on one instance and so save the load on the original instance.

Is it possible to back up an Oracle database with sqlplus?

I need to make some structural changes to a database (alter tables, add new columns, change some rows, etc.), but I need to make sure that if something goes wrong I can roll back to the initial state:
All needed changes are inside a SQL script file.
I don't have administrative access to the database.
I really need to ensure the backup is done on the server side, since the DB holds more than 30 GB of data.
I need to use sqlplus (in a dedicated ssh session over a VPN).
It's not possible to use "flashback database"! It's off, and I can't stop the database.
Am I in really deep $#$%?
Any ideas how to back up the database using sqlplus, leaving the backup on the DB server?
Better than exp/imp, you should use RMAN. It's built specifically for this purpose; it can do hot backup/restore, and if you completely screw up, you're still OK.
One 'gotcha' is that you have to back up the $ORACLE_HOME directory too (in my experience), because you need that locally stored information to recover the control files.
A search for RMAN on Google gives some VERY good information on the first page.
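For a flavour of what the DBA would run, a minimal RMAN hot backup could look like this (it assumes SYSDBA-level access, which the poster does not have, and a database running in ARCHIVELOG mode):

    -- started from the server shell with: rman target /
    RUN {
        BACKUP DATABASE PLUS ARCHIVELOG;   -- datafiles plus archived redo logs
        BACKUP CURRENT CONTROLFILE;        -- so the control file can be recovered
    }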
An alternate approach might be to create a new schema that contains your modified structures and data and actually test with that. That presumes you have enough space on your DB server to hold all the test data. You really should have a pretty good idea your changes are going to work before dumping them on a production environment.
I wouldn't use sqlplus to do this. Take a look at export/import. The export utility grabs the definitions and data for your database (and can be run in read-consistent mode); the import utility reads this file and recreates the database structures from it. However, access to these utilities does require permissions to be granted, particularly if you need to back up the whole database and not just one schema.
That said, it's somewhat troubling that you're expected to perform the tasks of a DBA (alter tables, backup database, etc) without administrative rights. I think I would be at least asking for the help of a DBA to oversee your approach before you start, if not insisting that the DBA (or someone with appropriate privileges and knowledge) actually perform the modifications to the database and help recover if necessary.
Trying to back up 30 GB of data through sqlplus is insane: it will take several hours, require 3x to 5x as much disk space, and may not be restorable without more testing.
You need to use exp and imp. These are command-line tools designed to back up and restore a database, and if you have access to sqlplus via your ssh session, you have access to exp/imp as well. You don't need administrator access to use them; they will dump the database (with all tables, triggers, views, and procedures) for the user(s) you have access to. An example is sketched below.
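A hedged example of what that can look like from the ssh session, writing the dump on the server (the username, password, and paths are placeholders):

    -- export one schema, read-consistent; the dump file stays on the server
    exp scott/tiger owner=scott consistent=y \
        file=/u01/backup/scott_before.dmp log=/u01/backup/scott_exp.log

    -- and the rollback path if the changes go wrong
    imp scott/tiger fromuser=scott touser=scott \
        file=/u01/backup/scott_before.dmp log=/u01/backup/scott_imp.log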
