I have a database in a data warehouse environment that is loaded via an ETL process.
During the ETL process I want to make the database unavailable for querying by certain roles.
What would be a possible solution?
I think the easiest answer would be to REVOKE permissions for the roles in the ETL process and reverse it at the end (or on failure).
One option would be to create a stored procedure that modifies the permissions of the roles and then drops the users' connections; following the data load, you reset the permissions.
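A minimal sketch of that approach in T-SQL, assuming a placeholder database ReportsDW and a placeholder role reporting_users:

-- Sketch: block the reporting role for the duration of the load, then restore access.
USE ReportsDW;

-- Before the load: DENY overrides any existing GRANT for members of the role.
DENY SELECT ON SCHEMA::dbo TO reporting_users;

-- (Optionally drop the role members' open connections here.)

-- After the load, or in the error handler on failure: remove the DENY and re-grant.
REVOKE SELECT ON SCHEMA::dbo FROM reporting_users;
GRANT SELECT ON SCHEMA::dbo TO reporting_users;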
An alternative to this is to run your ETL process when no one is using the system...
An application vendor representative asked me to grant db_owner access to tempdb for their application login, so that it can create objects in tempdb without "#" or "##" prefixes.
I tried to talk him out of asking for direct tempdb access by arguing that granting it may interfere with the SQL Server engine's internal operations and prevent the tempdb cleanup processes from doing their jobs correctly. There is also another drawback: on SQL Server service restarts, tempdb is recreated, so any permission settings on tempdb revert to defaults.
Is there anything that I might miss in this regard?
I am working on a database auditing solution and was thinking of having SQL Server triggers capture changes and insert them into an auditing table. Since this is an Azure SQL Database and it will be fairly large, I am concerned about the cost of a database that keeps growing because of auditing.
In order to cut down on the costs needed for auditing purposes, I am considering storing the audit table (or tables) in Azure Tables instead of an Azure SQL database. So the question becomes: how do I get the SQL Server trigger to move the changed data into Azure Tables?
The only thing I can come up with is to have an audit table (or tables) in SQL Database so the trigger can insert the rows locally, and then have a Worker Role every X seconds pull any rows from that table, move them to Azure Tables, and delete them from the SQL Database table so it doesn't grow large.
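For reference, the SQL side of that "pull, copy, then delete" loop could look something like this (a sketch; the table and column names are placeholders):

-- Sketch: the worker reads a batch of audit rows, copies them to Azure Table
-- storage, and only deletes the rows it has successfully copied.
SELECT TOP (500) AuditId, TableName, ChangedAt, Payload
FROM dbo.AuditStaging
ORDER BY AuditId;

-- ...the worker writes the returned rows to Azure Table storage...

DECLARE @lastCopiedAuditId BIGINT = 500;  -- highest AuditId the worker copied successfully
DELETE FROM dbo.AuditStaging
WHERE AuditId <= @lastCopiedAuditId;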
Is there a better way to do this integration? Can I somehow put a message in a queue from a trigger?
Azure SQL Database (formerly SQL Azure) doesn't support CLR (hence no EXTERNAL NAME clause for triggers), so there's no way for your triggers to do anything outside of T-SQL. If you want audit content to go to a table, you could take the approach you came up with (temporarily write to a SQL table, then periodically move the content to an Azure Table). There are other approaches you could take (and those would be opinion/subjective, which is frowned upon here), but since you asked about queues, here is what you could do with Azure Queues:
You could use an Azure queue to specify an item to insert/update in your SQL database. The queue-processing code would then be responsible for performing the update and writing to the Azure table. Since queue messages must be explicitly deleted after processing, you could simply reprocess a message if something failed during execution (e.g. you wrote to SQL but failed before writing to table storage); the message eventually becomes visible for reading again if you don't delete it before its timeout expires. As long as your operations are idempotent, you'd be OK with this pattern.
A cheaper solution than using Worker Roles would be a combination of Azure Scheduled Tasks (you can enable them for free to run every 15 minutes within Mobile Apps) and Azure Web Sites. Basically, you would run the scheduled job every 15 minutes, and it would make an HTTP call to some code you have running within your Azure Web Site. That code would do the same work you had outlined for your Worker Role.
Alternatively, use SQL Server system-versioned temporal tables to automatically handle writing audited records (i.e., changes) to corresponding history tables.
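A minimal sketch of such a table (the table and column names are placeholders):

-- Sketch: a system-versioned temporal table. Every UPDATE/DELETE automatically
-- writes the prior row version to the history table, with no triggers needed.
CREATE TABLE dbo.Customer
(
    CustomerId INT           NOT NULL PRIMARY KEY,
    Name       NVARCHAR(100) NOT NULL,
    ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CustomerHistory));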
Our application uses multiple databases, and these databases can be created by the user through the UI. Basically, these databases are created after loading data from data files (an ETL process).
We delete these databases when they are not required.
We use the following statement to delete it:
ALTER DATABASE [{0}] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [{0}];
Recently we started facing an issue where the database goes into single-user mode but is not deleted, and the application stops working because only one connection can be active at a time in this mode. This issue occurs very rarely and is not consistent at all.
We don't have any clue about what is happening here.
If anybody has faced such an issue, or knows what the possible cause might be, please let me know.
We are using SQL Server 2008 R2.
Regards,
Varun
You're probably running into a race condition. Between the time the database is set to SINGLE_USER and the time the DROP DATABASE command is issued, if any other connection succeeds, you will be unable to drop the database. In a high-volume situation this can be difficult to resolve.
The best bet is to set the database offline instead of putting it into SINGLE_USER. Of course, in that case, you'll have to manually delete the database files.
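A minimal sketch of that approach (the database name is a placeholder):

-- Sketch: take the database offline, killing other connections, then drop it.
-- Dropping an offline database removes it from the instance but leaves the
-- data and log files on disk, so they have to be deleted manually afterwards.
ALTER DATABASE [YourDb] SET OFFLINE WITH ROLLBACK IMMEDIATE;
DROP DATABASE [YourDb];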
I work in a particularly large environment where, at any given time, 50+ machines are hammering the database for connections. Even with delays set between connection attempts, the number of attempts being made is extremely large.
In our case, we handled this race condition by disabling the service account that was attempting to access the database before performing the single-user and drop commands, which eliminated the issue for us.
If your system features a similar common service account, you may be able to implement something similar.
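A minimal sketch, assuming the service account is a SQL Server login (the login and database names are placeholders):

-- Sketch: disable the application's service-account login before the drop,
-- so it cannot grab the single remaining connection.
ALTER LOGIN [AppServiceAccount] DISABLE;

ALTER DATABASE [YourDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [YourDb];

-- Re-enable the login afterwards if it is still needed for other databases.
ALTER LOGIN [AppServiceAccount] ENABLE;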
The Set a Database to Single-user Mode documentation says:
Prerequisites
Before you set the database to SINGLE_USER, verify that the AUTO_UPDATE_STATISTICS_ASYNC option is set to OFF. When this option is set to ON, the background thread that is used to update statistics takes a connection against the database, and you will be unable to access the database in single-user mode.
Otherwise, the connection that sets the database to SINGLE_USER is supposed to immediately become the current and single user of that database, until you close that connection (notice that it can remain "in use", if your system is making use of connection pools that hold the connection open, but that should be unrelated to your problem).
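A minimal sketch of that check (the database name is a placeholder):

-- Sketch: verify and, if necessary, turn off asynchronous statistics updates
-- before switching the database to SINGLE_USER.
SELECT name, is_auto_update_stats_async_on
FROM sys.databases
WHERE name = N'YourDb';

ALTER DATABASE [YourDb] SET AUTO_UPDATE_STATISTICS_ASYNC OFF;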
We have an automated script to restore a Sybase database and then run our automated tests against it. Quite often we have a web server or an interactive query tool connected to this database. These connections cause the Sybase load to fail with "...must have exclusive use of database to run load".
How do I kick/kill/terminate all connections?
I'd like something similar to SQL Server's ALTER DATABASE ... SET SINGLE_USER WITH ROLLBACK IMMEDIATE. This is a local Sybase instance, so we have full admin rights.
Without knowing exactly what condition the script checks for, there are two things you need to do to guarantee exclusive use of a database: (i) run "master..sp_dboption 'your-db-name', 'single user', true" to put it in single-user mode, and (ii) kill all existing users first with the "kill" command.
This is not difficult to put in a stored procedure: kill all connections that use your database as their current database or hold a lock in that database, and then try to set it to single-user mode. Then check whether setting single-user mode succeeded; allow for a repeated try, since it is possible that a new user connected again just as you were setting it to single-user mode.
This is all not difficult to implement, but you will need some understanding of the ASE system tables. Primarily, though, I think you need to figure out exactly what your load script assumes to be the case and what it checks for.
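A rough sketch of such a procedure body, assuming your ASE version supports dynamic SQL via exec() and allows kill inside it (the database name is a placeholder):

-- Sketch: kill every connection whose current database is the target,
-- then put the database in single-user mode. Run this from master.
declare @spid int, @cmd varchar(30)

declare spids cursor for
    select spid from master..sysprocesses
    where dbid = db_id('your-db-name') and spid <> @@spid

open spids
fetch spids into @spid
while @@sqlstatus = 0
begin
    select @cmd = 'kill ' + convert(varchar, @spid)
    exec (@cmd)
    fetch spids into @spid
end
close spids
deallocate cursor spids

exec master..sp_dboption 'your-db-name', 'single user', true
-- On older ASE versions you may also need to run CHECKPOINT inside the target
-- database for the option change to take effect.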
There may be other solutions as well: if you can just lock the tables affected by your load script, for example, that may also be a solution (and a simpler one). But this may or may not be possible, depending on what the load script exactly does and what it expects. So that would be question #1 to answer.
HTH,
Rob V.
I need to make some structural changes to a database (alter tables, add new columns, change some rows, etc.), but I need to make sure that if something goes wrong I can roll back to the initial state:
All needed changes are inside a SQL script file.
I don't have administrative access to database.
I really need to ensure the backup is done on the server side since the DB has more than 30 GB of data.
I need to use sqlplus (in a dedicated SSH session over a VPN).
It's not possible to use "flashback database"! It's off, and I can't stop the database.
Am I really in deep $#$%?
Any ideas how to backup the database using sqlplus and leaving the backup on db server?
Better than exp/imp, you should use RMAN. It's built specifically for this purpose; it can do hot backup/restore, and if you completely screw up, you're still OK.
One 'gotcha' is that you have to back up the $ORACLE_HOME directory too (in my experience), because you need that locally stored information to recover the control files.
A search for RMAN on Google gives some VERY good information on the first page.
An alternate approach might be to create a new schema that contains your modified structures and data and actually test with that. That presumes you have enough space on your DB server to hold all the test data. You really should have a pretty good idea that your changes are going to work before dumping them on a production environment.
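A minimal sketch of that idea, assuming placeholder schema names TESTUSER and PRODUSER and placeholder table names:

-- Sketch: copy the tables you plan to modify into a test schema and rehearse
-- the change script there first. Run as a user with access to both schemas.
CREATE TABLE testuser.orders    AS SELECT * FROM produser.orders;
CREATE TABLE testuser.customers AS SELECT * FROM produser.customers;

-- Then apply your change script against the copies, for example:
ALTER TABLE testuser.orders ADD (status_code VARCHAR2(10));
UPDATE testuser.orders SET status_code = 'LEGACY' WHERE status_code IS NULL;
COMMIT;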
I wouldn't use sqlplus to do this. Take a look at export/import. The export utility will grab the definitions and data for your database (it can be done in read-consistent mode). The import utility will read this file and create the database structures from it. However, access to these utilities does require permissions to be granted, particularly if you need to back up the whole database, not just a schema.
That said, it's somewhat troubling that you're expected to perform the tasks of a DBA (altering tables, backing up the database, etc.) without administrative rights. I would at least ask a DBA to oversee your approach before you start, if not insist that the DBA (or someone with appropriate privileges and knowledge) actually perform the modifications to the database and help recover if necessary.
Trying to back up 30 GB of data through sqlplus is insane. It will take several hours, require 3x to 5x as much disk space, and the result may not be possible to restore without more testing.
You need to use exp and imp. These are command-line tools designed to back up and restore the database. If you have access to sqlplus via your SSH session, you also have access to imp/exp. You don't need administrator access to use them. They will dump the database (with all tables, triggers, views, and procedures) for the user(s) you have access to.