An application vendor representative asked me to grant db_owner access to tempdb for their application login, so it can create objects in tempdb without the "#" or "##" prefixes.
I tried to talk him out of asking for direct tempdb access by arguing that it may interfere with the SQL Server engine's internal operations and prevent the tempdb cleanup processes from doing their jobs correctly. There is also another drawback: tempdb is recreated on every SQL Server service restart, so any permission settings made in it revert to the defaults.
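For context, the request amounts to something like the following (the login name here is made up), and since tempdb is rebuilt from model at every service restart, the user and its role membership would silently disappear afterwards:

    USE tempdb;
    GO
    -- hypothetical application login
    CREATE USER AppVendorLogin FOR LOGIN AppVendorLogin;
    ALTER ROLE db_owner ADD MEMBER AppVendorLogin;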
Is there anything I might be missing in this regard?
With SQL Server on Windows there are three user access mode settings per database:
Multi_User
Single_User
Restricted_User
My question is: when exactly do you put a database in "Single_User" or "Restricted_User" mode?
For example, if you want to update SQL Server and thus prevent further sessions from being established for the duration of the update?
Typically, Restricted_User and Single_User are used when doing maintenance that needs to be done while the applications are offline but you still need access to the data or schema.
Examples
Data migrations: migrations spanning multiple hours/days and touching multiple tables/databases/files are often done while the system is offline to minimize locking/blocking.
Hardware migrations: typically, when moving to new hardware, the database is also put in restricted mode before the service is turned off, so you can take final full backups, take the database offline, ...
Recovery: when your database is corrupted and you are restoring logs and running DBCC CHECKDB (although this is mostly done in a separate environment).
...
So basically, these modes are used when the DBAs/developers want to make sure nobody else can access the database, but they still need to be able to perform their tasks.
In an enterprise environment this is often a fail-safe, as access to the database will probably already be limited by firewalls and user policies while one of these tasks is being done.
Patching SQL Server or the OS is done while the service is stopped, as OS patches often require reboots and SQL Server patches require service restarts. In a clustered environment this is done one node at a time to maintain uptime. So restricted access is not used in these cases, as SQL Server is offline anyway.
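As a rough sketch of how these modes are switched (the database name is just an example); WITH ROLLBACK IMMEDIATE rolls back open transactions and disconnects their sessions instead of waiting for them:

    -- only db_owner members and sysadmin/dbcreator logins can connect
    ALTER DATABASE SalesDb SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;

    -- or allow just one connection at a time
    ALTER DATABASE SalesDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

    -- ... do the maintenance work ...

    -- back to normal operation
    ALTER DATABASE SalesDb SET MULTI_USER;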
We have a big system running with thousands of users (some from Android apps, others from the web app, etc.).
The system is distributed, with databases in two locations (within the same country). In one location there are 5 servers in the same network, and each one has a copy of the database (via replication).
Among the software developers, a few have direct access to the production databases. Sometimes, when users request support for operations that are not possible from the system itself, the developers/support team have to access the database directly and modify some records.
We know this is not the ideal way of working, but it has been like this for years.
Recently we have run into a few problems. One day, someone updated hundreds of records in a table by mistake.
Since then we have been analyzing how to improve this access.
We are looking for some way of improving security. We would like to have a two-phase authentication system in place, something that asks the user for two passwords when accessing from SQL Server Management Studio...
Is that possible? Or is there any other approach we can use to improve the security but still allow devs/support team to access the production database when necessary?
Users also (currently) have access via Remote Desktop to all servers.
At least we would like to KNOW when this access is being done.
Make access to PROD read-only for those users. Allow them to write their scripts and then submit them for review at a minimum, and for testing if possible, like any other deployable. Then follow your standard deployment process with someone who does have write access.
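A minimal sketch of what read-only could look like, assuming a database user named SupportUser (the name is made up):

    -- allow reads on everything in the database
    ALTER ROLE db_datareader ADD MEMBER SupportUser;

    -- explicitly block data changes in the dbo schema
    DENY INSERT, UPDATE, DELETE ON SCHEMA::dbo TO SupportUser;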
If my other answer isn't workable and these updates are always the same kinds of fixes, you could create support stored procedures to do the fixes and only grant permission on those procedures. But this is highly dependent on how common the fixes are, and it's less preferable than my other answer.
I haven't used it myself but EXECUTE AS might let you give the users read-only permission while the procs would execute under credentials with higher access.
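A sketch of the idea, with made-up table, procedure, and user names; because the procedure runs as its owner, callers only need EXECUTE on it, not UPDATE on the underlying table:

    CREATE PROCEDURE dbo.usp_FixOrderStatus
        @OrderID   int,
        @NewStatus tinyint
    WITH EXECUTE AS OWNER   -- runs under the owner's (dbo's) permissions
    AS
    BEGIN
        SET NOCOUNT ON;
        UPDATE dbo.Orders            -- hypothetical table
        SET    Status  = @NewStatus
        WHERE  OrderID = @OrderID;
    END;
    GO

    -- support staff get EXECUTE only, no direct table access
    GRANT EXECUTE ON dbo.usp_FixOrderStatus TO SupportUser;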
How can I force remote client applications to call my stored procedure?
I want to deny the direct execution of SQL statements from remote clients.
Can I do that?
Is making the stored procedures do the entire job a bad choice from a performance perspective and a security perspective?
Thanks
Can I do that?
Sure. You create a user/login for the application to use and assign it to just the public role on the database. Then grant it execute permissions on the required stored procedures. The application generally doesn't need access to the underlying objects the stored procedure calls.
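Something along these lines, where all the names and the password are placeholders:

    CREATE LOGIN AppLogin WITH PASSWORD = 'StrongPasswordHere!';
    GO
    USE SalesDb;   -- hypothetical application database
    GO
    -- the new user is a member of public only by default
    CREATE USER AppLogin FOR LOGIN AppLogin;

    -- grant execute on individual procedures...
    GRANT EXECUTE ON OBJECT::dbo.usp_GetCustomer TO AppLogin;

    -- ...or on a whole schema that holds the application's procedures
    GRANT EXECUTE ON SCHEMA::dbo TO AppLogin;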
Is making the stored procedures do the entire job a bad choice from a performance perspective and a security perspective?
Performance will depend on the stored procedures and overall database design. There's no general reason performance couldn't be perfectly acceptable, however.
Security-wise, you only grant access to what you want the user/login to access and that's good. Again, there's no general reason security wouldn't be perfectly acceptable.
I have a database in a data warehouse environment that loads data with an ETL process.
During the ETL process, I wish to make the database unavailable for querying by certain roles.
What would be a possible solution?
I think the easiest answer would be to REVOKE permissions from those roles at the start of the ETL process and reverse it at the end (or on failure).
One option would be to create a stored procedure that modifies the permissions of the roles and then drops those users' connections; following the data load, you reset the permissions.
An alternative to this is to run your ETL process when no one is using the system...
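A minimal sketch of that permission toggle, assuming a role named reporting_users and objects in the dbo schema (both made up); dropping connections that are already open would be a separate step:

    -- before the load: take read access away from the role
    REVOKE SELECT ON SCHEMA::dbo FROM reporting_users;

    -- ... run the ETL ...

    -- after the load (and in the failure handler): give it back
    GRANT SELECT ON SCHEMA::dbo TO reporting_users;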
I need to make some structural changes to a database (alter tables, add new columns, change some rows, etc.), but I need to make sure that if something goes wrong I can roll back to the initial state:
All needed changes are inside a SQL script file.
I don't have administrative access to database.
I really need to ensure the backup is done on the server side, since the DB has more than 30 GB of data.
I need to use sqlplus (in a dedicated SSH session over a VPN).
It's not possible to use "flashback database"! It's turned off and I can't stop the database.
Am I in really deep $#$%?
Any ideas on how to back up the database using sqlplus, leaving the backup on the DB server?
Rather than exp/imp, you should use RMAN. It's built specifically for this purpose: it can do hot backup/restore, and if you completely screw up, you're still OK.
One 'gotcha' is that you have to back up the $ORACLE_HOME directory too (in my experience), because you need that locally stored information to recover the control files.
A search for RMAN on Google gives some VERY good information on the first page.
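As a rough sketch (this assumes you can get SYSDBA access on the server and that the database runs in ARCHIVELOG mode, which hot backups require; the timestamp below is a placeholder):

    # connect to the local instance as SYSDBA via OS authentication
    rman target /

    # hot backup of the datafiles plus the archived redo logs
    RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
    RMAN> LIST BACKUP SUMMARY;

    # if the change goes wrong: point-in-time recovery to just before it ran
    RMAN> SHUTDOWN IMMEDIATE;
    RMAN> STARTUP MOUNT;
    RMAN> RESTORE DATABASE UNTIL TIME "to_date('2012-06-01 09:00','yyyy-mm-dd hh24:mi')";
    RMAN> RECOVER DATABASE UNTIL TIME "to_date('2012-06-01 09:00','yyyy-mm-dd hh24:mi')";
    RMAN> SQL 'ALTER DATABASE OPEN RESETLOGS';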
An alternate approach might be to create a new schema that contains your modified structures and data and actually test with that. That presumes you have enough space on your DB server to hold all the test data. You really should have a pretty good idea your changes are going to work before dumping them on a production environment.
I wouldn't use sqlplus to do this. Take a look at export/import. The export utility will grab the definitions and data for your database (it can be done in read-consistent mode). The import utility will read this file and create the database structures from it. However, access to these utilities does require permissions to be granted, particularly if you need to back up the whole database and not just a schema.
That said, it's somewhat troubling that you're expected to perform the tasks of a DBA (alter tables, backup database, etc) without administrative rights. I think I would be at least asking for the help of a DBA to oversee your approach before you start, if not insisting that the DBA (or someone with appropriate privileges and knowledge) actually perform the modifications to the database and help recover if necessary.
Trying to back up 30 GB of data through sqlplus is insane. It will take several hours, require 3x to 5x as much disk space, and may not be possible to restore without more testing.
You need to use exp and imp. These are command-line tools designed to back up and restore a database; if you have access to sqlplus over your SSH session, you have access to exp/imp as well. You don't need administrator access to use them. They will dump the database (with all tables, triggers, views, and procedures) for the user(s) you have access to.
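For example (the connection details, paths, and schema name are placeholders, and exp only exports the schemas your account can read):

    # export one schema to a dump file on the database server
    exp scott/password@ORCL owner=scott file=/backup/scott.dmp log=/backup/scott_exp.log consistent=y

    # later, to restore the dumped objects (existing objects are not replaced,
    # so you may need to drop the broken ones first)
    imp scott/password@ORCL file=/backup/scott.dmp log=/backup/scott_imp.log full=y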