I am trying to set up Snowflake and need some help. Data Engineers in the organisation use cloned DBs to get their work done, and they frequently drop those clones. Someone could accidentally delete the production DB, and I want to prevent that.
I want to make it impossible to delete the production database. At the same time, we have cloned databases which the users should be able to delete. I know about the UNDROP feature, but I don't want matters to escalate that far.
Data Engineers should be able to run all other queries on the production DB, like DROP TABLE / DROP VIEW etc. They test the code in a clone and, once satisfied, update the code in production. I want to restrict just the dropping of the production DB as a whole.
-- When running the below query, Snowflake should throw an error that the user doesn't have privileges to run it (this is the production DB). (Owner: sysadmin)
DROP DATABASE PRODUCTION_DB;
-- Run the below code without any issues (below is a clone of the production DB). (Owner: sysadmin)
DROP DATABASE PRODUCTION_DB_CLONE;
Role of the data engineers: sysadmin.
Only the database owner (i.e. the role with the OWNERSHIP privilege on the database) can drop the database.
Granting the required privileges to a different role (used by the data engineers) so they can perform every operation except those reserved for the database owner (the role holding the OWNERSHIP privilege) will avoid these accidents.
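A minimal sketch of that setup (the role name DATA_ENGINEER and the exact grant list are assumptions; adjust to your environment):

USE ROLE SECURITYADMIN;
CREATE ROLE IF NOT EXISTS DATA_ENGINEER;  -- hypothetical role for the data engineers
-- Let the engineers see and use the production database...
GRANT USAGE ON DATABASE PRODUCTION_DB TO ROLE DATA_ENGINEER;
-- ...and own the schemas (and hence the tables/views inside them), so they can
-- DROP TABLE / DROP VIEW freely. The database itself stays owned by SYSADMIN,
-- so DROP DATABASE PRODUCTION_DB fails for DATA_ENGINEER.
GRANT OWNERSHIP ON ALL SCHEMAS IN DATABASE PRODUCTION_DB TO ROLE DATA_ENGINEER COPY CURRENT GRANTS;
-- Clones the engineers create themselves are owned by their role, so they stay free to drop them.
GRANT CREATE DATABASE ON ACCOUNT TO ROLE DATA_ENGINEER;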
Related
I need our end-users to "clone" a database from our web application UI. I am able to "clone" a database by backing up a source database and restoring it into a new database. In this way, I "clone" the table schema and data.
My question is - is there any way I can "clone" just the table schema, without the data? I am aware that we can script the database manually, and run that script. But our table schema changes frequently (we add new columns and tables regularly) and we wouldn't want to update this script. Thank you.
basically you have to (sketched below):
back up the database
restore the backup to a new database (use the WITH MOVE option)
connect to the new database and then exec sys.sp_MSforeachtable 'truncate table ?'
PS: you may have to disable FK constraints if any.
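A rough T-SQL sketch of those steps (database names, logical file names, and paths are placeholders). One caveat: TRUNCATE fails on tables referenced by foreign keys even when the constraints are disabled, so plain DELETE is used here instead:

BACKUP DATABASE [SourceDb] TO DISK = 'C:\temp\SourceDb.bak' WITH COPY_ONLY;
RESTORE DATABASE [SchemaOnlyDb] FROM DISK = 'C:\temp\SourceDb.bak'
WITH MOVE 'SourceDb' TO 'C:\Sql\SchemaOnlyDb.mdf',
     MOVE 'SourceDb_log' TO 'C:\Sql\SchemaOnlyDb_log.ldf';
USE [SchemaOnlyDb];
EXEC sys.sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';  -- disable FK checks
EXEC sys.sp_MSforeachtable 'DELETE FROM ?';                         -- empty every table
EXEC sys.sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';  -- re-enable and re-validate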
Another solution: use Entity Framework Database First to build a small program. Launch said program with a connection string pointing to a new db.
Right-click the database -> Tasks -> Generate Scripts
On the Set Scripting Options page click Advanced
General -> Types of data to script = Schema Only
With SQL Server Data Tools you can generate the schema difference and apply only the changes to the target database.
I have MS Access as a front end and PostgreSQL as back end for my database. So I set up the database in PostgreSQL and linked the tables to MS Access using the ODBC drivers. Everything works great, I can update the tables in MS Access and the record will appear in Postgres database.
Since I can still see the linked tables in MS Access, I feel like it is possible for some users to go in and manually modify the tables without filling out the proper forms. Is it possible to hide the tables or lock them so that Access users cannot modify the raw data at all? If not, what can I do to secure the integrity of the database?
Thanks!
I would recommend looking at Postgres privileges as a way to lock the tables down.
In short, you could have your backend run as one user that has full access permissions on the tables in question, and when the users login to the app, they would be connected to Postgres using a user whose privileges are considerably more locked down (say, read only if you just want to be able to do SELECTs to surface data).
For example, you could run the following SQL against your Postgres server:
REVOKE ALL ON accounts FROM joe;
GRANT SELECT ON accounts TO joe;
This first removes all privileges from the user joe for the table accounts, and then allows only SELECT privileges for that table.
You could do something similar for all the tables you wish to lock down. You'll also need to do the same for the sequences used by those tables.
You may wish to create a special readonly user which has only read access across the board, and use those credentials to surface the Postgres data for the users to access.
When you need to alter data, your backend could specifically use a power user of sorts which has much greater access.
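A sketch of such a readonly user (role name, database name, and password are placeholders; this assumes the tables live in the public schema):

CREATE ROLE report_reader LOGIN PASSWORD 'change_me';  -- readonly login
GRANT CONNECT ON DATABASE mydb TO report_reader;
GRANT USAGE ON SCHEMA public TO report_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO report_reader;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO report_reader;  -- sequences too, as noted above
-- Cover tables created later as well:
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO report_reader;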
Here's a link which details creating a readonly Postgres user (for purposes of backups in this case, but the general concept and the SQL commands should apply; just ignore the stuff about pg_dump).
If you aren't concerned about users' ability to modify the data in those tables via the app other than in the ways that are authorized, but are only concerned about them using, say, psql to go in and update them directly, then you probably don't need a readonly user; you can simply lock the tables down and have the backend use a user with sufficient access.
I have always worried about users deleting lotus notes databases by accident. We had one such case last week and I want to know how we can lock the databases so that users cannot delete them.
Is there a way in Lotus Notes - either at the DB level or the server level - to set up the database so that only the administrator of the DB, or a user with Manager access, can delete the database?
A user can only delete a server-based database if the user has Manager rights to the database. Just confirm that your users do not have Manager rights (use the "Effective Access" button on the ACL dialog window to check this).
Good practice is to have a separate ID for such destructive operations (deletions, user removals, and so on): if you want to delete something, switch to that other ID and then delete; the rest of the time you work as a regular user, without Manager access. :)
Dumb question - what's the best way to copy instances in an environment where I want to refresh a development server with instances from a production server?
I've done backup-restore, but I've heard detach-copy-attach and one guy even told me he would just copy the datafiles between the filesystems....
Are these the three (or two, the last one sounds kind of suspect) accepted methods?
My understanding is that the second method is faster but requires downtime on the source because of the detach aspect.
Also, in this situation (wanting an exact copy of production on a dev server), what's the accepted practice for transferring logins, etc.? Should I just backup and restore the user databases + master + msdb?
Easiest way is actually a script.
Run this on production:
USE MASTER;
BACKUP DATABASE [MyDatabase]
TO DISK = 'C:\temp\MyDatabase1.bak' -- some writeable folder.
WITH COPY_ONLY
This one command makes a complete backup copy of the database onto a single file, without interfering with production availability or backup schedule, etc.
To restore, just run this on your dev or test SQL Server:
USE MASTER;
RESTORE DATABASE [MyDatabase]
FROM DISK = 'C:\temp\MyDatabase1.bak'
WITH
MOVE 'MyDatabase' TO 'C:\Sql\MyDatabase.mdf', -- or wherever these live on target
MOVE 'MyDatabase_log' TO 'C:\Sql\MyDatabase_log.ldf',
REPLACE, RECOVERY
Then save these scripts on each server. One-click convenience.
Edit:
If you get an error when restoring that the logical names don't match, you can get them like this:
RESTORE FILELISTONLY
FROM disk = 'C:\temp\MyDatabaseName1.bak'
If you use SQL Server logins (not Windows authentication) you can run this after restoring each time (on the dev/test machine):
use MyDatabaseName;
sp_change_users_login 'Auto_Fix', 'userloginname', null, 'userpassword';
The fastest way to copy a database is the detach-copy-attach method, but production users will not have database access while the prod DB is detached. You can do something like this if your production DB is, for example, a Point of Sale system that nobody uses during the night.
If you cannot detach the production DB you should use backup and restore.
You will have to create the logins if they are not in the new instance. I do not recommend copying the system databases.
You can use SQL Server Management Studio to create the scripts that create the logins you need. Right-click the login you need to create and select Script Login As / Create.
This lists the orphaned users:
EXEC sp_change_users_login 'Report'
If you already have a login id and password for this user, fix it by doing:
EXEC sp_change_users_login 'Auto_Fix', 'user'
If you want to create a new login id and password for this user, fix it by doing:
EXEC sp_change_users_login 'Auto_Fix', 'user', 'login', 'password'
UPDATE:
My advice below tells you how to script a DB using SQL Server Management Studio, but the default settings in SSMS miss out all sorts of crucial parts of a database (like indexes and triggers!) for some reason. So, I created my own program to properly script a database including just about every type of DB object you may have added. I recommend using this instead. It's called SQL Server Scripter and it can be found here:
https://bitbucket.org/jez9999/sqlserverscripter
I'm surprised no one has mentioned this, because it's really useful: you can dump out a database (its schema and data) to a script, using SQL Server Management Studio.
Right-click the database, choose "Tasks | Generate Scripts...", and then select to script specific database objects. Select the ones you want to copy over to the new DB (you probably want at least the Tables and Schemas). Then, on the "Set Scripting Options" screen, click "Advanced", scroll down to "Types of data to script" and select "Schema and data". Click OK and finish generating the script. This produces a long script that creates the database's tables and inserts the data into them. You can then create a new database, change the USE [DbName] statement at the top of the script to the name of the new database, and run the script; the old database's schema and data will be copied into the new one.
This allows you to do the whole thing from within SQL Server Management studio, and there's no need to touch the file system.
Below is what I do to copy a database from production env to my local env:
Create an empty database in your local sql server
Right click on the new database -> tasks -> import data
In the SQL Server Import and Export Wizard, select the production server's name as the data source, and select your new database as the destination.
It's hard to detach your production DB or other running DBs and deal with that downtime, so I almost always use the backup/restore method.
If you also want to keep your logins in sync, check out the MS KB article on using the stored proc sp_help_revlogin to do this.
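For what it's worth, sp_help_revlogin is not built in: you create it (together with its helper sp_hexadecimal) from the script in that KB article, then use it roughly like this:

-- On the source server, after creating the procs from the KB script:
EXEC sp_help_revlogin;
-- The output is a set of CREATE LOGIN statements carrying the hashed
-- passwords and original SIDs; run that output on the destination server.
-- Preserving the SIDs keeps restored database users from becoming orphaned.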
The detach/copy/attach method will take down the database. That's not something you'd want in production.
The backup/restore will only work if you have write permissions to the production server. I work with Amazon RDS and I don't.
The import/export method doesn't really work because of foreign keys - unless you do tables one by one in the order they reference one another. You can do an import/export to a new database. That will copy all the tables and data, but not the foreign keys.
This sounds like a common operation one needs to do with database. Why isn't SQL Server handling this properly? Every time I had to do this it was frustrating.
That being said, the only painless solution I've encountered was the SQL Azure Migration Tool, which is maintained by the community. It works with SQL Server too.
I run an SP to DROP the table(s) and then use a DTS package to import the most recent production table(s) onto my development box.
Then I go home and come back the following morning. It's not elegant, but it works for me.
If you want to take a copy of a live database, do the Backup/Restore method.
[In SQLS2000, not sure about 2008:] Just keep in mind that if you are using SQL Server accounts in this database (as opposed to Windows accounts), the user accounts will not translate when you do the restore if the master DB is different or out of sync on the development server. I've heard about an SP to remap them (probably sp_change_users_login, shown above), but I can't remember which one it was.
In SQL Server 2005, a snapshot of a database can be created that allows read-only access to a database, even when the database is in "recovery pending" mode. One use case for this capability is in creating a reporting database that references a copy of a production database, which is kept current through log-shipping.
In this scenario, how can I implement security on the "snapshot" database that is different from the "production" source database?
For example, in the production database all access to data is through stored procedures, while in the snapshot database users are allowed to select from tables for reporting purposes. The problem I see is that security for the snapshot database is inherited from the source database and cannot be changed, because snapshots are strictly read-only.
Are you able to manage permissions on this database? Would adding a separate user who only has read access to a database be sufficient for this type of scenario? This could be a read-only user on the main database, but is only effectively used on the snapshot db.
i.e. add a new user, readerMan5000, who is given only SELECT access to the database in question. Then require users to authenticate through that new credential.
Note to future commenters, you may want to read:
http://www.simple-talk.com/sql/database-administration/sql-server-2005-snapshots/
or
http://msdn.microsoft.com/en-us/library/ms187054(SQL.90).aspx
before you open your big mouth like me. :)
You can't change permissions after you take the snapshot, but here's one workaround: instead of having them access the tables directly, require them to use views instead. If the views are used only for reporting, then you can set tight security on them in the original database, and then have the users hit those views in the snapshot. You'll need to restrict access on the underlying tables though if you want it to be effective.
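A sketch of that view-based setup, run in the source database before the snapshot is created (all object, role, and column names here are placeholders). Because the snapshot inherits the source's permissions, the grants carry over:

USE ProductionDb;
GO
CREATE ROLE ReportingRole;  -- role the reporting users are members of
GO
CREATE VIEW dbo.vCustomerSummary AS
SELECT CustomerId, Name, Region FROM dbo.Customers;
GO
GRANT SELECT ON dbo.vCustomerSummary TO ReportingRole;  -- reporting goes through the view
DENY SELECT ON dbo.Customers TO ReportingRole;          -- not the underlying table
GO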