I'm working on a short-term disaster recovery planning effort at my company, and we're planning on using replicated reporting servers as warm spares in case our primary transaction server dies.
Our web application can write to that transaction server, but only with the rights granted by a SQL role (webapplication). I want a way of updating this role that also updates the same role as it exists on the other servers. That way, if we fail over to another server, our webapplication role is reasonably close to the same, if not exactly the same. (I'm not really worried about someone updating it directly.)
So, I have a MakeWebWriteable procedure that should generate and execute some code like what is below. Except, what's below clearly won't work. I'm at a loss for how to reference the role and update it on a remote server. I thought about using exec (@sql) at [reporting\server], but I'm not sure how I would reference a certain database's role object within that.
grant insert, update, delete on dbo.TableName to webapplication
grant insert, update, delete on [reporting\server].DBName.dbo.TableName to [reporting\server].DBName.dbo.webapplication
How might I do this, or are there any better ideas? (i.e. replication)
edit 1: We generally write migrations as SQL scripts, commit those to SVN, and have our databases updated with a syncing script - sort of like the process in RoR, only without a model->SQL translation. Ideally, if we want a new table to be web-writeable, we would just put a line like the one below at the end of the migration that adds it.
-- Code to create NewTable...
if object_id('SetWebWriteable') is not null
exec SetWebWriteable @tableName = 'NewTable'
This way, nothing happens on our developer machines, but in our test and production environments, the correct actions occur. If the role can be replicated automatically, then naturally we wouldn't need to do this.
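For illustration, here is the general shape I had in mind for SetWebWriteable, with the remote half done via exec (@sql) at the linked server. This is only a rough sketch: it assumes [reporting\server] is set up as a linked server with RPC Out enabled, and that DBName is the database name on the reporting side.
create procedure SetWebWriteable
    @tableName sysname
as
begin
    declare @sql nvarchar(max)

    -- local server: grant to the role directly
    set @sql = N'grant insert, update, delete on dbo.' + quotename(@tableName)
             + N' to webapplication;'
    exec (@sql)

    -- reporting server: run the same grant inside the remote database so the
    -- role reference resolves there (requires the linked server to allow RPC Out)
    set @sql = N'use DBName; grant insert, update, delete on dbo.'
             + quotename(@tableName) + N' to webapplication;'
    exec (@sql) at [reporting\server]
end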
From what you are describing, you need the SQLCMD utility. Connect to any server using credentials it accepts (I usually use -E, which is Windows authentication) and execute whatever scripts you need. If you are not trusted there, create an account with sufficient rights.
The question is: what triggers this script to execute? Do you run it manually, or should it fire automatically? In the latter case you are bound to write some code to achieve that (not very difficult, though).
You can also leverage the PowerShell support built into SQL Server 2008, which can operate on objects on ANY server you can connect to (with your credentials). Look into that. It has a cmdlet, Invoke-Sqlcmd, which helps a TON.
luke
In my .NET WinForms application (in a Visual Studio solution), I have the tables and stored procedures in a SQL Server project within the solution so I can easily keep my schema under version control, and I can successfully use the 'Publish' feature to deploy schema changes to the development database.
I'm getting ready to deploy my application and have asked a user to trial the application on their PC against the development database prior to rolling out the new application and database schema changes to production company wide.
What I'm finding is that the application is throwing SqlException. I've managed to track this down to permissions on the new stored procedures (obviously, I don't have this issue as the owner of the stored procedures).
I can manually correct this by granting permissions on the stored procedure(s), as follows
GRANT EXECUTE ON [dbo].[<procedurename(s)>] TO DatabaseUsers
...but what I'd ideally like to do is include this within the definition of the stored procedure(s) in the SQL Server project that's under version control.
I've tried adding the above statement to the end of the stored procedure definition (below) in the SQL Server project. The output from the deployment script seems to show the command being executed; however, while it updates the stored procedure, it won't touch the permissions.
-- Snipped 50 lines above for brevity
OR c.name LIKE @search
OR CAST(it.id AS VARCHAR) LIKE @search
OR ig.name LIKE @search
ORDER BY it.id
END
GRANT EXECUTE ON [dbo].[search_items_allfields] TO DatabaseUsers
GO
I've also tried adding an additional GO before the GRANT statement in the above definition, but then I'm unable to use the publish script: it refuses to run because it can't resolve the group 'DatabaseUsers' (without the GO it still can't resolve the group, but it's happy enough to run anyway).
In addition to the GO before the GRANT (so it's not part of the procedure), you need to add a script for the role to your project to resolve the reference:
CREATE ROLE DatabaseUsers;
GO
Of course, you'll need to add role members too. I suggest you manage role memberships separately rather than as part of the SSDT project, since those will vary by environment and many organizations have a separate process for managing database access security.
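If you do end up scripting memberships for a given environment (for example in a post-deployment script), a minimal sketch might look like the following. [DOMAIN\AppUsers] is just a placeholder, and ALTER ROLE ... ADD MEMBER needs SQL Server 2012 or later (use sp_addrolemember on older versions):
-- add the member only if it isn't already in the role
IF NOT EXISTS (SELECT 1
               FROM sys.database_role_members drm
               JOIN sys.database_principals r ON r.principal_id = drm.role_principal_id
               JOIN sys.database_principals m ON m.principal_id = drm.member_principal_id
               WHERE r.name = N'DatabaseUsers'
                 AND m.name = N'DOMAIN\AppUsers')
BEGIN
    ALTER ROLE DatabaseUsers ADD MEMBER [DOMAIN\AppUsers];
END
GO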
Variations of this have been asked before. I have no problem searching a local directory with the piece of code below.
EXEC MASTER.sys.xp_dirtree 'C:\', 1, 1
When I switch the path to a network location the results are empty.
EXEC MASTER.sys.xp_dirtree '\\Server\Folder', 1, 1
I first thought maybe it was something to do with permissions, so I added the SQL Server service account to the ACL on the shared volume, as well as the security group.
Any help or direction is greatly appreciated, as is any other way to get a list of files in a directory and its subdirectories.
[Edited]
The two things to look out for are:
Make certain that the Log On account for the SQL Server service (the service typically listed as "SQL Server (MSSQLSERVER)" in the Services list) has rights to that network share.
UPDATE
The problem ended up being that the O.P. was running the SQL Server service as a local system account. So, the O.P. created a domain account for SQL Server, assigned that new domain account as the Log On As account for the SQL Server service, and granted that domain account the proper NTFS permissions.
Please note that this might have also been fixable while keeping the SQL Service running as a local system account by adding the server itself that SQL Server is running on to the NTFS permissions. This should usually be possible by specifying the server name followed by a dollar sign ($). For example: MySqlServer01$. Of course, this then gives that NTFS permission to all services on that server that are running as a local system account, and this might not be desirable. Hence, it is still preferable to create a domain account for the SQL Server service to run as (which is a good practice in any case!).
It sounds like this has been done, so it should be tested by logging onto Windows directly as that account and attempting to browse to that specific network path.
Make sure that the Login in SQL Server that is executing xp_dirtree has "sysadmin" rights:
This can be done directly by adding the account to the sysadmin server role, or
Sign a stored procedure that runs xp_dirtree (a rough sketch of these steps follows this list):
Create a certificate in [master]
Create a login based on that certificate
Add the certificate-based login to the sysadmin server role
Backup the certificate
Restore the certificate into whatever database has, or will have, the stored procedure that runs xp_dirtree
Sign the stored procedure that runs xp_dirtree, using ADD SIGNATURE and the certificate that was just restored
GRANT EXECUTE on that stored procedure to the user(s) and/or role(s) that should be executing this.
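A rough T-SQL sketch of those signing steps is below. All names, file paths, and passwords are placeholders, and ALTER SERVER ROLE needs SQL Server 2012 or later (use sp_addsrvrolemember on older versions):
USE [master];
GO
-- 1 & 2: certificate in master and a login based on it
CREATE CERTIFICATE DirTreeSigningCert
    ENCRYPTION BY PASSWORD = 'Str0ngP@ssword!'
    WITH SUBJECT = 'Signs procedures that call xp_dirtree';
GO
CREATE LOGIN DirTreeSigningLogin FROM CERTIFICATE DirTreeSigningCert;
-- 3: give the certificate-based login sysadmin rights
ALTER SERVER ROLE sysadmin ADD MEMBER DirTreeSigningLogin;
GO
-- 4: back up the certificate and its private key
BACKUP CERTIFICATE DirTreeSigningCert
    TO FILE = 'C:\Temp\DirTreeSigningCert.cer'
    WITH PRIVATE KEY (FILE = 'C:\Temp\DirTreeSigningCert.pvk',
                      ENCRYPTION BY PASSWORD = 'Str0ngP@ssword!',
                      DECRYPTION BY PASSWORD = 'Str0ngP@ssword!');
GO
-- 5: restore it into the database that holds (or will hold) the procedure
USE [YourUserDB];
GO
CREATE CERTIFICATE DirTreeSigningCert
    FROM FILE = 'C:\Temp\DirTreeSigningCert.cer'
    WITH PRIVATE KEY (FILE = 'C:\Temp\DirTreeSigningCert.pvk',
                      DECRYPTION BY PASSWORD = 'Str0ngP@ssword!',
                      ENCRYPTION BY PASSWORD = 'Str0ngP@ssword!');
GO
-- the procedure that wraps xp_dirtree
CREATE PROCEDURE dbo.ListNetworkShare
AS
BEGIN
    SET NOCOUNT ON;
    EXEC master.sys.xp_dirtree N'\\Server\Folder', 1, 1;
END;
GO
-- 6: sign it with the restored certificate
ADD SIGNATURE TO dbo.ListNetworkShare
    BY CERTIFICATE DirTreeSigningCert
    WITH PASSWORD = 'Str0ngP@ssword!';
GO
-- 7: let the intended principals run it
GRANT EXECUTE ON dbo.ListNetworkShare TO [SomeUserOrRole];
Callers then get the certificate login's server-level rights only while executing the signed procedure, without being sysadmin themselves.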
Just to have it stated, another option is to do away with xp_dirtree altogether and instead use SQLCLR. There is probably sample C# code on various blogs. There are also a few CodePlex projects that have file system functions and might also provide a pre-compiled assembly for those that don't want to deal with compiling. And, there is also the SQL# library that has several filesystem functions including File_GetDirectoryListing which is a TVF (meaning: you can use it in a SELECT statement with a WHERE condition rather than needing to dump all columns and all rows into a temp table first). It is also fully-streamed which means it is very fast, even for 100k or more files. Please note that the FILE_* functions are only in the Full version (i.e. not free) and I am the creator of SQL#, but it does handle this situation quite nicely.
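For comparison, the temp-table pattern that xp_dirtree otherwise requires looks roughly like this (a sketch; the share path is the one from the question and the temp-table column names are arbitrary):
-- capture the output so it can be filtered with a WHERE clause
CREATE TABLE #DirTree (subdirectory nvarchar(260), depth int, isfile bit);

INSERT INTO #DirTree (subdirectory, depth, isfile)
EXEC master.sys.xp_dirtree '\\Server\Folder', 1, 1;

-- only the rows flagged as files
SELECT subdirectory
FROM #DirTree
WHERE isfile = 1;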
I am looking for an easy way to create logins and associated users in SQL Azure.
The thing is that, with Azure, one first needs to create a login in the master database.
Based on that login I need a user to be created in a specific database (DATABASE1)
After that roles need to be assigned:
CREATE LOGIN login1 WITH password='<ProvidePassword>';
CREATE USER login1User FROM LOGIN login1;
EXEC sp_addrolemember 'dbmanager', 'login1User';
EXEC sp_addrolemember 'somerole', 'login1User';
The thing is, since one cannot use the USE command to switch databases, this seems to become quite a tedious task, more so because the number of accounts per database can range from ten to a few hundred, and databases and users are being added all the time.
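To make that concrete, the statements have to be split across two separate connections, roughly like this (the role name here is just an example):
-- connection 1: opened against the master database of the Azure SQL server
CREATE LOGIN login1 WITH password='<ProvidePassword>';

-- connection 2: a new connection opened directly against DATABASE1
CREATE USER login1User FROM LOGIN login1;
EXEC sp_addrolemember 'db_datareader', 'login1User';  -- example role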
So I need a solution that can be easily reused.
I would like to have a script of some sort (PowerShell?) that will read a file (containing username, password and database name) and then create the appropriate logins, users and rights.
Icing on the cake would be some sort of job that would regularly check whether there are (new) files present in a certain folder and if so, read those files and create new accounts where needed.
I must admit that I have TSQL knowledge and basic programming knowledge, but no PowerShell experience at all. How would you advise I go about this? Is PowerShell the way to go? Or are there other mechanisms I could use?
Greetings, Henro
As described here, you certainly can use PowerShell from your desktop machine to connect to SQL Database and manage SQL Database accounts in much the same way you would connect to any other SQL Server.
PowerShell scripts can run on an on-premises computer and connect to SQL Database using SQL Server Management Objects (SMO) or the Data-tier Applications framework.
The very first step in this direction is to get your PowerShell commands connecting to SQL Database, and you can use this article to get that far.
After that you just need a PowerShell script to create the user logins. Searching quickly, I found this article and this one promising; they include a few more capabilities alongside your objective. You may need to tweak the scripts to make them work with SQL Database.
Finally, you can search the internet for how to read data from a file (or better, XML) and feed the user info to your SQL Database script. If you have an issue at any step in between, open a specific question and you will be helped.
I have an MS Access 2003 mdb and mdw connected to a SQL Server backend. The tables are linked using a system DSN. I have a trigger on a SQL back end table which inserts a record into another back end audit table on insert, update, and delete. This all works well, but the trigger uses system_user to get the person making the change, so the audit table just records the username the DSN is set up to use when that change is made through the linked Access table. If the DSN is set to use the generic SQL username 'foo' and the MDW is using the user-specific name 'bar', the audit table on the backend is recording all changes by all users as the user 'foo'. The users are logging in to the mdb with an mdw file, and I'd like to record the username from the mdw in the SQL backend. Is this at all possible?
From Access VBA you can use the CurrentUser() function to return the MDW user name. You need to find a way to tell SQL Server about that name. If you're building and submitting the DML statements from Access, you could add the CurrentUser value as a field expression.
I'm curious about using both Access user level security and SQL Server authentication. At first blush it sounds like a "belt and suspenders" approach ... except that SQL Server can be a very effective belt, while Access user level security is a comparatively ineffective set of suspenders. I would question what benefit ULS adds to your application.
Consider discarding ULS and switching to Windows Authentication for SQL server. That could be a simpler, cleaner, and more secure approach.
I bet @@SPID in the trigger works because it is executed by the process doing the DML.
Just be aware that this may not always be reliable because sometimes Access opens additional connections without having any way of running your special code to log the user against the spid in use.
Update
Have you considered using the CONTEXT_INFO variable that is specific to each SQL Server session?
DECLARE @Info varbinary(30)
SET @Info = Convert(varbinary(30), 'My Username')
SET CONTEXT_INFO @Info
SELECT Left(Convert(varchar(30), CONTEXT_INFO()), CharIndex(0x0, CONTEXT_INFO()) - 1)
This may mean hitting a table behind the scenes anyway, but it's surely going to be faster than doing it yourself.
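For example, an audit trigger could pull the name back out of CONTEXT_INFO and fall back to SYSTEM_USER when it was never set. This is only a sketch: dbo.SomeTable, dbo.AuditLog and its columns are hypothetical, and the parsing simply mirrors the expression above.
CREATE TRIGGER dbo.trg_SomeTable_Audit
ON dbo.SomeTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- prefer the name the Access client stashed in CONTEXT_INFO;
    -- fall back to SYSTEM_USER (the DSN login) when it was never set
    DECLARE @WhoChanged varchar(30);
    SET @WhoChanged = CASE
        WHEN CONTEXT_INFO() IS NULL THEN SYSTEM_USER
        ELSE Left(Convert(varchar(30), CONTEXT_INFO()),
                  CharIndex(0x0, CONTEXT_INFO()) - 1)
    END;

    INSERT INTO dbo.AuditLog (TableName, ChangedBy, ChangedAt)
    VALUES ('SomeTable', @WhoChanged, GETDATE());
END;
On the Access side, the SET CONTEXT_INFO batch would be run once per connection (for example via a pass-through query), with the CurrentUser() value substituted for 'My Username'.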
We use merge replication in one of our programs and I would like to allow our users to force synchronization of their laptops with the publisher on an as-needed basis (we are using push subscriptions). I got this working using REPLMERG.EXE (see my previous question).
However, when the users tried to run the script, they received the following error message:
Only members of the sysadmin or db_owner roles can perform this operation.
...
exec sp_MSreplcheck_subscribe
...
If I add the users' group login as a db_owner on their local subscription database then the script works correctly. The problem is that they also end up with full access to every table in their local database, which is not something we can live with.
Allowing users in a merge replication topology to synchronize their local push subscriptions on-demand without giving them full-blown control of the db seems like a pretty straightforward use case, but I can't get it working.
From Replication Agent Security Model:
Merge Agent for a pull subscription
The Windows account under which the agent runs is used when it makes connections to the Subscriber. This account must at minimum be a member of the db_owner fixed database role in the subscription database.
The account that is used to connect to the Publisher and Distributor must:
Be a member of the PAL.
Be a login associated with a user in the publication database.
Be a login associated with a user in the distribution database. The user can be the Guest user.
Have read permissions on the snapshot share.
Therefore it is a documented requirement of merge replication that the account running the replication agent (replmerg.exe) be a member of db_owner. If this does not work for your situation, then merge replication is not the right technology to use, since it has a requirement you cannot fulfill.
Now, in theory an application can do whatever REPLMERG.EXE does itself, and you can leverage the power of code signing to run a set of wrapper procedures that are granted dbo privileges via the signature, thus not needing the elevated login. But that's just theory, since the replication procedures are not exactly easy to use, nor are they documented at the level one would need to re-implement the agents...
The subscriber must have the right to apply data definition instructions sent from the publisher. Some of these instructions might even lead to reinitialisation of the subscriber, which requires the right to drop and recreate the corresponding database. Under these conditions, the security requirements as set by Microsoft sound quite sensible.
As Remus and Philippe have pointed out, db_owner on the subscription db is a hard requirement for synchronizing a merge push subscription. However, we really wanted to allow our users to synchronize their own laptop without giving them full db_owner rights to the database.
Our solution was to enable mixed mode authentication on the subscribers and add a SQL Server user whose sole purpose was to enable our end users to synchronize their laptops. The SQL Server user, 'syncuser', was given the db_owner role on the local subscription database. Then, when we called replmerg.exe from within the program, we specified the following switches:
-SubscriberSecurityMode 0 -SubscriberLogin syncuser -SubscriberPassword 4w3$0m3_P4$$w0Rd
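For completeness, provisioning that account on a subscriber looks roughly like this (a sketch; the password and database name are placeholders):
-- run once on each subscriber
CREATE LOGIN syncuser WITH PASSWORD = '<StrongPasswordHere>';
GO
USE SubscriptionDB;
GO
CREATE USER syncuser FOR LOGIN syncuser;
EXEC sp_addrolemember 'db_owner', 'syncuser';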