SQL Server: Modifying the "Application Name" property for auditing purposes

Because we do not implement our applications' users as users in SQL Server, the application server always connects to each database with the same credentials, no matter which application user is acting.
This presents an auditing problem. Using triggers, we want to store every update, insert, and delete and attribute each to a particular user. One possible solution is to add an "updated by user" column to every table and set it on every write, but that means a new column on every table and a new parameter on every stored procedure. It also means you can only do soft deletes.
Instead of this I propose setting the Application Name property of the connection string and reading it with the APP_NAME() function inside the trigger. I tested this with a simple app and it seems to work (the format could be something like App=MyApp|User=100).
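Roughly, the trigger side of what I tested looks like this (dbo.Orders and dbo.AuditLog stand in for the real tables):

    -- Sketch: parse the application user id out of APP_NAME().
    -- Assumes a connection string like:
    --   Server=...;Database=...;Application Name=App=MyApp|User=100
    CREATE TRIGGER trg_Orders_Audit ON dbo.Orders
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        DECLARE @app nvarchar(128);
        DECLARE @userId nvarchar(20);
        SET @app = APP_NAME();  -- e.g. 'App=MyApp|User=100'
        -- Take everything after 'User=' as the application user id.
        SET @userId = SUBSTRING(@app, CHARINDEX('User=', @app) + 5, 20);
        INSERT INTO dbo.AuditLog (TableName, ChangedBy, ChangedAt)
        VALUES ('dbo.Orders', @userId, GETDATE());
    END;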
The question for you guys is, is this a bad idea and do you have a better one?

I use SET CONTEXT_INFO for this. It's just what you need.
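In outline, and purely as a sketch (the 'User=100' payload is just an example value):

    -- The application stamps the session once, right after connecting:
    DECLARE @user varbinary(128);
    SET @user = CAST('User=100' AS varbinary(128));
    SET CONTEXT_INFO @user;

    -- Any trigger running on the same session can then read it back;
    -- trailing zero bytes may need trimming:
    SELECT REPLACE(CAST(CONTEXT_INFO() AS varchar(128)), CHAR(0), '');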

It certainly seems like a feasible solution, although you'll need to inject the username into the connection string each time your application loads. Note that this probably wouldn't work well with a web application, as your connection string would be different for each user, which could lead to huge connection pooling issues.
Another option is to retrieve the hostname/IP address (SELECT HOST_NAME()) and store that instead.
You wouldn't necessarily need a new parameter on each stored procedure, as you can modify each stored procedure (or the trigger) to automatically insert the App_Name/Hostname.
A potential drawback is that any modifications performed via Management Studio won't have the custom Application Name, and you'll be left with "Microsoft SQL Server Management Studio" as the user.

We use the Application Name property to control auditing triggers and have not seen any problems using it, nor noticed any speed issues (though in our case we're specifically not auditing for certain applications, so it's hard to measure how much time not doing something takes :))

Related

Azure SQL: How to be notified if someone exports the database?

I run a system based around an Azure SQL Database.
A few different team members need to have read access to this database to perform support and management tasks.
However, I am concerned that by having access to the database, one of them may - with the best of intentions - export the database and manage the backup carelessly, resulting in a data breach.
How can I get Azure to notify me if somebody backs up the database (or downloads more than X million rows, maybe)? These people need to have database access; I would just like to know if they use it in a way that could cause a security risk for the platform.
You can use Extended Events for this.
To set it up on Azure you can follow this tutorial.
For your case:
- You create a session.
- You select the rpc_completed event (docs) and click Configure.
- In the Global Fields tab you select the fields you want to keep track of, e.g. username, sql_text, session_id, database_name, client_*.
- In the Filter tab you select a filter condition; in your case row_count would be appropriate.
A smart user could still go undetected by retrieving small numbers of rows and paging through them, so a second filter could be queries without a WHERE clause, or a different approach suited to your case.
When the extended events session is set up to write to blob storage, you would have a separate process (an Azure Function, a Runbook, ...) that inspects the results and alerts you.
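Put together, such a session could look roughly like the sketch below. The storage URL and the 100000-row threshold are placeholders, and on Azure SQL Database the event_file target also needs a database-scoped credential for the storage container, which the tutorial walks through:

    CREATE EVENT SESSION large_reads ON DATABASE
    ADD EVENT sqlserver.rpc_completed (
        ACTION (sqlserver.username, sqlserver.sql_text,
                sqlserver.session_id, sqlserver.database_name,
                sqlserver.client_app_name, sqlserver.client_hostname)
        WHERE (row_count >= 100000)   -- placeholder threshold
    )
    ADD TARGET package0.event_file (
        SET filename = 'https://myaccount.blob.core.windows.net/xevents/large_reads.xel');

    ALTER EVENT SESSION large_reads ON DATABASE STATE = START;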
Extended Events are mostly used for troubleshooting; they replace SQL Profiler. So turning a session on against a production server may have a performance impact.

How to add a column to a table from within LightSwitch

I have a SQL Server database, and it is a requirement for my LightSwitch app that the administrator be able to add new columns to certain tables. Is that even possible? The only way I could think to do it is to write an "ALTER" stored procedure in the database and call it from LightSwitch, but that seems a little messy. Any ideas?
Although you'll be able to find a way to physically add a new column to a table after an application has been published, LightSwitch is not going to like it. You may even find that the application refuses to run.
For an attached database, the model that LightSwitch creates for it can only be updated by running the Update Data Source command, which can only be done by the developer at design-time. And if the database is the intrinsic database, it too can only be changed at design-time.
So the short answer to "Is that even possible?" is "no".
An ALTER stored procedure would probably be the best way to achieve what you are talking about, but I wouldn't recommend it.
How are you then going to store and retrieve data from these columns? What happens when you start to get column name collisions between tables?
It might be better if you give us a higher level description of what you are trying to achieve, but taking a guess I would suggest you look at the entity-attribute-value pattern for storing arbitrary user data.
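In case it helps, here is one common shape of that pattern; all table and column names below are illustrative only:

    -- Attribute definitions: "adding a column" becomes inserting a row here,
    -- so the LightSwitch model never has to change.
    CREATE TABLE dbo.CustomAttribute (
        AttributeID    int IDENTITY PRIMARY KEY,
        AttributeName  nvarchar(100) NOT NULL UNIQUE
    );

    -- One value per entity per attribute.
    CREATE TABLE dbo.CustomAttributeValue (
        EntityID       int NOT NULL,  -- the row being extended
        AttributeID    int NOT NULL REFERENCES dbo.CustomAttribute,
        AttributeValue nvarchar(max) NULL,
        PRIMARY KEY (EntityID, AttributeID)
    );

    -- The administrator "adds a column":
    INSERT INTO dbo.CustomAttribute (AttributeName) VALUES (N'FavoriteColor');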

SQL Server 2008 - DB monitoring requirement, what is the best way to do it?

I have a monitoring requirement where I need to check table data every hour, and if the count is a certain amount higher than the previous hour's, I need to notify certain users. I also need to create some kind of user interface so that users can track counts by day/week too.
I am a developer and don't know much about databases beyond basic use such as writing stored procedures and creating tables. I searched the internet and found many options, but I don't know which is the best one to pursue and whether it will fulfill my purpose.
The first option is to use SQL Server Reporting Services Enterprise edition. I am not sure whether it can compare data against the previous hour, apply logic, and send email notifications.
The second is to write a trigger on the table, with the comparison logic in a stored procedure if that's possible, and have the trigger send the email. But then I would still need to build some kind of interface at the application level.
Can someone please advise whether there is another, better way to do this, or which of these two approaches is best?
Try and read up on the Data Collector:
http://msdn.microsoft.com/en-us/library/bb677248%28v=sql.105%29.aspx
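If the Data Collector turns out to be heavier than you need, the hourly comparison itself is simple to hand-roll with a SQL Agent job. A rough sketch, where the snapshot table, the threshold, the monitored table, and the Database Mail profile are all assumptions:

    -- Hypothetical snapshot table, written to once per hour by the job.
    CREATE TABLE dbo.RowCountSnapshot (
        SnapshotAt  datetime NOT NULL DEFAULT GETDATE(),
        RowsCounted int      NOT NULL
    );

    -- Job step: record the current count and compare it to the previous hour.
    DECLARE @now int, @prev int;
    SELECT @now = COUNT(*) FROM dbo.MonitoredTable;     -- your table here
    SELECT TOP 1 @prev = RowsCounted
    FROM dbo.RowCountSnapshot ORDER BY SnapshotAt DESC;

    INSERT INTO dbo.RowCountSnapshot (RowsCounted) VALUES (@now);

    IF @now - ISNULL(@prev, @now) > 1000                -- placeholder threshold
        EXEC msdb.dbo.sp_send_dbmail                    -- needs Database Mail configured
             @profile_name = 'Monitoring',
             @recipients   = 'ops@example.com',
             @subject      = 'Row count jumped in MonitoredTable';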

Tools to update tables in SQL Server 2000/2005

Is there any handy tool that can make updating tables easier? Usually I get an Excel file with the original value in one column and the new value in another. Then I write a formula in Excel to generate the UPDATE statements. Is there any way to simplify this updating task?
I believe the approach in SQL Server 2000 and 2005 would be different, so could we discuss them both? Thanks.
In addition, these updates are usually requested by "non-programmers" (meaning they don't understand SQL, so it may not be feasible to let them write queries). Is there any tool that would let them update the table directly without having DBAs do this task? That tool would also need to limit their privileges to modifying only certain tables, and ideally provide a way to roll back changes.
Create a DTS package that imports a CSV file, makes the updates, and then archives the file. The user can drop the file in a specific folder designated for the task, or this can be done by an ops person. Schedule the DTS package to run every hour, day, etc.
In case your users insist on keeping Excel, you've got several different possibilities for getting the data transferred to SQL Server. My preferred one would be DTS/SSIS, as mentioned by buckbova.
However, another method is by using OPENROWSET(), which makes it possible to query your Excel file as if it was a table. I wrote a small article about it here: http://blog.hoegaerden.be/2010/03/29/retrieving-data-from-excel/
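For example, something along these lines; the file path, sheet name, and column names are placeholders, and on 2005 the 'Ad Hoc Distributed Queries' option must be enabled via sp_configure first:

    -- Sketch: update a table directly from an Excel sheet via OPENROWSET,
    -- using the Jet OLE DB provider (works on SQL Server 2000 and 2005).
    UPDATE t
    SET    t.SomeValue = x.NewValue
    FROM   dbo.MyTable AS t
    JOIN   OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                      'Excel 8.0;Database=C:\updates\values.xls;HDR=YES',
                      'SELECT * FROM [Sheet1$]') AS x
           ON t.KeyColumn = x.KeyColumn;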
Another approach that hasn't been mentioned yet (I'm not a big fan of letting regular users edit data directly in the DB): any possibility of creating a small custom application for them?
There you go, a couple more possible solutions :-)
Valentino.
I think the best approach is to expose a view on your data, accessible to the users who are allowed to do updates, and set up triggers on the view to perform the actual updates on the underlying data. Restrict changes to only the columns they should be changing.
This technique can work on SQL Server 2000 and 2005.
I would add audit triggers on the underlying tables so you can always track changes.
You'll have complete control, and they can connect to it with Access or whatever and perform their maintenance.
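A rough sketch of the idea, with illustrative names only:

    -- Expose only the editable columns through a view.
    CREATE VIEW dbo.CustomerContact
    AS
    SELECT CustomerID, Phone, Email
    FROM   dbo.Customer;
    GO

    -- An INSTEAD OF trigger applies the change to the base table,
    -- so users never touch dbo.Customer directly.
    CREATE TRIGGER trg_CustomerContact_Upd
    ON dbo.CustomerContact
    INSTEAD OF UPDATE
    AS
    BEGIN
        UPDATE c
        SET    c.Phone = i.Phone,
               c.Email = i.Email
        FROM   dbo.Customer AS c
        JOIN   inserted AS i ON i.CustomerID = c.CustomerID;
    END;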
You could create accounts in SQL Server for these users and limit their access to only certain tables and columns, with only SELECT / UPDATE / INSERT privileges. Then you could create an Access database with linked tables to these.

Preparing to move to a single database

We have an application that has 1000+ databases and 600+ sprocs. Each database represents a different client.
Problem: We need to move this to a single database while affecting the UI as little as possible, meaning we don't want to change all the sproc signatures at one time.
The connection string currently sets the database attribute; a proposal is to move that to the user attribute instead. That attribute (read via SYSTEM_USER) could be used to determine the site identifier, which would then be used in the WHERE clause.
The above would not be the final solution, but it would allow us to change the sproc signatures at a slow, controlled pace. Once all are done, we can correct the connection string and get some connection pooling back.
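As a sketch, a converted proc could look roughly like this; the mapping table dbo.SiteLogin is hypothetical:

    -- Derive the site from the login instead of taking it as a parameter.
    CREATE PROCEDURE dbo.GetOrders
    AS
    BEGIN
        DECLARE @siteId int;
        SELECT @siteId = SiteID
        FROM   dbo.SiteLogin          -- hypothetical login-to-site mapping
        WHERE  LoginName = SYSTEM_USER;

        SELECT *
        FROM   dbo.Orders
        WHERE  SiteID = @siteId;      -- the new filter on every query
    END;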
Are there any limitations on the number of logins/users that we can have on SQL Server 2005/2008? Or has anyone been down this path who could shed some light on a better option?
See my answer here
Ideas for Combining Thousand Databases into One Database
Sounds like you two are working on the same project. You will need to change every proc before you can move to one database, or each client will see the others' data.
As for the number of logins on SQL Server 2005 / 08 - I don't think anyone has ever run into a hard limit here. A few thousand will NOT be any problem at all.
What you could consider for this scenario might be one schema inside your single DB per customer, e.g. customer "Miller" has a "miller" schema, with its objects inside, and customer "Brown" will have a "brown" schema.
And contrary to what HLGEM just responded - no, customers won't see each other's data if you specify proper permissions: restrict each customer (and its users) to its own schema only, and it should work just fine.
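Roughly, with invented names:

    -- One schema per customer, each login fenced into its own schema.
    CREATE SCHEMA miller;
    GO
    CREATE USER miller_user FOR LOGIN miller_login
        WITH DEFAULT_SCHEMA = miller;
    GO
    -- Permissions only on that schema, so "miller" never sees "brown".
    GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::miller TO miller_user;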
Marc
You might also consider setting a distinctive application name in the connection string rather than using a distinctive user, which you can get into your WHERE clause using APP_NAME(). I'm sure that SQL Server won't have a problem with thousands of logins, but you may prefer not to have to create them.
