Specify service_broker_guid instead of getting random one from NEW_BROKER - sql-server

In SQL 2008, is there a way to specify a service_broker_guid instead of simply taking whatever GUID is given to you by:
ALTER DATABASE MyDB SET NEW_BROKER
Our current (broken, in my opinion) release methodology is to restore two databases which have a codependent relationship. One is a "source" database, and one is a star-schema BI database. Part of the regression test plan is to restore backups of both databases in a "gold" state across different servers and even on the same server.
We generally do not include BROKER_INSTANCE variables on our routes because in most places we do not need them (i.e. the combination of SERVICE_NAME and ADDRESS is enough to guarantee delivery). However, when we have two databases running on the same instance, both with broker enabled, one of them will need a new broker GUID. In addition, all routes to these databases will now require a BROKER_INSTANCE qualifier, since there are two SERVICE_NAMEs on the same ADDRESS.
We use Visual Studio Database Professional to generate our build output scripts, and there is no simple way to include a BROKER_INSTANCE as part of its SQLCMD variable replacement technique unless you know the value beforehand.
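For context, a minimal sketch of the deployment-time step involved, assuming hypothetical database, service, and server names: after NEW_BROKER assigns a GUID, it can be read back from sys.databases and substituted into any route that needs a BROKER_INSTANCE qualifier.

-- Generate a new broker and read back the GUID that was assigned
ALTER DATABASE MyDB SET NEW_BROKER WITH ROLLBACK IMMEDIATE;
SELECT service_broker_guid FROM sys.databases WHERE name = 'MyDB';

-- Plug that GUID into the route that targets this database (names and address are hypothetical)
CREATE ROUTE RouteToMyDb
WITH SERVICE_NAME = 'MyTargetService',
     BROKER_INSTANCE = '00000000-0000-0000-0000-000000000000',  -- placeholder: use the value from sys.databases
     ADDRESS = 'TCP://myserver:4022';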

No. NEW_BROKER will always generate a new GUID.
There is no way to use a specific GUID, and that is very much intentional. If you explain the underlying problem that is making you ask this question, perhaps we can work toward a solution to that problem.
After your edit.
The broker_instance, as well as routing information, is considered runtime, deployment specific information. As such, it was not designed to accept fixed, predetermined values, which is what a VS GDR project or a set of SQLCMD scripts would like to use. Besides, the broker_instance_id is really meant to be a unique, database instance specific value, and allowing users to specify their own would quickly result in duplicates, which would confuse the heck out of conversation endpoints trying to exchange messages.
The problem you're facing, though, is a legitimate one: how to automate the deployment of routing information (and its associated problems of automating the deployment and exchange of certificates, configuring users without login and granting appropriate permissions, and configuring remote service bindings and Service Broker transport endpoints). There is simply no wizard to do this. Quest actually has a set of tools that handle this.
Once upon a time I made a tool, called ssbslm.exe, that automated this entire process and was designed to be usable from scripts. This tool did everything to set up the routes, certificates and endpoints between two arbitrary services. While this tool is no longer available (long and boring story why that is the case), the gist of this story is that it is not that hard to write one. It took me a few days back in the day.


Add a security layer to our SQL Servers (currently accessible from remote SQL Server Management Studio)

We have a big system running with thousands of users (some from android apps, other from the web app, etc.).
The system is distributed, with databases in two locations (within the same country). In one location there are 5 servers in the same network, and each one has a copy of the database (via replication).
Among the software developers, a few have direct access to the production databases. Sometimes, when users request support for operations that are not possible from the system itself, the developers/support team have to access the database directly and modify some records.
We know this is not the ideal way of working, but it has been like this for years.
Recently we have run into a few problems. One day, someone updated hundreds of records in a table by mistake.
Since then we are analyzing how to improve this access.
We are looking for some way of improving security. We would like to have a two-phase authentication system in place, something that asks the user for two passwords when accessing from SQL Server Management Studio...
Is that possible? Or is there any other approach we can use to improve the security but still allow devs/support team to access the production database when necessary?
Users also (currently) have access via Remote Desktop to all the servers.
At least we would like to KNOW when this access is being done.
Make access to PROD read-only for those users. Allow them to write their scripts and then submit them for review at a minimum, and testing if possible, like any other deployable. Then follow standard deployment processes with someone who has access.
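A minimal sketch of the read-only part, assuming the support team connects through a Windows group (database and group names are hypothetical):

USE ProdDB;
-- Grant read-only access and drop any write/owner membership the group may have
EXEC sp_addrolemember 'db_datareader', 'DOMAIN\SupportTeam';
EXEC sp_droprolemember 'db_datawriter', 'DOMAIN\SupportTeam';  -- only if the group is currently a member
EXEC sp_droprolemember 'db_owner', 'DOMAIN\SupportTeam';       -- only if the group is currently a member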
If my other answer isn't workable and these updates are always the same kind of fixes, you could maybe create support stored procs to do the fixes and grant permission only on the procs. But this is highly dependent on how common the fixes being made are, and it is less preferable than my other answer.
I haven't used it myself, but EXECUTE AS might let you give the users read-only permission while the procs execute under credentials with higher access.
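A hedged sketch of that pattern, with a hypothetical proc, table, and group name: support users get EXECUTE only, and the proc runs under its owner's permissions.

-- The proc performs one well-defined fix and runs as its owner, not as the caller
CREATE PROCEDURE dbo.FixOrderStatus
    @OrderId   INT,
    @NewStatus VARCHAR(20)
WITH EXECUTE AS OWNER
AS
BEGIN
    UPDATE dbo.Orders
    SET Status = @NewStatus
    WHERE OrderId = @OrderId;
END
GO
-- Support users can call the proc but cannot touch the table directly
GRANT EXECUTE ON dbo.FixOrderStatus TO [DOMAIN\SupportTeam];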

Rebuilding an unstable tool from scratch (Currently Access based - can go anywhere)

I have inherited a custom built tool that is poorly designed and unstable, and I have a great opportunity to rebuild it from scratch. This is an internal tool only that works almost entirely in Access, and its purpose is to provide higher detail on parts that cost the company over a certain dollar amount.
How it works:
1) The raw data (new part numbers) gets pulled nightly from the EDW via macros in Access.
2) The same macros then join two tables (part numbers from one, names from another). Any part under a certain dollar amount is removed, and the new data is appended to the existing Access database.
3) During the day employees can then open a custom Access form to add more details about the part. Different questions are asked depending on the part category.
4) The completed form is forwarded to management, and the information entered is retained in the Access database – it does not write back to the EDW.
5) Managers can also pull some basic reports from the database, based on overall costs.
The problems:
1) Currently everyone has to have Access installed on their work stations, and whenever there is an update the new database gets pushed to their stations. This is not considered an ideal situation by management or IT.
2) If anyone accidentally leaves the tool open at the end of the day, the database is locked, so the macros cannot run and the tool cannot be updated with new part numbers.
3) If the tool cannot update for a few days in a row, the database can become corrupted. We can restore from the last good backup, but in the past this has resulted in the loss of multiple days of work.
Ideally we want to take the tool completely out of Access. I am building a SharePoint site that can host the tool, which (if I can get it right) will eliminate the need for Access on end-user stations and the database push. However, the SharePoint form would need read/write access to the database.
The big question is: How do I build this?
I have a completely open path of possibilities – I can design it to work any way I want, using any tools or platform I want, as long as it works. It does not have to update automatically, as I already run a number of SQL scripts at the start of my day and adding one more is inconsequential.
The resources I have at my immediate disposal are: SharePoint (with designer), Access, Toad, and SQL Server. The database can be hosted on a shared network drive.
I am a recent college graduate with basic SQL knowledge. I have about a year to produce a final product, but would like to get it up and running far sooner if possible.
Any advice on what direction to pursue would be very helpful, thank you.
Caveat: I've never worked with SQL Server, so I don't know all of its capabilities (I'm an Oracle developer).
What I'd do in your situation is something like the following (although not necessarily in this exact order):
1) Get a SQL Server database set up to host your tables.
2) Create the tables, etc.
3) Migrate test data across (I'm assuming you have a dev/uat/test environment for your current system! If you haven't, make sure you set up at least a test environment separate from prod for your new db!)
4) Write stored procs to do the work of adding new parts, updating existing data, etc.
5) Set up an automated job on the db (I'm assuming SQL Server can do this!) to do the overnight processing.
6) Create a separate db user with the necessary permissions to call the stored procedures.
7) Get your frontend to call the stored procs with the relevant parameters, using the db user you created in step 6 to connect to the db.
You'd also have to think about transaction control to try to mitigate the case where users go home at the end of the day without committing their work - does the db handle the commits/rollbacks, or does SharePoint?
Once you've worked out everything in your test environment, it's then a case of creating the prod db, users and objects, and then working out the best way of migrating the prod data across.
Good luck.
Don't forget to set up backups for the new db as well.
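A rough sketch of steps 4, 6 and 7 above, under the assumption of a hypothetical parts table, proc, and login (none of these names come from the question):

-- Step 4: a stored proc the form calls to record extra detail about a part
CREATE PROCEDURE dbo.AddPartDetail
    @PartNumber VARCHAR(50),
    @Category   VARCHAR(50),
    @Detail     NVARCHAR(MAX)
AS
BEGIN
    INSERT INTO dbo.PartDetails (PartNumber, Category, Detail, EnteredOn)
    VALUES (@PartNumber, @Category, @Detail, GETDATE());
END
GO
-- Step 6: a dedicated login/user that can only execute the procs
-- (assumes SQL authentication is enabled; a Windows login would also work)
CREATE LOGIN PartsToolApp WITH PASSWORD = 'replace-with-a-strong-password';
CREATE USER PartsToolApp FOR LOGIN PartsToolApp;
GRANT EXECUTE ON dbo.AddPartDetail TO PartsToolApp;
-- Step 7: the SharePoint frontend connects as PartsToolApp and calls dbo.AddPartDetail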

SQL Server - robust protection of client data (multi-tenancy)

We are considering using a single SQL Server database to store data for multiple clients. We feel having all the data in one database could make things more manageable than a "separate db per client" setup.
The biggest concern we have is accidental access to the wrong client's data. It would be very, very bad if we were to ever accidentally show one client's data to another client. We perform lots of queries, and are afraid of a scenario where someone says "write me a query of this and this to show the client at the meeting in 15 minutes." If someone is careless and omits the WHERE clause that filters for the correct client, we would be in serious trouble. Is there a robust setup or design pattern for SQL Server that makes it impossible (or at least very difficult) to accidentally pull the wrong client's data from a single "global" database?
To be clear, this is NOT a database that the clients use directly or via apps (yet). We are talking about a database accessed by several of our programmers and we are afraid of screwing up ourselves.
At the very minimum, you should put the client data in separate schemas. In SQL Server, schemas are the unit of authorization. Only people authorized for a given client should be able to see that client's data. In addition to other protections, you should be using the built-in authorization capabilities of the database.
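A minimal sketch of that schema-based approach, with hypothetical client, table, and user names:

-- One schema per client; access is granted only to that client's principals
CREATE SCHEMA ClientAcme AUTHORIZATION dbo;
GO
CREATE TABLE ClientAcme.Orders (OrderId INT PRIMARY KEY, Amount MONEY);
GO
CREATE USER AcmeAnalyst WITHOUT LOGIN;  -- in practice, FOR LOGIN <some login>
GRANT SELECT ON SCHEMA::ClientAcme TO AcmeAnalyst;
-- AcmeAnalyst can query ClientAcme.* but gets a permission error on any other client's schema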
Right now, it sounds like you are in a situation where a very small group of people are the ones accessing all the data for everyone. Well, if you are successful, then you will probably need more people in the future. In fact, you might be giving some clients direct access to the data. If it is their data, they will want apps running on it.
My best advice, if you are planning on growing, is to place each client's data in a separate database. I would architect the system so this database can be on a remote server. If it needs to synchronize with common data, then develop a replication strategy for moving that data around.
You may think it is bad to have one client see another client's data. From the business perspective, this is deadly -- like "company goes out of business, no job" deadly. Your clients are probably more concerned about such confidentiality than you are. And, an architecture that ensures protection will make them more comfortable.
Multi-Tenant Data Architecture
http://msdn.microsoft.com/en-us/library/aa479086.aspx
Here's what we do (MySQL, unfortunately):
A "tenant" column in each table.
Tables are in one schema. [1]
Views are in another schema (for easier security and naming). A view must not include the tenant column; each view does a WHERE on the tenant based on the current user.
The tenant value is set by a trigger on insert, based on the user.
Assuming that all your DDL is in .sql files under source control (which it should be), then having many databases or schemas is not so tough.
[1] a schema in mysql is called a 'database'
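Translated to SQL Server, the same pattern might look roughly like the sketch below; the schema, table, and the use of SUSER_SNAME() to identify the tenant are assumptions for illustration, not part of the answer above.

-- Assumes schemas named data and app already exist; all names are hypothetical
CREATE TABLE data.Orders
(
    OrderId INT PRIMARY KEY,
    Tenant  SYSNAME NOT NULL,
    Amount  MONEY
);
GO
-- The filtered view lives in a separate schema and does not expose the tenant column
CREATE VIEW app.Orders
AS
SELECT OrderId, Amount
FROM data.Orders
WHERE Tenant = SUSER_SNAME();   -- the current login determines which rows are visible
GO
-- The tenant is stamped automatically on insert through the view
CREATE TRIGGER app.Orders_SetTenant ON app.Orders
INSTEAD OF INSERT
AS
INSERT INTO data.Orders (OrderId, Tenant, Amount)
SELECT OrderId, SUSER_SNAME(), Amount
FROM inserted;
GO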
You could set up one inline table-valued function for each table that takes a required parameter @customerID and filters that particular table to the data of that customer. If the entire app were to use only these TVFs, the app would be safe by construction.
There might be some performance implications; the exact numbers depend on the schema and queries. They can be zero, however, as inline TVFs are inlined and optimized together with the rest of the query.
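A minimal sketch of one such function (the table and column names are hypothetical):

-- One inline TVF per table; the customer id parameter cannot be omitted
CREATE FUNCTION dbo.fn_Orders (@customerID INT)
RETURNS TABLE
AS
RETURN
(
    SELECT OrderId, OrderDate, Amount
    FROM dbo.Orders
    WHERE CustomerId = @customerID
);
GO
-- Usage: the filter is enforced by construction
SELECT OrderId, Amount FROM dbo.fn_Orders(42);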
You can limit access so that the data is only reachable via stored procedures with an obligatory customer id parameter.
If you let your IT staff build views, sooner or later someone will forget that WHERE clause, as you said.
But a schema per client with pre-filtered views would enable self-service and bring extra value, I guess.

How to protect a database from the Server Administrator in SQL Server

We have a requirement from a client to protect the database our application uses, even from their local administrators (the auditors just gave them that requirement).
In their requirement, protecting the data means that the SQL Server admin can neither read nor modify sensitive data stored in tables.
We could do that with encryption in SQL Server 2005, but that would interfere with our third-party ORM, and it has other cons, such as indexing.
In SQL Server 2008 we could use TDE, but I understand that this solution doesn't prevent a user with SQL Server admin rights from querying the database.
Is there any best practice or known solution to this problem?
This problem could be similar to that of having an application hosted by a hosting provider, where you want to protect the data from the host's admins.
We can use SQL Server 2005 or 2008.
This has been asked a lot in the last few weeks. The answers usually boil down to:
a) If you don't control the application, you are doomed to trust the DBA, or
b) If you do control the application, you can encrypt everything with a key known only to the application and decrypt on the way out. It'll hurt performance a bit (or a lot), though; that's why TDE exists. A variant of this, to prevent tampering, is to store a cryptographic hash of the values in the column and check it upon application access.
and, in either case,
c) Do extensive auditing, so you can control what your admins are doing.
I might have salary information in my tables, and I don't want my trusted DBAs to see it.
Faced with the same problem, we have narrowed our options to:
1- Encrypt outside SQL Server, before inserts and updates, and decrypt after selects, i.e. using .NET encryption.
Downside: you lose some indexing and searching capabilities, and cannot use LIKE or BETWEEN.
2- Use third-party tools (at the I/O level) that block CRUD against the database unless a password is provided, e.g. www.Blockkk.com
Downside: you will need to trust a third-party tool installed on your server. It might not keep up with SQL Server patches, etc.
3- Use an auditing solution that keeps track of selects, inserts, deletes, etc., and notifies you (by email or event log) when violations occur. A sample violation could be a DBA running a SELECT on your Salaries table; then fire the DBA and change everyone's salary.
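For option 3, SQL Server 2008's built-in auditing can cover the basic case; a minimal sketch follows, with a hypothetical file path and table name. Note that database audit specifications are, I believe, Enterprise-only in 2008, and a sysadmin can still disable the audit, so treat it as a detective control rather than a preventive one.

-- Run in master: define where the audit trail goes (path is hypothetical)
CREATE SERVER AUDIT SensitiveDataAudit
TO FILE (FILEPATH = 'D:\Audit\');
ALTER SERVER AUDIT SensitiveDataAudit WITH (STATE = ON);
GO
USE MyDB;
GO
-- Record every SELECT against the Salaries table, by anyone
CREATE DATABASE AUDIT SPECIFICATION SalariesAccess
FOR SERVER AUDIT SensitiveDataAudit
ADD (SELECT ON OBJECT::dbo.Salaries BY public)
WITH (STATE = ON);
GO
-- Read the audit trail
SELECT event_time, server_principal_name, statement
FROM sys.fn_get_audit_file('D:\Audit\*', DEFAULT, DEFAULT);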
Auditors always ask for this, like they ask for other things that can never be done.
What you need to do is put it into risk-mitigation terms and show what controls you do have (tracking when users are elevated to administrators, what they did and that they were de-elevated afterwards) instead of in absolutes.
I once had a boss ask for total system redundancy without defining what he meant or how much he was willing to pay and sacrifice.
I think the right solution would be to only allow trusted people be DBA's.
It is implicit in being a DBA that you have full access, so in my opinion your auditor should demand that you have procedures for restricting who has DBA access.
That way you work with the system through processes instead of working against the system (i.e. SQL Server).
Having a person you don't trust as DBA would be nuts...
If you don't want any people in the admin group on the server to be able to access the database, then remove the "BUILTIN\Administrators" user on the server.
However, make sure you have another user that is a sysadmin on the server!
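A minimal sketch of that removal (SQL Server 2005 created this login by default; 2008 setup no longer does). As the answer says, confirm another sysadmin login exists first.

-- Run as a sysadmin, only after confirming another sysadmin login exists
IF EXISTS (SELECT 1 FROM sys.server_principals WHERE name = 'BUILTIN\Administrators')
    DROP LOGIN [BUILTIN\Administrators];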
Another approach I have heard of a company implementing, though I haven't seen it myself:
There is a government body that issues a kind of timestamped certificate.
Each db change is sent to an async queue, is timestamped with this certificate, and is stored off site. This way no one can delete anything without breaking the timestamp chain.
I don't know exactly how this works at a deeper level.

Step-by-step instructions for updating an (SQL Server) database?

Just a question about best-practices when upgrading an existing database. Assuming there will be all kinds of modifications to the data itself, the structure, the relations, additional columns, disappearing columns and whatever more.
My problem is a simple one. I'm working on a project that will use SQL Server. No problem there, since I'm enough of an expert to handle this. But this project will be upgraded later on and I need to specify a protocol that needs to be followed by the upgrade mechanism. Basically, this protocol needs to be followed when creating upgrade scripts...
Right now, I have these simple steps:
Add the new columns to the tables.
Add constraints to the new columns.
Add new tables.
Drop constraints where needed.
Drop columns that need to be removed.
Drop tables that need to be removed.
Somehow, this list feels incomplete. Is there a more extended list somewhere describing the proper steps that need to be followed during an upgrade?
Also, is it always possible to do a complete upgrade within a single database transaction (with SQL Server), or are there breakpoints that need to be included within the protocol where one transaction should end and another one starts?
While automated tools would provide a nice solution, I still can't really use them. The development team working on this system has 4 developers, each with their own database on their local system. Every developer keeps track of their own updates to the structure by generating both an upgrade and a downgrade script for their own modifications, both for structural changes and data changes. These scripts can then be used by the other developers to keep their own systems up to date. Whenever the system is going to be released, those scripts are all merged into one big script.
The system does not include any stored procedures or other "special" features. The database is just that: data storage with just tables and relations between them. No roles, no users, no stored procedures, no triggers, no complex datatypes... The DB is used by an application whose users work from 9 to 5, so shutting down can be done easily, including upgrades for the clients. We also add a version number to the database, and applications check whether they're linked to the correct database version.
During development, all developers use their own database instance, which they can fully control. Since we're not the ones who use the application, we tend to develop for the Express edition, not a more expensive one. To be honest, we don't develop our application to support a lot of users, but we will inform our users that, since it uses SQL Server, they can install the system on a bigger SQL Server platform according to their own needs. They will need their own DBA for this, though. We do have a bigger SQL Server available for ourselves, which we also use for our own web interface, but that server is located in an external data center where it is maintained for us, not by us.
The project previously used MS Access for its data storage and was intended for single-user use, but as it turned out, many users still decided to share their databases, and this showed that the data model itself is reliable enough for multi-user environments. So we migrated to SQL Server to support smaller offices with 3 or more users and some big organisations that will have 500 or more users at the same time.
Since we need to keep the cost of the software low, we don't have a big budget to spend on expensive tools or a more expensive server.
Check out Red-Gate's SQL Compare (structure comparison), SQL Data Compare (data comparison), and SQL Packager (for packaging up updates scripts into a C# project or a .NET executable).
They provide a nice, clean, fully functional and easy-to-use solution for all your database upgrade needs. They're well worth their license fees - they pay for themselves in a few weeks or months.
Highly recommended!
In my opinion, it's an absolute bear doing these manually. For Microsoft SQL Server, I'd recommend using the Database edition of Team System, since it includes complete source control capabilities for your database and can automatically build your scripts for upgrading/downgrading versions.
Another option is SQL Compare from Redgate, which can also handle these kinds of upgrades/downgrades and will result in a very nice SQL script. I've used both, and keeping the historic scripts has helped us troubleshoot issues and resolve many a mystery.
If you are working with a manual script as above, don't forget to also account for stored procedure changes in your scripts. Also, any hand-edited script should be able to be executed multiple times on a database - i.e. if your script includes a table creation or drop, be sure to check for existence first, otherwise your script will fail if executed back to back.
Again, while it's possible to build a manual protocol I'd still fall back on using one of the purpose-built tools out there, and both Team System and SQL Compare will be able to output scripts that you could include as part of an installation/upgrade package.
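A small sketch of that kind of rerunnable (idempotent) check, with hypothetical table and column names:

-- Safe to run more than once: create the table only if it is missing
IF OBJECT_ID('dbo.Customer', 'U') IS NULL
    CREATE TABLE dbo.Customer (CustomerId INT PRIMARY KEY, Name NVARCHAR(100));
GO
-- Same idea for adding a column
IF COL_LENGTH('dbo.Customer', 'Email') IS NULL
    ALTER TABLE dbo.Customer ADD Email NVARCHAR(256) NULL;
GO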
With database updates I always believe it should be all or nothing. If any of the DB updates fail, your application will be left in an unknown state that could be harmful to the data, so I think it is best practice to either apply them all or apply none (one transaction around them all).
I also like to back up the database before applying updates, so that if anything does go wrong the database can be rolled back (this has saved me numerous times when working with live data).
Hope this helps.
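A minimal sketch of that wrapper, with hypothetical database and file names; note that a handful of operations cannot run inside a user transaction, so the merged script still needs to be tested end to end.

-- Take a backup first so the whole upgrade can be undone if needed
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_PreUpgrade.bak';
GO
BEGIN TRY
    BEGIN TRANSACTION;

    -- ... schema and data changes go here ...

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    RAISERROR('Upgrade failed; all changes were rolled back.', 16, 1);
END CATCH;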
Best practices for upgrading a production database schema actually look pretty bad on the surface. Unless you can completely shut down your system for the upgrade, which is often not possible, your changes all need to be backwards compatible. If you have many clients accessing the database, you can't update them all simultaneously, so any schema changes you make need to allow old code to run.
That means never renaming a column, and making all new columns nullable. This doesn't mean you leave it like that forever. You write two scripts, one for the initial change, which is backwards compatible, then another to clean things up after all clients have been updated.
Automated tools are great for validation of schemas, but they are not so good when it comes to actually modifying a complex system. You should break your changes up into many small, discrete change scripts so each can be run manually. If there's a failure, it's easier to pinpoint the cause and fix it. Basically, each feature gets its own script. Give each a unique name and then store that name in the database itself when you run the script, so you can query the database to find out what has been run and what hasn't. This is invaluable when you have instances on developers' machines, test servers, production, etc.
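A hedged sketch of that change-tracking idea (the log table and script names are hypothetical):

-- A simple log of applied change scripts
IF OBJECT_ID('dbo.SchemaChangeLog', 'U') IS NULL
    CREATE TABLE dbo.SchemaChangeLog
    (
        ScriptName NVARCHAR(200) NOT NULL PRIMARY KEY,
        AppliedOn  DATETIME      NOT NULL DEFAULT GETDATE()
    );
GO
-- Each change script guards itself with its own unique name
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaChangeLog WHERE ScriptName = 'AddCustomerEmail')
BEGIN
    ALTER TABLE dbo.Customer ADD Email NVARCHAR(256) NULL;  -- the actual change
    INSERT INTO dbo.SchemaChangeLog (ScriptName) VALUES ('AddCustomerEmail');
END
GO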
