Common functions / stored procedures for all databases - SQL Server

We have a database server and it has about 10 databases.
I would like to create some functions / stored procedures which can be used in all databases.
For example, we can use sp_executesql in any database.
We have several requirements like that (getting the current academic year, financial year, etc.).
Is it doable?

As others have suggested, you could put objects into the master database, but Microsoft explicitly recommends that you should not do that. I find that solution to be rather risky anyway, because the master database is 'owned' by the system, not by you, so there are no guarantees that it will continue to behave in the same way in the future.
Instead, I would consider this to be primarily a deployment issue. There are (at least) two strategies you could use:
Deploy the objects to every database
Deploy them to one 'reference' database that is only used for shared objects and create synonyms in the other databases
The second option is perhaps the better one, because if your functions use tables (e.g. you use a calendar table to get the academic year, which is much easier than calculating it) then you would have to create the same tables in every database too. By using synonyms, you only have to maintain one set of tables.
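For example, a minimal sketch of the synonym approach, assuming the shared function lives in a reference database called RefDB (both RefDB and fn_GetAcademicYear are placeholder names):
-- run in each application database; RefDB and dbo.fn_GetAcademicYear are assumed names
CREATE SYNONYM dbo.fn_GetAcademicYear FOR RefDB.dbo.fn_GetAcademicYear;
-- callers in this database can now write dbo.fn_GetAcademicYear(...) as if the function were local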
For the actual deployment, it's straightforward to use scripting to manage the objects, because you just need a list of databases to connect to and run each DDL script against. You can do that using batch files and SQLCMD (perhaps with SQLCMD variables in your .sql scripts), or drive it from PowerShell or any other language that you prefer.
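As a rough sketch, a SQLCMD-enabled deployment script could look like the following; the function name and the academic-year rule are purely illustrative, and CREATE OR ALTER assumes SQL Server 2016 SP1 or later:
-- deploy.sql, run once per database, e.g.: sqlcmd -S MyServer -i deploy.sql -v DatabaseName="Db1"
USE [$(DatabaseName)];
GO
CREATE OR ALTER FUNCTION dbo.fn_GetAcademicYear (@AsOf date)
RETURNS int
AS
BEGIN
    -- dbo.fn_GetAcademicYear and the 1 September start date are illustrative assumptions
    RETURN CASE WHEN MONTH(@AsOf) >= 9 THEN YEAR(@AsOf) ELSE YEAR(@AsOf) - 1 END;
END;
GO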

Depending on what the SP actually does, you can create the procedure in master, name it with the sp_ prefix, and mark it as a system procedure:
http://weblogs.sqlteam.com/mladenp/archive/2007/01/18/58287.aspx
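A hedged sketch of that approach (the procedure name and the financial-year rule are made up for illustration; sp_MS_marksystemobject is undocumented):
USE master;
GO
-- sp_GetFinancialYear and the 1 April rule are purely illustrative assumptions
CREATE PROCEDURE dbo.sp_GetFinancialYear @AsOf date, @Year int OUTPUT
AS
    SET @Year = CASE WHEN MONTH(@AsOf) >= 4 THEN YEAR(@AsOf) ELSE YEAR(@AsOf) - 1 END;
GO
EXEC sp_MS_marksystemobject N'sp_GetFinancialYear';
Once marked, the procedure can be called from any database and runs in the context of the database it is called from.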

A couple of options:
You can use a system stored procedure as Cade says. I've done this in the past and it works OK. One warning is that the sp_MS_marksystemobject procedure is undocumented, which means it could vanish or change without warning in future SQL Server versions. Thinking back, I believe there were other problems using this approach with functions, though.
Another approach is to use standardized procedures and functions and roll them out across your databases using sp_MSforeachdb to run code against every database. If you need to run against only your 10 databases, you can copy the code in this procedure and modify it to check that a database matches your schema before running the code (or you can write your own version that does a similar thing).
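For instance, a rough sketch of the sp_MSforeachdb route (an undocumented procedure); the check against a marker table, here dbo.AcademicCalendar, is just an assumed way of recognising your own databases:
-- dbo.AcademicCalendar is an assumed marker table; replace the comment with your real DDL script
EXEC sp_MSforeachdb N'
IF EXISTS (SELECT 1 FROM [?].sys.tables WHERE name = N''AcademicCalendar'')
    EXEC [?].sys.sp_executesql N''/* CREATE OR ALTER your shared functions and procedures here */'';
';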

Related

Python - extracting a SQL Server database schema to a file

Often I need to extract the complete schema of an existing SQL Server DB to a file. I need to cover every object -- tables, views, functions, SPs, UDDTs, triggers, etc. The purpose is so that I can then use a file-diff utility to compare that schema to a baseline reference.
Normally I use Enterprise Manager or Management Studio to script out the DB objects and then concatenate those files to make one big file in a consistent predictable order. I was wondering whether there's a way to accomplish this task in Python? Obviously it'd take an additional package, but having looked at a few (pyodbc, SQLAlchemy, SQLObject), none of them seem really suited to this use case.
If you can connect to SQL Server and run queries from Python then yes, it's possible, but it will take a lot of effort and testing to get it to work correctly.
The idea is to use the system tables to get details about each object and then generate DDL statements from them. Some, if not all, DDL statements already exist in the sys.syscomments table.
Start off by executing and examining this in SSMS before you start working in Python.
SELECT * FROM sys.tables;
SELECT * FROM sys.all_columns;
SELECT * FROM sys.views;
SELECT * FROM sys.syscomments;
The full system tables documentation is available on MSDN.
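As a starting point, a hedged sketch of the kind of query your Python code could run to pull object definitions (sys.sql_modules is the modern equivalent of sys.syscomments for procedures, views and functions):
SELECT s.name AS schema_name,
       o.name AS object_name,
       o.type_desc,
       m.definition
FROM sys.objects AS o
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
JOIN sys.sql_modules AS m ON m.object_id = o.object_id
ORDER BY o.type_desc, s.name, o.name;
-- tables have no entry in sys.sql_modules; their DDL has to be reassembled from sys.tables / sys.all_columns
Sorting the output keeps it in a predictable order, which is what makes the file diff stable.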
I've used this PowerShell strategy in the past. Obviously, that isn't Python, but it is a script you can write then execute from within Python. Give this article a read as it may be your easiest (and cheapest) solution: http://blogs.technet.com/b/heyscriptingguy/archive/2010/11/04/use-powershell-to-script-sql-database-objects.aspx
As a disclaimer, I was only exporting stored procedures, not every single object.

Database cleanup

I inherited a SQL Server database that is not well designed (a consulting company came in to do the project and left without completing it).
The main issues I have with this database are:
Data types: a lot of tinyint and text types.
Tables are not normalized: some of the keys are names instead of sequential IDs.
A lot of tables that I am not sure are being used.
A lot of stored procedures that I am not sure are being used.
Badly named tables and stored procs.
I also inherited the ASP.NET application that runs against this database.
I would like to clean this database up. I understand that changing the data types will have to happen table by table. For getting rid of all the extra tables and stored procs, what is the easiest way to do so?
Any other tips to make it cleaner and smaller are appreciated.
I want to also mention that I have Red Gate tools installed (if that helps).
Thank you
Check out SQL Server Data Tools; it allows you to create a project from a live database. One of the things you can do in there is right-click 'Find Usages' on tables, views and functions.
So long as the previous developer used stored procedures and views rather than querying tables directly, it should find the references across your project that way without breaking it.
Also, for finding stored procedures that are not used, put some basic logging at the top of each stored procedure in your application. After X number of days, those that haven't been logged in your table are likely safe to remove; otherwise, a tedious search through your .NET code will find them.
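A minimal sketch of that logging idea, assuming a table called dbo.ProcUsageLog (the table and column names are placeholders):
-- dbo.ProcUsageLog is an assumed name
CREATE TABLE dbo.ProcUsageLog (ProcName sysname NOT NULL, CalledAt datetime2 NOT NULL DEFAULT SYSDATETIME());
-- then add this single line near the top of each stored procedure:
INSERT INTO dbo.ProcUsageLog (ProcName) VALUES (OBJECT_NAME(@@PROCID));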

Creating a New Database from Within a Stored Procedure

Due to an employee quitting, I've been given a project that is outside my area of expertise.
I have a product where each customer will have their own copy of a database. The UI for creating the database (licensing, basic info collection, etc) is being outsourced, so I was hoping to just have a single stored procedure they can call, providing a few parameters, and have the SP create the database. I have a script for creating the database, but I'm not sure the best way to actually execute the script.
From what I've found, this seems to be outside the scope of what an SP can easily do. Is there any sort of "best practice" for handling this sort of program flow?
Generally speaking, SQL scripts - both DML and DDL - are what you use for database creation and population. SQL Server has a command line interface called SQLCMD that these scripts can be run through - here's a link to the MSDN tutorial.
Assuming there's no customization to the tables or columns involved, you could get away with using either detach/attach or backup/restore. These would require that a baseline database exist, with no customer data. Then you use either of those methods to capture the database as-is. Backup/restore is preferable because detach/attach requires the database to be taken offline. But users need to be synced before they can access the database.
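A rough sketch of the restore route (the backup path and logical file names are assumptions; you can read the real logical names with RESTORE FILELISTONLY):
-- the paths and logical file names below are assumptions
RESTORE DATABASE [CustomerA]
FROM DISK = N'C:\Backups\BaselineDb.bak'
WITH MOVE N'BaselineDb_Data' TO N'C:\Data\CustomerA.mdf',
     MOVE N'BaselineDb_Log' TO N'C:\Data\CustomerA_log.ldf',
     RECOVERY;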
If you already have the script to create the database, it is easy for them to run it from within their program. If you have specific prerequisites for creating the database and setting permissions, you can wrap all of the scripts into one script file to execute.
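If you do want a single stored procedure for the outsourced UI to call, here is a hedged sketch; the procedure name is made up, CREATE DATABASE cannot run inside a user transaction, and the caller needs CREATE DATABASE permission:
-- dbo.usp_ProvisionCustomerDatabase is a hypothetical name
CREATE PROCEDURE dbo.usp_ProvisionCustomerDatabase
    @DatabaseName sysname
AS
BEGIN
    DECLARE @sql nvarchar(max) = N'CREATE DATABASE ' + QUOTENAME(@DatabaseName) + N';';
    EXEC (@sql);
    -- the rest of your creation script (tables, permissions) would then run against the new database, e.g.
    -- EXEC (N'USE ' + QUOTENAME(@DatabaseName) + N'; EXEC(N''<your DDL script here>'');');
END;
GO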

Is there a good way to verify if a database schema is correct after an upgrade or migration?

We have customers who are upgrading from one database version to another (Oracle 9i to Oracle 10g or 11g to be specific). In one case, a customer exported the old database and imported it into the new one, but for some reason the indexes and constraints didn't get created. They may have done this on purpose to speed up the import process, but we're still looking into the reason why.
The real question is, is there a simple way that we can verify that the structure of the database is complete after the import? Is there some sort of checksum that we can do on the structure? We realize that we could do a bunch of queries to see if all the tables, indexes, aliases, views, sequences, etc. exist, but this would probably be difficult to write and maintain.
Update
Thanks for the answers suggesting commercial and/or GUI tools, but we really need something free that we can package with our product. It also has to be command-line or script driven so our customers can run it in any environment (Unix, Linux, Windows).
Presuming a single schema, something like this - dump USER_OBJECTS into a table before migration.
CREATE TABLE SAVED_USER_OBJECTS AS SELECT * FROM USER_OBJECTS
Then to validate after your migration
SELECT object_type, object_name FROM SAVED_USER_OBJECTS
MINUS
SELECT object_type, object_name FROM USER_OBJECTS
One issue is that if you have intentionally dropped objects between versions, you will also need to delete them from SAVED_USER_OBJECTS. Also, this will not pick up cases where the wrong version of an object exists.
If you have multiple schemas, then the same thing is required for each schema OR use ALL_OBJECTS and extract/compare for the relevant user schemas.
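For the multi-schema case, a sketch along the same lines (SAVED_ALL_OBJECTS and the owner list are assumptions):
-- SAVED_ALL_OBJECTS and the schema names APP_OWNER / APP_DATA are assumed
CREATE TABLE SAVED_ALL_OBJECTS AS
  SELECT owner, object_type, object_name FROM ALL_OBJECTS WHERE owner IN ('APP_OWNER', 'APP_DATA');
-- after the migration:
SELECT owner, object_type, object_name FROM SAVED_ALL_OBJECTS
MINUS
SELECT owner, object_type, object_name FROM ALL_OBJECTS WHERE owner IN ('APP_OWNER', 'APP_DATA');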
You could also compute a hash/checksum over object_type||object_name for the whole schema (save before, compare after), but the cost of that calculation wouldn't be much different from just comparing the two tables.
If you are willing to spend some money, DBDiff is an efficient utility that does exactly what you need.
http://www.dkgas.com/oradbdiff.htm
In SQL Developer (the free Oracle utility) there is a Database Schema Differences feature.
It's worth trying.
Hope it helps.
SQL Developer - download
Roni.
I wouldn't write the check script; I'd write a program to generate the check script from a particular version of the database. Just go through the metadata, record what's there, and write it to a file, then compare the values in that file against the values in the customer's database. This won't work so well if you use system-generated names for your constraints, but it is probably enough to just verify that things are there.
Dropping indexes and constraints is pretty common when migrating a database, so you might not even need to check too much; if two or three things are missing, then it's not unreasonable to assume they all are. You might also want to write a script that drops all the constraints and indexes and re-creates them, and just have your customers run that as a post-migration step. Just be sure you drop everything by name, so you don't delete any custom indexes your customer might have created.
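A minimal sketch of generating such a check file with SQL*Plus, so it stays free, command-line driven and diffable (the output file name and the columns you record are up to you):
SET PAGESIZE 0 LINESIZE 200 FEEDBACK OFF
SPOOL schema_inventory.txt
SELECT object_type || '|' || object_name FROM user_objects ORDER BY 1;
SELECT constraint_type || '|' || table_name || '|' || constraint_name FROM user_constraints ORDER BY 1;
SELECT index_name || '|' || table_name FROM user_indexes ORDER BY 1;
SPOOL OFF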

Stored Procedures MSSQL2005

If you have a lot of Stored Procedures and you change the name of a column of a table, is there a way to check which Stored Procedures won't work any longer?
Update: I've read some of the answers and it's clear to me that there is no easy way to do this. Would it be easier to move away from stored procedures?
I'm a big fan of SysComments for this:
SELECT DISTINCT Object_Name(ID)
FROM SysComments
WHERE text LIKE '%Table%'
AND text LIKE '%Column%'
There's a book-style answer to this, and a real-world answer.
First, for the book answer, you can use sp_depends to see which other stored procs reference the table (not the individual column), and then examine those to see whether they reference the column:
http://msdn.microsoft.com/en-us/library/ms189487.aspx
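For example (dbo.Orders is a hypothetical table; sp_depends is deprecated in later versions, where sys.dm_sql_referencing_entities is the replacement):
-- dbo.Orders is a hypothetical table name
EXEC sp_depends @objname = N'dbo.Orders';
-- newer alternative:
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities(N'dbo.Orders', N'OBJECT');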
The real-world answer, though, is that it doesn't work in a lot of cases:
Dynamic SQL strings: if you're building strings dynamically, either in a stored proc or in your application code, and then executing that string, SQL Server has no way of knowing what your code is doing. You may have the column name hard-coded in your code, and that'll break.
Embedded T-SQL code: if you've got code in your application (not in SQL Server) then nothing in the SQL Server side will detect it.
Another option is to use SQL Server Profiler to capture a trace of all activity on the server, then search through the captured queries for the field name you want. It's not a good idea on a production server, because the profiler incurs some overhead, but it does work - most of the time. Where it will break is if your application does a "SELECT *" and then expects a specific field name to come back as part of that result set.
You're probably beginning to get the picture that there's no simple, straightforward way to do this.
While this will take the most work, the best way to ensure that everything works is to write integration tests.
Integration tests are just like unit tests, except in this case they would integrate with the database. It would take some effort, but you could easily write tests that exercise each stored procedure to ensure it executes without error.
In the simplest case a test would just execute the SP, make sure there is no error, and not be concerned about the actual results. If your tests just execute SPs without checking results, you could write a lot of this generically.
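A rough T-SQL sketch of that generic smoke test; it is only sensible against a disposable test copy of the database, and it is limited here to procedures that take no parameters:
-- run only against a disposable test database; procedures may have side effects
DECLARE @name nvarchar(512);
DECLARE proc_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT QUOTENAME(SCHEMA_NAME(p.schema_id)) + N'.' + QUOTENAME(p.name)
    FROM sys.procedures AS p
    WHERE NOT EXISTS (SELECT 1 FROM sys.parameters AS pa WHERE pa.object_id = p.object_id);
OPEN proc_cursor;
FETCH NEXT FROM proc_cursor INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        EXEC (N'EXEC ' + @name + N';');   -- we only care that it runs without error
    END TRY
    BEGIN CATCH
        PRINT @name + N' failed: ' + ERROR_MESSAGE();
    END CATCH;
    FETCH NEXT FROM proc_cursor INTO @name;
END;
CLOSE proc_cursor;
DEALLOCATE proc_cursor;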
To do this you would need a database to execute against. While you could set up the database and deploy your stored procs manually, the best way would be to use continuous integration to automatically get the latest code (database DDL, stored procs, tests) from your source control system, build your database, and execute your tests. This would happen every time you committed changes to source control.
Yes it seems like a lot of work. It's a lot of work, but the payoff is also big. The ability to ensure that your changes don't break anything allows you to move your product forward faster with a better quality.
Take a look at NUnit and NDbUnit
I'm sure there are more elegant ways to address this, but if the database isn't too complex, here's a quick and dirty way:
Select all the sprocs and script to a query window.
Search for the old column name.
If you are only interested in finding the column usage in stored procedures, probably the best way is to do a brute-force search for the column name in the definition column of the sys.sql_modules table, which stores the definitions of stored procedures and functions.
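For example (MyOldColumnName is a placeholder for the column you renamed):
-- MyOldColumnName is a placeholder
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id) AS object_name
FROM sys.sql_modules AS m
WHERE m.definition LIKE '%MyOldColumnName%';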