I need to put versions onto a SQL Server 2005 database and have these accessible from a .NET application. What I was thinking of is using an extended property on the database with a name of 'version', where the value would of course be the version of the database. I can then use SQL to get at it. My question is: does this sound like a good plan, or is there a better way of adding versions to a SQL Server database?
Let's assume I am unable to use a table for holding the metadata.
I do this:
Create a schema table:
CREATE TABLE [dbo].[SchemaVersion](
    [Major] [int] NOT NULL,
    [Minor] [int] NOT NULL,
    [Build] [int] NOT NULL,
    [Revision] [int] NOT NULL,
    [Applied] [datetime] NOT NULL,
    [Comment] [text] NULL)
Update Schema:
INSERT INTO SchemaVersion(Major, Minor, Build, Revision, Applied, Comment)
VALUES (1, 9, 1, 0, getdate(), 'Add Table to track pay status')
Get database Schema Version:
SELECT TOP 1 Major, Minor, Build FROM SchemaVersion
ORDER BY Major DESC, Minor DESC, Build DESC, Revision DESC
Adapted from what I read on Coding Horror
We use the Extended Properties as you described it and it works really well.
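For reference, a minimal sketch of setting and reading a database-level 'version' property (the property name follows the question; later releases would use sp_updateextendedproperty instead):

-- Attach the property to the database itself (no level parameters needed)
EXEC sp_addextendedproperty @name = N'version', @value = N'1.0.0.0';

-- Read it back; class 0 means the property belongs to the database
SELECT value FROM sys.extended_properties WHERE class = 0 AND name = N'version';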
I think having a table is overkill. If I want to track the differences in my databases I use source control and keep all the db generation scripts in it.
I've also used some ER diagram tools to help me keep track of changes in DB versions. This was outside the actual application but it allowed me to quickly see what changed.
I think it was CASEStudio, or something like that.
If I understand your question right (differentiating between internal database versions, like application build numbers), you could have some sort of SYSVERSION table that held a single row of data with this info.
Easier to query.
Could also contain multiple columns of useful info, or multiple rows that represent different times that copy of the database was upgraded.
Update: Well, if you can't use a table to hold the metadata, then either external info of some sort (an INFO file on the hard drive?) or extended properties would be the way to go.
I still like the table idea, though :) You could always use security to make it accessible only through a custom stored proc, get_db_version or something.
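A minimal sketch of that idea, reusing the SchemaVersion table from the first answer (the procedure name get_db_version is just the suggestion above, not a standard API):

CREATE PROCEDURE dbo.get_db_version
AS
BEGIN
    SET NOCOUNT ON;
    -- Return the highest recorded version
    SELECT TOP 1 Major, Minor, Build, Revision
    FROM dbo.SchemaVersion
    ORDER BY Major DESC, Minor DESC, Build DESC, Revision DESC;
END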
The best way to do this is to have two procedures: a header that controls what is being inserted and runs validations, and a footer that records whether or not the release was good. The body will contain your scripts.
You need a wrapper that encapsulates your script and records all the info: the release, the script number applied, who applied it, the apply date, and the release outcome ("failed" or "succeeded").
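A minimal sketch of such a wrapper, assuming a hypothetical ReleaseLog table with the columns described above:

-- Header: abort if this release was already applied successfully
IF EXISTS (SELECT * FROM dbo.ReleaseLog
           WHERE Release = '1.9.1' AND Outcome = 'succeeded')
BEGIN
    RAISERROR('Release 1.9.1 has already been applied.', 16, 1);
    RETURN;
END

BEGIN TRY
    BEGIN TRANSACTION;
    -- Body: the actual release scripts go here
    COMMIT;
    -- Footer: record the successful outcome
    INSERT INTO dbo.ReleaseLog (Release, ScriptNumber, ApplyBy, ApplyDate, Outcome)
    VALUES ('1.9.1', 1, SUSER_SNAME(), GETDATE(), 'succeeded');
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK;
    -- Footer: record the failure
    INSERT INTO dbo.ReleaseLog (Release, ScriptNumber, ApplyBy, ApplyDate, Outcome)
    VALUES ('1.9.1', 1, SUSER_SNAME(), GETDATE(), 'failed');
END CATCH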
I am using a dedicated table similar to Matt's solution. In addition, database alter scripts must check the current version before applying any changes to the schema. If the current version is smaller than expected, the script terminates with a fatal error. If the current version is larger than expected, the script skips the current step, because that step has already been performed at some time in the past.
Here is the complete solution with examples and conventions in writing database alter scripts: How to Maintain SQL Server Database Schema Version
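A minimal sketch of that version check at the top of an alter script, reusing the SchemaVersion table from the first answer (the expected version 1.9 is illustrative):

DECLARE @major INT, @minor INT;
SELECT TOP 1 @major = Major, @minor = Minor
FROM dbo.SchemaVersion
ORDER BY Major DESC, Minor DESC, Build DESC, Revision DESC;

-- Current version smaller than expected: abort
IF @major < 1 OR (@major = 1 AND @minor < 9)
BEGIN
    RAISERROR('Database version is older than this script expects.', 16, 1);
    RETURN;
END

-- Exactly the expected version: apply the step, then bump the version
IF @major = 1 AND @minor = 9
BEGIN
    -- ... schema changes go here ...
    INSERT INTO dbo.SchemaVersion (Major, Minor, Build, Revision, Applied, Comment)
    VALUES (1, 10, 0, 0, GETDATE(), 'Applied step 1.10');
END
-- A larger current version falls through: the step was already applied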
Related
Problem: Junior SQL dev here, working with a SQL Server database where we have many functions that use temp tables to pull data from various tables to populate Crystal Reports etc. We had an issue where a user action in our client caused a string to overflow the defined NVARCHAR(100) character limit of the column. As a quick fix, one of our seniors decided on a schema change to set the column definition to NVARCHAR(255), instead of fixing the issue of the string getting too long. Now we have lots of these table-based functions using temp tables that reference the column in question, but the temp table variable is still defined as 100 instead of 255.
Question: Is there an easy way to find and update all of these functions? Some functions might not reference the table/column in question at all, but some rely heavily on this data to feed reports etc. I know I can right-click a table and select "View Dependencies" in SQL Server Management Studio, but it seems very tedious to go through all of them and then update our master schema before deploying it to all customers.
I thought about a find-and-replace, if there is a way to script or export the functions, but I fear a problem I will run into is that one variable in one function might be declared as TransItemDescription NVARCHAR(100) while another might be TransItemDesc NVARCHAR (100). I've heard of people avoiding temp tables, maybe because of these issues, so maybe there is just bad database design here?
Thus far I've been going through them one at a time using "View Dependencies" in SSMS.
I think the best solution would be to script out the whole database into a single script from SSMS. Then use Notepad++ (or equivalent) to find either:
All occurrences of NVARCHAR(100)
All occurrences of the variable name, e.g. TransItemDescription, TransItemDesc.
Once you have found all occurrences, make a list of all the functions to be fixed. You would still need to fix each function manually, but once that is complete the issue should be totally resolved.
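If you would rather search inside the server than in a scripted file, a minimal sketch using the sys.sql_modules catalog view (the identifiers TransItemDesc and NVARCHAR(100) come from the question; add patterns for spacing variants like NVARCHAR (100) as needed):

-- List every module whose definition mentions the type or the variable
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id) AS object_name
FROM sys.sql_modules AS m
WHERE m.definition LIKE '%NVARCHAR(100)%'
   OR m.definition LIKE '%TransItemDesc%';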
I want to create, via Excel or Oracle, a database for a storage room that is filled with all kinds of computer parts.
I have never created something like that, so I wanted to know if you could give me some advice on how a beginner should create such a database.
It should be possible to insert and remove parts, or even update them.
Hope my question is readable and understandable.
Thanks
A simple option that gives you not only the table (so that you can write your own DML statements to insert, update, or delete rows) but also a nice application is to use Oracle Application Express (APEX).
Depending on the database version you use, it might already be installed by default. If not, ask your DBA to install it.
Alternatively, create a free account on apex.oracle.com; you'll get limited space (more than enough to do what you want to do).
In Application Builder, use the Excel file you have as a "source", which APEX's wizard will then use to create a table in the database as well as an application: a true GUI that works and looks just fine.
If you don't have anything at all, not even an Excel file, well ... that's another problem and requires some more work to be done.
you have to know what you want (OK, a storage room)
is a single table enough to contain all information you'd want to collect?
if so, which columns (attributes) do you want to collect?
if not (for example, you want to "group" items), you'll need at least two tables related to each other by means of a master-detail relationship, which also means that you'll have to create a foreign key constraint (see the sketch after this list)
which datatypes are appropriate for certain attributes? You wouldn't store item names in a NUMBER column, right? Nor should you put dates (when an item entered the storage room) as strings in a VARCHAR2 column; they belong in a DATE column
etc.
Basically, YMMV.
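If it does come to two related tables, a minimal sketch in Oracle SQL of the master-detail layout described above (all names and columns are illustrative; identity columns assume Oracle 12c or later):

-- Master: a group of items (e.g. "cables", "RAM modules")
CREATE TABLE item_group (
    group_id  NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name      VARCHAR2(100) NOT NULL
);

-- Detail: one row per part, linked to its group by a foreign key
CREATE TABLE item (
    item_id     NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    group_id    NUMBER NOT NULL REFERENCES item_group (group_id),
    name        VARCHAR2(200) NOT NULL,
    quantity    NUMBER DEFAULT 1,
    entered_on  DATE DEFAULT SYSDATE   -- a real DATE, not a string
);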
For the script below, written in a .sql file:
IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'abc_form')
CREATE TABLE abc_forms (
    x BIGINT IDENTITY,
    y VARCHAR(60),
    PRIMARY KEY (x)
)
The above script has a bug in the table name: it checks sys.tables for abc_form but creates abc_forms, so the existence guard never takes effect.
For programming languages like Java or C, the compiler helps resolve most name-resolution problems.
For a SQL script, how should one approach unit testing it? Static analysis...?
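One simple form of unit test here is a post-run assertion: execute the script against a scratch database, then check that every object it was supposed to create actually exists. A minimal sketch, assuming the intended name was abc_form:

-- Assert that the expected table exists after running the script
IF OBJECT_ID('dbo.abc_form', 'U') IS NULL
    RAISERROR('Expected table dbo.abc_form was not created.', 16, 1);

This would catch the mismatch above, since the table actually created is abc_forms.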
Fifteen years ago I did something like you're requesting, via a lot of scripting. But we had special formats for the statements.
We had three different kinds of files:
One SQL file to setup the latest version of the complete database schema
One file for all the changes to apply to older database schemas (custom format like version;SQL)
One file for SQL statements the code uses on the database (custom format like statementnumber;statement)
It was required that every statement was on one line so that it could be extracted with awk!
1) First I set up the latest version of the database by executing one statement after the other and logging the errors to a file.
2) Second, I did the same for all the changes to get a second schema
3) I compared the two database schemas to find any differences
4) I filled in some dummy test values in the complete latest schema for testing
5) Last but not least I executed every SQL statement against the latest schema with test data and logged every error again.
In the end the whole thing ran every night, and there was no morning without new errors that one of the 20 developers had put into version control. But it saved us a lot of time during the next install at a new customer.
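For step 3, a minimal sketch of how two schemas can be compared with plain SQL, assuming the two databases are hypothetically named FreshBuild and Upgraded (run it in both directions to see columns missing on either side):

-- Columns present in the fresh build but not in the upgraded schema
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM FreshBuild.INFORMATION_SCHEMA.COLUMNS
EXCEPT
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM Upgraded.INFORMATION_SCHEMA.COLUMNS;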
You could also generate the SQL scripts from your code.
Code first avoids these kinds of problems. Choosing between code first or database first usually depends on whether your main focus is on your data or on your application.
Does anyone know how Schema Compare in Visual Studio (using 2010 currently) determines how to handle [SQL Server 2008 R2] database table updates (column data type, optionality, etc.)?
The options are to:
Use separate ALTER TABLE statements
Create a new table, copy the old data into it, and rename the old table so the new one can be renamed to assume the proper name
I'm asking because we have a situation involving a TIMESTAMP column (for optimistic locking). If Schema Compare uses the new-table approach, the TIMESTAMP column values will change and cause problems for anyone holding the old TIMESTAMP values.
I believe Schema Compare employs the same CREATE-COPY-DROP-RENAME (CCDR) strategy as VSTSDB, described here: link
Should be able to confirm this by running a compare and scripting out the deploy, no?
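For illustration, a minimal sketch of the CCDR pattern with hypothetical table and column names; because a ROWVERSION/TIMESTAMP column cannot be copied, every row receives a fresh value when inserted into the new table, which is exactly the concern above:

BEGIN TRANSACTION;
CREATE TABLE dbo.tmp_Orders (
    ID     INT NOT NULL PRIMARY KEY,
    Status NVARCHAR(255) NULL,
    RowVer ROWVERSION          -- regenerated for every inserted row
);
INSERT INTO dbo.tmp_Orders (ID, Status)
SELECT ID, Status FROM dbo.Orders;   -- RowVer cannot be carried over
DROP TABLE dbo.Orders;
EXEC sp_rename 'dbo.tmp_Orders', 'Orders';
COMMIT;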
I'm comparing two SQL server databases (development and live environment, SQL2005 and SQL2008 respectively) to check for differences between the two. If I generate a script for each database I can use a simple text comparison to highlight the differences.
The problem is that the scripts need to be in the same order to ease comparison and avoid spurious differences where the order of the stored procedures differs but their contents are the same.
So if I generate this from development:
1: CREATE TABLE dbo.Table1 (ID INT NOT NULL, Name VARCHAR(100) NULL)
2: CREATE TABLE dbo.Table2 (ID INT NOT NULL, Name VARCHAR(100) NULL)
3: CREATE TABLE dbo.Table3 (ID INT NOT NULL, Name VARCHAR(100) NULL)
And this from live:
1: CREATE TABLE dbo.Table1 (ID INT NOT NULL, Name VARCHAR(100) NULL)
2: CREATE TABLE dbo.Table3 (ID INT NOT NULL, Name VARCHAR(100) NULL)
3: CREATE TABLE dbo.Table2 (ID INT NOT NULL, Name VARCHAR(100) NULL)
Comparing the two highlights lines 2 and 3 as different, but they're actually identical; the Generate Scripts wizard simply did Table3 before Table2 on the live environment. Add in hundreds of tables, stored procedures, views, etc. and this quickly becomes a mess.
My current options are:
Manually sort the contents before comparison
Create a program to create the scripts in a specific order
Find a freeware application that sorts the generated scripts
Pay for a product that does this as part of its suite of tools
(Some other way of doing this)
Hopefully, I'm only missing the checkbox that says "Sort scripts by name", but I can't see anything that does this. I don't feel I should have to pay for something as simple as a 'sort output' option or lots of other unneeded tools, so option 4 should just be a last resort.
EDIT
I have full access to both environments, but the live environment is locked down and hosted on virtual servers, with remote desktop being the typical way to access live. My preference is to copy what I can to development and compare there. I can generate scripts for each type of object in the database as separate files (tables, SPs, functions, etc.).
Depending on your version of Visual Studio 2010 (if you have it), you can do this easily via the Data menu; based on your original intent, you might save yourself some time.
Edit: Generating the actual DBs and then comparing them with the schema comparison tool as shown below has the same net effect as comparing two script files, and you don't have to worry about line breaks etc.
Red Gate's SQL Compare is the best thing to use for this; worth every penny.
This is quite hard to do with scripts, because SQL Server will tend to generate the tables/objects in the order that makes sense to it (e.g. dependency order) rather than alphabetical order.
There are other complications that come up when you start comparing databases - for example the names of constraint objects may be randomly generated, so the same constraint may have different names in each DB.
Your best bet is probably option (4), I'm afraid ... an evaluation copy of Red Gate SQL Compare - free for 30 days. I've used it a lot and it's very good at pinpointing the differences that matter. It will then generate a script for you to bring the two schemas back into sync.
edit: or Visual Studio 2010 Ultimate (or Premium) can do it apparently - see kd7's answer
You can use WinMerge to some extent to find out if the lines are found elsewhere when comparing two generated scripts. I think it works in the simpler cases.
Using WinMerge v2.12.4.0 Unicode. Note the color usage for highlighting these below.
Here is the help for the Edit -> Options -> Compare "Enable moved block detection":
3.6. Enable moved block detection
Disabled (default): WinMerge does not detect when differences are due to moved lines.
Enabled: WinMerge tries to detect lines that are moved (in different locations in each file). Moved blocks are indicated by the Moved and Selected Moved difference colors. If the Location bar is displayed, corresponding difference locations in the left and right location bars are connected with a line. Showing moved blocks can make it easier to visualize changes in files, if there are not too many.
For an example, see the Location pane description in Comparing and merging files.
I had a similar issue. My database is SQL Server 2008. I realized that if I generate scripts through Object Explorer Details, I get the objects in the order in which their names are displayed. In this way I was able to compare two databases and find out their differences.
The only problem with this is that I had to generate separate scripts for tables, stored procedures, triggers, etc.
But the comparison itself is easy.
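As an alternative to clicking through the wizard, a minimal sketch that pulls programmable-object definitions in name order straight from the catalog views, so the output from the two databases lines up for comparison:

-- Procedures, functions, views, triggers, sorted by type then name
SELECT o.type_desc,
       s.name + '.' + o.name AS object_name,
       m.definition
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
ORDER BY o.type_desc, s.name, o.name;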