How to version control SQL Server databases? - sql-server

I have SQL Server databases and make changes to them. Some database tables contain seed records that my app requires in order to run. I would like to put the database and these records (rows) under version control. Is it possible to do this and bundle it with the SVN version control I already use for my source code, or are there other solutions? I want to be able to return to a previous version of the database and compare changes between database revisions. It would be nice if the tools for this were free, open source, or not very expensive.
My environment is Visual C# Express, SQL Server 2008 Express and Tortoise SVN.

Late answer but hopefully useful to other readers
I can suggest the SSMS add-in called ApexSQL Source Control. Using this add-in, developers can easily map database objects to a source control system via a wizard directly from SSMS. It supports Git, Mercurial, Subversion, TFS (including Visual Studio Online) and other source control systems. It also supports source controlling static data, so you can version control records as well.
After downloading and installing ApexSQL Source Control, simply right-click the database you want to version control and navigate to the ApexSQL Source Control sub-menu in SSMS. Click the “Link database to source control” option and select the source control system and the database development model.
After that, you may exclude objects you don’t want to be linked to source control. It is possible to exclude specific objects by owner or type.
In the next step, you will be prompted to provide the log-in information for the source control management system.
Once done, just click the “Finish” button and the “Action center” window will be shown, offering the objects that will be committed to the repository (this is the default behavior if the repository is empty).
Once the database has been linked to source control, all the operations that can be executed from a source control client will be available from the “Object Explorer” pane. Those include:
checking out versioned objects with or without a lock,
viewing the history of an object and applying a specific revision,
viewing the changes that were made to an object, and
placing table data under source control using the “Link static data” option.
You can read this article for more information: http://solutioncenter.apexsql.com/sql-source-control-reduce-database-development-time/

We've just started doing the following on some of our projects, and it seems to work quite well for populating "static" tables.
Our scripts follow a pattern where a temp table is constructed, and is then populated with what we want the real table to resemble. We only put human readable values here (i.e. we don't include IDENTITY/GUID columns). The remainder of the script takes the temp table and performs appropriate INSERT/UPDATE/DELETE statements to make the real table resemble the temp table. When we have to change this "static" data, all we have to update is the population of the temp table. This means that DIFFing between versions works as expected, and rollback scripts are as simple as getting a previous version from source control.
The INSERT/UPDATE/DELETEs only have to be written once. In fact, our scripts are slightly more complicated and run two sets of validation before the actual DML statements. One set validates the temp table data (i.e. that we're not going to violate any constraints by attempting to make the database resemble the temp table). The other validates the temp table against the target database (i.e. that foreign keys are available).
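To make the pattern concrete, here is a minimal sketch for a hypothetical lookup table dbo.OrderStatus keyed on StatusName (in our real scripts the validation steps described above run before the DML):
-- Build a temp table describing what dbo.OrderStatus should contain
CREATE TABLE #OrderStatus (StatusName nvarchar(50) NOT NULL PRIMARY KEY, SortOrder int NOT NULL)
INSERT INTO #OrderStatus (StatusName, SortOrder)
VALUES (N'Open', 1), (N'Shipped', 2), (N'Closed', 3)

-- Delete rows that are no longer wanted
DELETE t
FROM dbo.OrderStatus AS t
WHERE NOT EXISTS (SELECT 1 FROM #OrderStatus s WHERE s.StatusName = t.StatusName)

-- Update rows that differ
UPDATE t
SET t.SortOrder = s.SortOrder
FROM dbo.OrderStatus AS t
JOIN #OrderStatus AS s ON s.StatusName = t.StatusName
WHERE t.SortOrder <> s.SortOrder

-- Insert rows that are missing
INSERT INTO dbo.OrderStatus (StatusName, SortOrder)
SELECT s.StatusName, s.SortOrder
FROM #OrderStatus AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.OrderStatus t WHERE t.StatusName = s.StatusName)

DROP TABLE #OrderStatus
Only human-readable values go into the temp table, so a diff of two revisions of this script shows exactly which "static" rows changed.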

Static data support is being added to SQL Source Control 2.0, currently available in beta. More information on how to try this can be found here:
http://www.red-gate.com/messageboard/viewtopic.php?t=12298

There is a free Microsoft product called the Database Publishing Wizard which you can use to script the entire database (schema and data). It is great for taking snapshots of the current state of a DB and will enable you to recreate it from scratch at any point.

For database (schema) versioning we use custom extended properties, which are added to the database when the installer is run. The contents of these scripts are generated by our build scripts.
The script to set the properties looks like this:
DECLARE @AssemblyDescription sysname
SET @AssemblyDescription = N'DailyBuild_20090322.1'

DECLARE @AssemblyFileVersion sysname
SET @AssemblyFileVersion = N'0.9.3368.58294'

-- The extended properties DatabaseDescription and DatabaseFileVersion contain the
-- AssemblyDescription and AssemblyFileVersion of the build that was used for the
-- database script that creates the database structure.
--
-- The current value of these properties can be displayed with the following query:
-- SELECT * FROM sys.extended_properties

IF EXISTS (SELECT * FROM sys.extended_properties WHERE class_desc = 'DATABASE' AND name = N'DatabaseDescription')
BEGIN
    EXEC sys.sp_updateextendedproperty @name = N'DatabaseDescription', @value = @AssemblyDescription
END
ELSE
BEGIN
    EXEC sys.sp_addextendedproperty @name = N'DatabaseDescription', @value = @AssemblyDescription
END

IF EXISTS (SELECT * FROM sys.extended_properties WHERE class_desc = 'DATABASE' AND name = N'DatabaseFileVersion')
BEGIN
    EXEC sys.sp_updateextendedproperty @name = N'DatabaseFileVersion', @value = @AssemblyFileVersion
END
ELSE
BEGIN
    EXEC sys.sp_addextendedproperty @name = N'DatabaseFileVersion', @value = @AssemblyFileVersion
END
GO
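To read the recorded version back later, for example from a build or support script, the two database-level properties can be queried directly; this is just a narrowed form of the query mentioned in the comment above:
SELECT name, value
FROM sys.extended_properties
WHERE class_desc = 'DATABASE'
  AND name IN (N'DatabaseDescription', N'DatabaseFileVersion')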

You can get a version of SQL Server Management Studio for SQL Server Express. I believe you'll be able to use this to produce scripts of your database schema. I think that will leave you to create scripts by hand for inserting the starting records.
Then, put all the scripts into source control, along with a master script that runs the individual scripts in the correct order.
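One simple way to write that master script is SQLCMD mode (supported by sqlcmd.exe, and by SSMS when SQLCMD mode is enabled); the file and folder names below are only an illustration of the idea:
-- Master.sql: run with, for example, sqlcmd -S .\SQLEXPRESS -d MyDatabase -i Master.sql
:r .\Schema\01_Tables.sql
:r .\Schema\02_Views.sql
:r .\Data\03_StartingRecords.sql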
You'll be able to run diffs using WinDiff (free with the Visual Studio SDK), or else Beyond Compare, which is inexpensive and a great diff/merge/sync tool.

MS Visual Studio Team System for Database Developers has functionality to easily generate create scripts for the whole schema. Only drawback is the cost!
Have you considered using SubSonic?

You should rather use DB specific versioning.
http://msdn.microsoft.com/en-us/library/ms189050.aspx
When either the READ_COMMITTED_SNAPSHOT or ALLOW_SNAPSHOT_ISOLATION database options are ON, logical copies (versions) are maintained for all data modifications performed in the database. Every time a row is modified by a specific transaction, the instance of the Database Engine stores a version of the previously committed image of the row in tempdb. Each version is marked with the transaction sequence number of the transaction that made the change. The versions of modified rows are chained using a link list. The newest row value is always stored in the current database and chained to the versioned rows stored in tempdb.
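Note that this is the engine's row versioning for concurrency, not source control of schema or seed data, but if that is what you are after it is enabled per database, for example (MyDatabase is a placeholder name):
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;
Be aware that the READ_COMMITTED_SNAPSHOT change generally needs the database to have no other active connections while it is applied.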

I use bcp for this (bulk loading utility, part of a standard SQL Server install, Express edition included).
Each table with data needs a control file Table.ctl and a data file Table.csv (these are text files that can be generated from an existing database using bcp). As text files, these can very easily be versioned.
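As a sketch (MyDatabaseName and MyTable are placeholders), the two files for an existing table can be produced with bcp's format and out modes against the local Express instance:
bcp MyDatabaseName..MyTable format nul -f MyTable.ctl -c -T -S .\SQLEXPRESS
bcp MyDatabaseName..MyTable out MyTable.csv -c -T -S .\SQLEXPRESS
The -T switch uses a trusted connection, and the generated .ctl file describes the column layout that the import loop below feeds back to bcp via -f.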
As part of my generation batches (see my answer there for more information), I iterate through every control file like this:
SET BASE_NAME=MyDatabaseName
SET CONNECT_STRING=.\SQLEXPRESS
FOR /R %%i IN (.) DO (
    FOR %%j IN ("%%~fi\*.ctl") DO (
        ECHO + %%~nj
        bcp %BASE_NAME%..%%~nj in "%%~dpsj%%~nj.csv" -T -E -S %CONNECT_STRING% -f "%%~dpsj%%~nj.ctl" >"%TMP%\%%~nj.log"
        REM Use IF ERRORLEVEL so the check works inside the parenthesised FOR block
        IF ERRORLEVEL 1 (
            TYPE "%TMP%\%%~nj.log"
            GOTO ERROR_USAGE
        )
    )
)
A current limitation of this script is that the file name must match the table name, which may not be possible if the table name contains certain special characters.

This project has a good example of deploy and rollback

Related

End user initiating SQL commands to create a file from a SQL table?

Using SQL Manager ver 18.4 on SQL Server 2019 servers.
Is there an easier way to allow an end user with NO access to anything SQL-related to fire off some SQL commands that:
1.) create and update a SQL table
2.) then create a file from that table (CSV in my case) that they have access to in a folder share?
Currently I do this using xp_cmdshell with bcp commands in a cloud-hosted environment, hence I am not in control of ANY permission or access, etc. For example:
declare @bcpCommandIH varchar(200)
set @bcpCommandIH = 'bcp "SELECT * from mydb.dbo.mysqltable order by 1 desc" queryout E:\DATA\SHARE\test\testfile.csv -S MYSERVERNAME -T -c -t, '
exec master..xp_cmdshell @bcpCommandIH
The way I achieve this now is by letting the end users run a Crystal Report which fires a SQL stored procedure that runs some code to create and update a SQL table and then creates a CSV file that the end user can access. Creating and updating the table is easy. Getting the table into the hands of the end user is nothing but trouble in this hosted environment.
We always end up with permission or other folder-share issues and it's a complete waste of time. The cloud service admins tell me "this is a huge security issue and you need to start and stop xp_cmdshell with some commands every time you want to generate this file to be safe".
Well, this is nonsense to me. I don't want to have to touch any of this and it needs to be AUTOMATED for the end user start to finish.
Is there some easier way to AUTOMATE a process for an END USER to create and update a SQL table and simply get the contents of that table exported to a CSV file without all the administration trouble?
Are there other, simpler options than xp_cmdshell and bcp to achieve this?
Thanks,
MP
Since the environment allows you to run a Crystal Report, you can use the report to create a table via ODBC Export. There are 3rd-party tools that allow that to happen even if the table already exists (giving you an option to replace or append records to an existing target table).
But it's not clear why you can't get the data directly into the Crystal report and simply export to csv.
There are free/inexpensive tools that allow you to automate/schedule the exporting/emailing/printing of a Crystal Report. See list here.

Publish of Database Project fails because deployment script attempts to drop and re-create an unmodified table

I'm using Visual Studio 15.8.5 with SQL Server Data Tools 15.1.
I've created a SQL Server database project and imported the schema of an already existing database. I've made several minor changes to a few tables of the database and published the updates to the development database without any problems.
After adding a few SQL scripts to the project, all of them with
Build Action = None
the publish fails, even though no changes have been made to any of the database objects in the project.
This is the part of the auto-generated publish script that causes the problem:
/*
The table [lut].[KAE] is being dropped and re-created since all
non-computed columns within the table have been redefined.
*/
IF EXISTS (select top 1 1 from [lut].[KAE])
    RAISERROR (N'Rows were detected. The schema update is terminating because data loss might occur.', 16, 127) WITH NOWAIT
GO
Table [lut].[KAE] has not been changed, though. One of the scripts is redefining its schema but this should make no difference since this is a 'No Build' script.
What am I possibly doing wrong here?
Edit:
I've done a schema comparison as @MadBert advised. I originally used my actual database as the source and my SQL Server Visual Studio project as the target. No differences were found.
I then switched source and target databases and compared again. The following 'difference' was detected.
As you can see this is not an actual difference, it looks like a Visual Studio bug in schema comparison. Any ideas on how I could circumvent this behavior?
It turned out that a refactor log file was the culprit.
I tried to publish to an empty database, as @Ogglas wisely advised. I noticed that during publish I was getting the following message:
The following operation was generated from a refactoring log file 8e659d92-10bb-4ce9-xxxx-xxxxxxxxx
Rename [lut].[KAE].[xxxxx] to $$$$$$$$$
Caution: Changing any part of an object name could break scripts and stored procedures.
I then noticed that my SQL Server database project contained a .refactorlog file.
It seems that this log file was generated after I changed the offending table schema. The schema of the table was later reverted to its original state but the log file remained.
I deleted this log file and after that publish finally succeeded!
I had a similar problem when a SQL Server database project was set to the wrong target platform. Edit this in the project properties to match the target server. Then initiate a schema compare again by right-clicking the project and selecting Schema Compare....
Also check whether Ignore whitespace is ticked in the Schema Compare options. If you still see a difference one way or the other, try pasting the text into Notepad++ with Show All Characters on and see if you can spot a difference.
If you still can't find any difference, try creating a new database from the project and use the SSMS GUI to compare. Does the table have the same lock escalation settings, etc.?
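For the lock escalation check in particular, one quick way to compare both sides is to query sys.tables (the [lut].[KAE] table from the question is used here only as an example):
SELECT s.name AS schema_name, t.name AS table_name, t.lock_escalation_desc
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE s.name = N'lut' AND t.name = N'KAE';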

How to store queries in SQL Server database?

I manage a mini database and I write procedures for complex transactions and data cleansing. I also do a lot of ad-hoc querying and I save all of my queries in a folder. Is there any way I can save these queries in the database so that some of my peers can review my SQL queries?
In my search, I understand that I can write a procedure for smaller queries too. But I want to know if there is another method to do this?
For a select statement use a view:
CREATE VIEW MyView
AS
SELECT Columns FROM TABLE
Now you can select from that
SELECT * FROM MyView
and join to it:
SELECT * FROM MyView
INNER JOIN SomethingElse
ON MyView.ID = SomethingElse.ID
For scripts that update/delete/insert or do procedural things in order, use a stored procedure instead.
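For example, a minimal stored procedure wrapping one of those saved queries might look like this (the table and column names are purely illustrative):
CREATE PROCEDURE dbo.usp_GetOrdersForCustomer
    @CustomerID int
AS
BEGIN
    SET NOCOUNT ON;
    -- the saved ad-hoc query goes here, parameterised where useful
    SELECT OrderID, OrderDate, Total
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
    ORDER BY OrderDate DESC;
END
GO

-- usage:
-- EXEC dbo.usp_GetOrdersForCustomer @CustomerID = 42;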
You can have queries (views) persisted in the database itself. You can use the CREATE statement to create views, stored procedures, table-valued functions etc., which will be accessible through IntelliSense and show up in the database object tree.
You can create a new folder in the Template Browser and add code as new templates. If you want to share these ACROSS your team using SSMS, you can also do the following. You DO have to store the code elsewhere, but it can be accessed within SSMS by all users when it is set up this way on their machines:
See: https://www.sqlservercentral.com/articles/ssms-shared-sql-templates
Short synopsis:
Store code examples in a central location and re-point the SQL templates folder on each user's machine to that central location. Use mklink to replace the Sql folder under the following location with a symbolic link, so that the Sql folder no longer points to the local templates but to the alternate central location path you specify:
C:\Users\YourUserName\AppData\Roaming\Microsoft\SQL Server Management Studio\11.0\Templates
To do this, open the command prompt and:
go to the user path above and rename the Sql folder found there: ren Sql Sql_Old
create the symbolic link: mklink /D Sql C:\ss\Internal\Code\TSQL\SSMS_Templates
If successful, you will see:
symbolic link created for Sql <<===>> path of central code
Afterwards, the template browser will link to the central location and show whatever is in there.

Visual Studio 2010 Database Project Deploy to Different Environments

I've got a Database project which works fine for my local MSSQL 2008 database.
It has a script under scripts/post-deployment that inserts standing data configuration into a settings table, and other tables. I've got 1 file for each table, e.g. a Setting.sql file to insert data into the Settings table.
The settings will be different depending on the database I deploy to.
How can I script this? Basically I would like to be able to have, say, two files,
Prod.Setting.sql and Dev.Setting.sql, and VS 2010 would use the appropriate script depending on which database (environment) I am deploying to.
Completely doable, but there are two options and a few steps involved. You can have a complete duplicate set of scripts, one for each configuration. Or, you can have one set of scripts whose contents take into account the configuration you are using. I'll go with the first, and I'll keep it simple.
Create two solution configurations, or use Debug and Release if you like. I'll use these for the example.
For each configuration, create a new .sqlcmdvars file.
Database_Release.sqlcmdvars
Database_Debug.sqlcmdvars
Switch your solution configuration to each and in the database project properties change the variables file drop down to point at the corresponding file you created.
In each of those files you can define variables to be used during deployment. Create a new variable in each one
$(DeploymentConfiguration)
And set its value in each one to either Debug or Release
Then in any of your Pre or Post deployment scripts you can do something like so:
IF '$(DeploymentConfiguration)' = 'Debug'
BEGIN
PRINT 'Executing Debug deployment'
:r .\Debug\SomeNeededScript.sql
END
IF '$(DeploymentConfiguration)' = 'Release'
BEGIN
PRINT 'Executing Release deployment'
:r .\Release\Anotherscript.sql
END

How can I use SQL scripts in a Database Project with the System.Data.SQLite data provider?

I've got a project where I'm attempting to use SQLite via System.Data.SQLite. In my attempts to keep the database under version-control, I went ahead and created a Database Project in my VS2008. Sounds fine, right?
I created my first table create script and tried to run it using right-click->Run on the script and I get this error message:
This operation is not supported for the provider or data source you are using.
Does anyone know if there's an automatic way to use scripts that are part of a database project against SQLite databases referenced by the project, using the provider supplied by the System.Data.SQLite install?
I've tried every variation I can think of in an attempt to get the script to run using the default Run or Run On... commands. Here's the script in its most verbose and probably incorrect form:
USE Characters
GO
IF EXISTS (SELECT * FROM sysobjects WHERE type = 'U' AND name = 'Skills')
BEGIN
DROP Table Skills
END
GO
CREATE TABLE Skills
(
SkillID INTEGER PRIMARY KEY AUTOINCREMENT,
SkillName TEXT,
Description TEXT
)
GO
Please note, this is my first attempt at using a Database, and also the first time I've ever touched SQLite. In my attempts to get it to run, I've stripped any and everything out except for the CREATE TABLE command.
UPDATE: Ok, so as Robert Harvey points out below, this looks like a SQL Server stored procedure. I went into the Server Explorer and used my connection (from the database project) to do what he suggested regarding creating a table. I can generate SQL from it to create the table, and it comes out like this:
CREATE TABLE [Skills] (
[SkillID] integer PRIMARY KEY NOT NULL,
[SkillName] text NOT NULL,
[Description] text NOT NULL
);
I can easily copy this and add it to the project (or add it to another project that handles the rest of my data access), but is there any way to automate this on build? I suppose, since SQLite is a single file in this case, that I could also keep the built database under version control as well.
Thoughts? Best practices for this instance?
UPDATE: I'm thinking that, since I plan on using Fluent NHibernate, I may just use its auto-persistence model to keep my database up to snuff and effectively in source control. Thoughts? Pitfalls? I think I'll have to keep the initial population inserts in source control separately, but it should work.
I built my database using an SQLite SQL script and then fed it into the sqlite3.exe console program like this:
c:\sqlite3.exe mydatabase.db < FileContainingSQLiteSQLCommands
John
Well, your script looks like a SQL Server stored procedure. SQLite most likely doesn't support this, because
It doesn't support stored procedures, and
It doesn't understand SQL Server T-SQL
SQL is actually a pseudo-standard. It differs between vendors and sometimes even between different versions of a product within the same vendor.
That said, I don't see any reason why you can't run any (SQLite compatible) SQL statement against the SQLite database by opening up connection and command objects, just like you would with SQL Server.
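For what it's worth, a rough SQLite-compatible version of the script above, with the T-SQL-only USE/GO/sysobjects parts removed, would be something like:
-- SQLite understands DROP TABLE IF EXISTS, so no sysobjects check is needed
DROP TABLE IF EXISTS Skills;

CREATE TABLE Skills
(
    SkillID     INTEGER PRIMARY KEY AUTOINCREMENT,
    SkillName   TEXT,
    Description TEXT
);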
Since, however, you are new to databases and SQLite, here is how you should start. I assume you already have SQLite installed.
Create a new Windows Application in Visual Studio 2008. The database application will be of no use to you.
Open the Server Explorer by pulling down the View menu and selecting Server Explorer.
Create a new connection by right-clicking on the Data Connections node in Server Explorer and clicking on Add New Connection...
Click the Change button
Select the SQLite provider
Give your database a file name.
Click OK.
A new Data Connection should appear in the Server Explorer. You can create your first table by right-clicking on the Tables node and selecting Add New Table.
