I am working on an SSIS package. I added one more Data Flow Task to an existing SSIS package. After adding the new task, I rebuilt the package and it succeeded without any errors.
Do I need to deploy it to the development server?
Background
The 2012 SSIS Project Deployment model in Visual Studio contains a file for project parameters, project level connection managers, packages and anything else you've added to the project.
In the following picture, you can see that I have a Solution named Lifecycle. That solution has a project named Lifecycle. The Lifecycle project has a Project Level Connection Manager ERIADOR defined and two SSIS packages: Package00.dtsx and Package01.dtsx.
When you run a package, behind the scenes Visual Studio will first build/compile all the required project elements into a deployable quantum called an ispac (pronounced eye-ess-pack, not ice-pack). This will be found in the bin\Development subfolder for your project.
Lifecycle.ispac is a zip file with the following contents.
What's all this mean? The biggest difference is that instead of just deploying an updated package, you'll need to deploy the whole .ispac. Yes, you really have to redeploy everything even though you only changed one package. Such is life.
How do I deploy packages using the SSIS Project Deployment model?
You have a host of options available to you, but the 3 things you will need to know are
where is my ispac
what server am I deploying to
what folder does this project deploy to
SSDT
This will probably be your most common option in the beginning. Within SQL Server Data Tools, SSDT, you have the ability to define at the Configuration Manager level what server and what folder things are deployed to. At my client, I have 3 configurations: Dev, Stage, Production. Once you define those values, they get saved into the .dtproj file and you can then right click and deploy to your heart's content from visual studio.
ISDeploymentWizard - GUI flavor
SSDT is really just building the call to the ISDeploymentWizard.exe which comes in 32 and 64 bit flavors for some reason.
C:\Program Files\Microsoft SQL Server\110\DTS\Binn\ISDeploymentWizard.exe
C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn\ISDeploymentWizard.exe
The .ispac extension is associated with the ISDeploymentWizard, so double click and away you go. The first screen is new compared to using the SSDT interface, but after that it will be the same set of clicks to deploy.
ISDeploymentWizard - command line flavor
What they got right with the 2012 release, and what sucked with the package deployment model, is that deployment can now be automated; the old manifest file could not be deployed in an automated fashion. I had a workaround, but it should have been a standard "thing".
So look carefully at the Review tab from either the SSDT or GUI deploy. Isn't that a beauty?
Using the same executable, ISDeploymentWizard, we can have both an attended and unattended installer for our .ispac(s). Highlight the second line there, copy paste and now you can have continuous integration!
C:\Program Files\Microsoft SQL Server\110\DTS\Binn\ISDeploymentWizard.exe
/Silent
/SourcePath:"C:\Dropbox\presentations\SSISDB Lifecycle\Lifecycle\Lifecycle\bin\Development\Lifecycle.ispac"
/DestinationServer:"localhost\dev2012"
/DestinationPath:"/SSISDB/Folder/Lifecycle"
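If you are scripting this for continuous integration, the same call can be made from PowerShell. Here is a minimal sketch using the example paths and server from above; the call operator (&) handles the space in "Program Files":
& "C:\Program Files\Microsoft SQL Server\110\DTS\Binn\ISDeploymentWizard.exe" `
    /Silent `
    /SourcePath:"C:\Dropbox\presentations\SSISDB Lifecycle\Lifecycle\Lifecycle\bin\Development\Lifecycle.ispac" `
    /DestinationServer:"localhost\dev2012" `
    /DestinationPath:"/SSISDB/Folder/Lifecycle"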
TSQL
You can deploy an ispac to SQL Server through SQL Server Management Studio, SSMS, or through the command line, sqlcmd.exe. While SQLCMD is not strictly required, it simplifies the script.
You must use a windows account to perform this operation though otherwise you'll receive the following error message.
The operation cannot be started by an account that uses SQL Server Authentication. Start the operation with an account that uses Windows Authentication.
Furthermore, you'll need the ability to perform bulk operations (to serialize the .ispac) and ssis_admin/sa rights to the SSISDB database.
Here we use the OPENROWSET with the BULK option to read the ispac into a varbinary variable. We create a folder via catalog.create_folder if it doesn't already exist and then actually deploy the project with catalog.deploy_project. Once done, I like to check the operations messages table to verify things went as expected.
USE SSISDB
GO
-- You must be in SQLCMD mode
-- setvar isPacPath "C:\Dropbox\presentations\SSISDB Lifecycle\Lifecycle\Lifecycle\bin\Development\Lifecycle.ispac"
:setvar isPacPath "<isPacFilePath, nvarchar(4000), C:\Dropbox\presentations\SSISDB Lifecycle\Lifecycle\Lifecycle\bin\Development\Lifecycle.ispac>"
DECLARE
    @folder_name nvarchar(128) = 'TSQLDeploy'
,   @folder_id bigint = NULL
,   @project_name nvarchar(128) = 'TSQLDeploy'
,   @project_stream varbinary(max)
,   @operation_id bigint = NULL;

-- Read the zip (ispac) data in from the source file
SELECT
    @project_stream = T.stream
FROM
(
    SELECT
        *
    FROM
        OPENROWSET(BULK N'$(isPacPath)', SINGLE_BLOB) AS B
) AS T (stream);

-- Test for folder existence
IF NOT EXISTS
(
    SELECT
        CF.name
    FROM
        catalog.folders AS CF
    WHERE
        CF.name = @folder_name
)
BEGIN
    -- Create the folder for our project
    EXECUTE [catalog].[create_folder]
        @folder_name
    ,   @folder_id OUTPUT;
END

-- Actually deploy the project
EXECUTE [catalog].[deploy_project]
    @folder_name
,   @project_name
,   @project_stream
,   @operation_id OUTPUT;

-- Check to see if something went awry
SELECT
    OM.*
FROM
    catalog.operation_messages AS OM
WHERE
    OM.operation_id = @operation_id;
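If you save that script to a file, you can also run it unattended with sqlcmd.exe, which runs in SQLCMD mode by default. A minimal sketch, assuming a hypothetical file name deploy_project.sql and the example server from above (replace the template placeholder in the :setvar line with your actual .ispac path first):
sqlcmd -S "localhost\dev2012" -E -i "deploy_project.sql"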
Your MOM
As in, your Managed Object Model provides a .NET interface for deploying packages. This is a PowerShell approach for deploying an ispac along with creating the folder as that is an option the ISDeploymentWizard does not support.
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Management.IntegrationServices") | Out-Null
#this allows the debug messages to be shown
$DebugPreference = "Continue"
# Retrieves a 2012 Integration Services CatalogFolder object
# Creates one if not found
Function Get-CatalogFolder
{
    param
    (
        [string] $folderName
    ,   [string] $folderDescription
    ,   [string] $serverName = "localhost\dev2012"
    )
    $connectionString = [String]::Format("Data Source={0};Initial Catalog=msdb;Integrated Security=SSPI;", $serverName)
    $connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
    $integrationServices = New-Object Microsoft.SqlServer.Management.IntegrationServices.IntegrationServices($connection)

    # The one, the only SSISDB catalog
    $catalog = $integrationServices.Catalogs["SSISDB"]

    $catalogFolder = $catalog.Folders[$folderName]
    if (-not $catalogFolder)
    {
        Write-Debug([System.String]::Format("Creating folder {0}", $folderName))
        $catalogFolder = New-Object Microsoft.SqlServer.Management.IntegrationServices.CatalogFolder($catalog, $folderName, $folderDescription)
        $catalogFolder.Create()
    }

    return $catalogFolder
}
# Deploy an ispac file into the SSISDB catalog
Function Deploy-Project
{
    param
    (
        [string] $projectPath
    ,   [string] $projectName
    ,   $catalogFolder
    )
    # test to ensure file exists
    if (-not $projectPath -or -not (Test-Path $projectPath))
    {
        Write-Debug("File not found $projectPath")
        return
    }

    Write-Debug($catalogFolder.Name)
    Write-Debug("Deploying $projectPath")

    # read the data into a byte array
    [byte[]] $projectStream = [System.IO.File]::ReadAllBytes($projectPath)

    # $projectName MUST match the value in the .ispac file, else you will see:
    # Failed to deploy the project. Fix the problems and try again later.:The specified project name, test, does not match the project name in the deployment file.
    $project = $catalogFolder.DeployProject($projectName, $projectStream)
}
$isPac = "C:\Dropbox\presentations\SSISDB Lifecycle\Lifecycle\Lifecycle\bin\Development\Lifecycle.ispac"
$folderName = "SSIS2012"
$folderDescription = "I am a description"
$serverName = "localhost\dev2012"
# Must match the project name inside the .ispac
$projectName = "Lifecycle"

$catalogFolder = Get-CatalogFolder $folderName $folderDescription $serverName
Deploy-Project $isPac $projectName $catalogFolder
Here is an update on deploying a single package in SSIS 2016 (hope this is useful).
With the release of SQL Server 2016 and SSDT 2015, the issue of single-package deployment is now a thing of the past. There is a new Deploy Package option (VS 2015) that comes up for deploying individual packages within a project deployment model.
With this new feature, you can also deploy multiple packages, by clicking and holding down the control key (Ctrl) and then choosing the packages you want to deploy.
Besides the Deploy Package option in Visual Studio 2015, there are some other possibilities you may use to deploy packages, like launching ISDeploymentWizard application or doing Command Line Deployment (this one is necessary when SSIS build and deployment is automated or managed as part of Continuous Integration process). You can learn more by navigating to this article: http://www.sqlshack.com/single-package-deployment-in-sql-server-integration-services-2016/
If you are using the project deployment model in SSIS 2012, you have to deploy the whole project every time you make any change in a package.
What you can simply do is:
right-click on the project and Deploy.
Related
I am having a VERY difficult time publishing a pre-existing SQL Server project to a Docker hosted instance of SQL Server.
What I am attempting to do is make a clean pipeline for a Docker hosted instance to use in testing a SQL Server project, which of course starts with doing it first by hand to understand all the steps involved. The SQL Server project itself has been around for many years, and has no problems deploying to SQL Server instances hosted on Windows boxes.
As near as I can tell, the issue comes while SSDT is generating the SQL Server deployment script itself. In a normal deployment to a Windows hosted SQL Server, the generated script starts out with some :setvar commands, including:
:setvar DefaultDataPath "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\"
:setvar DefaultLogPath "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\"
However, when publishing to a Docker hosted instance of SQL Server with the same deployment process, the SQL script has:
:setvar DefaultDataPath ""
:setvar DefaultLogPath ""
The 1st thing this deployment does is to alter the database by adding in an additional data file, e.g.:
ALTER DATABASE [$(DatabaseName)]
ADD FILE (NAME = [ARCHIVE_274A259D], FILENAME = N'$(DefaultDataPath)$(DefaultFilePrefix)_ARCHIVE_274A259D.mdf') TO FILEGROUP [ARCHIVE];
The Docker based deployment then craps itself because the file path is (obviously) invalid.
In researching this problem, I've seen MANY solutions which hand-edit the generated deployment SQL script, and manually set the "proper" values for DefaultDataPath and DefaultLogPath ... and even one solution that ran the generated Sql through some sort of post-processor to make that same edit in a programmatic way with string replacement. This does work, but is less than optimal (especially in an automated build/test/deploy pipeline).
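For reference, that string-replacement workaround can be a few lines of PowerShell. This is only a sketch of the post-processing approach described above, not a fix for the root cause; the script name is hypothetical, and the paths are the ones from mssql.conf below:
$scriptPath = ".\MyDatabase.publish.sql"   # hypothetical generated deployment script
(Get-Content $scriptPath -Raw) `
    -replace ':setvar DefaultDataPath ""', ':setvar DefaultDataPath "/var/opt/mssql/data/"' `
    -replace ':setvar DefaultLogPath ""', ':setvar DefaultLogPath "/var/opt/mssql/log/"' |
    Set-Content $scriptPath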
I've checked in the Docker instance itself, and its mssql.conf file does have defaults defined:
$ cat /var/opt/mssql/mssql.conf
[sqlagent]
enabled = false
[filelocation]
defaultdatadir = /var/opt/mssql/data/
defaultlogdir = /var/opt/mssql/log/
Can anybody shed light on why these are not being picked up by the SSDT process of generating the deploy script?
I spent a few days trying various workarounds to the problem ...
Defined the DATA and LOG directories in the Docker "run" command, but this had no effect on the generated SQL deploy script, e.g.: -e 'MSSQL_DATA_DIR=/var/opt/mssql/data/' -e 'MSSQL_LOG_DIR=/var/opt/mssql/log/' (the full command is sketched after this list)
Configure the Sql Project with SQLCMD Variables. This method could not override the DefaultDataPath or DefaultLogPath. I could add new Variables, but those would not affect the file path of the ALTER DATABASE command above.
Tried a Pre-Deployment script specifically tailored to override the values of DefaultDataPath and DefaultLogPath. While this technically CAN override the default values, the Pre-Deployment script is included in the generated Sql deployment script after the ALTER DATABASE commands to add data files. It would effectively work for the rest of the script, just not the specific portion that was throwing the error on initial deployment of the database.
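For reference, the run command from the first attempt looked roughly like this (a sketch; the image name/tag and password are placeholders):
docker run -d -p 1433:1433 `
    -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=YourStrong!Passw0rd' `
    -e 'MSSQL_DATA_DIR=/var/opt/mssql/data/' -e 'MSSQL_LOG_DIR=/var/opt/mssql/log/' `
    microsoft/mssql-server-linux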
At this point I feel there is either a SQL Server configuration option that I am simply unaware of, or possibly a flaw in SSDT which is preventing it from gathering the default path values from the Docker SQL Server instance. Any ideas?
Has anyone else met a similar problem to the one described below?
I am having a problem deploying a SQL Server 2012 dacpac database upgrade with PowerShell. The details are as follows:
It's a dacpac file built for SQL Server 2012, and I'm trying to apply it to a SQL Server 2012 database via PowerShell run from the command line when logged in as administrator.
Exception calling "Deploy" with "4" argument(s): "Unable to determine the identity of domain."
At ... so.ps1:17 char:8
+ $d.Deploy($dp, $TargetDatabase,$true,$DeployOptions)
The redacted script (logging and literals changed) is as follows:
[System.Reflection.Assembly]::LoadFrom("C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\Microsoft.SqlServer.Dac.dll") | Out-Null
$d = new-object Microsoft.SqlServer.Dac.DacServices ("... Connection string ...")
$TargetDatabase = "databasename"
$fullDacPacPath = "c:\temp\...\databasename.dacpac"
# Load dacpac from file & deploy to database named pubsnew
$dp = [Microsoft.SqlServer.Dac.DacPackage]::Load($fullDacPacPath)
$DeployOptions = new-object Microsoft.SqlServer.Dac.DacDeployOptions
$DeployOptions.IncludeCompositeObjects = $true
$DeployOptions.IgnoreFileSize = $false
$DeployOptions.IgnoreFilegroupPlacement = $false
$DeployOptions.IgnoreFileAndLogFilePath = $false
$DeployOptions.AllowIncompatiblePlatform = $true
$d.Deploy($dp, $TargetDatabase,$true,$DeployOptions)
Here is some supporting information:
Dac framework version is 11.1
The script throws the error when run on the command line:
ie. Powershell -File databaseupgrade.ps1
but not when run in the Powershell integrated script environment
Similar scripts work from the command line for other dacpacs.
Research on the web suggests it might be something to do with the size of the dacpac. The ones that work are all smaller than the one that does not, and this link mentions a figure of 1.3 MB, which the file size of the failing dacpac just exceeds. If anyone can confirm that this is the problem, can you also suggest a solution?
Update
The following script exhibits the same behavior ie. works in PS Ide not from command line.
[Reflection.Assembly]::LoadWithPartialName("System.IO.IsolatedStorage")
$f = [System.IO.IsolatedStorage.IsolatedStorageFile]::GetMachineStoreForDomain();
Write-Host($f.AvailableFreeSpace);
I believe the issue here (at least in our case) actually arises when the dacpac is working with a database that uses multiple filegroups. My hypothesis is that, when doing the comparison for deployment, it uses IsolatedStorage for the different files.
The link above was helpful, but it was not the entry so much as the last comment on that blog, by Tim Lewis, that pointed the way. I modified his code to work in native PowerShell. Putting this above the SMO assembly load should fix this issue:
$replacementEvidence = New-Object System.Security.Policy.Evidence
$replacementEvidence.AddHost((New-Object System.Security.Policy.Zone ([Security.SecurityZone]::MyComputer)))
$currentAppDomain = [System.Threading.Thread]::GetDomain()
$securityIdentityField = $currentAppDomain.GetType().GetField("_SecurityIdentity", ([System.Reflection.BindingFlags]::Instance -bOr [System.Reflection.BindingFlags]::NonPublic))
$securityIdentityField.SetValue($currentAppDomain,$replacementEvidence)
Edit - this answer is incorrect, see the link added in the original question for information about the real root cause.
It sounds like you're trying to connect with Windows Authentication and that's the cause of the failure (see this post as it seems to cover the error message you're getting). Change your connection string to use SQL Authentication or ensure that the user your powershell script is running as both has a domain-joined identity and has permissions to access the server. Basically, this is a SQL connection issue not a DAC issue.
It's been a few days now so I don't think a proper explanation will be forthcoming. I'll just post this as our workaround for anyone else who finds themselves in this situation.
There is a Microsoft command line program SqlPackage.exe that is fairly easy to get hold of. It will silently deploy a dacpac, can be executed in Powershell and has parameters that support all the options that we need.
If we use this instead of the Dac services assembly directly the domain problem does not arise.
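For example, a minimal publish call might look like this (paths, server and database names are placeholders; other deployment options can be passed as /p: properties):
& "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" `
    /Action:Publish `
    /SourceFile:"c:\temp\databasename.dacpac" `
    /TargetServerName:"myserver" `
    /TargetDatabaseName:"databasename"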
My query is about initializing the variables DefaultDataPath and DefaultLogPath without hard-coded file locations. Before adopting database projects as our standard deployment and database management tool and migrating our existing scripts to database projects, we had been using a set of CREATE and INITIALIZE scripts for setting up a database. We have the following SQL to CREATE the database with the file location:
DECLARE @data_path nvarchar(256)
    , @mdb_file nvarchar(300)
    , @cfdata sysname
    , @cflog sysname
    , @ldf_file nvarchar(300)
    , @sql nvarchar(500);

SET @data_path = (SELECT SUBSTRING(filename, 1, CHARINDEX(N'master.mdf', LOWER(filename)) - 1)
    FROM sys.sysaltfiles WHERE dbid = 1 AND fileid = 1);
SET @mdb_file = @data_path + 'CF_DB.mdf';
SET @cfdata = 'CF_DB_Data';
SET @cflog = 'CF_DB_Log';
SET @ldf_file = @data_path + 'CF_DB_log.ldf';
-- quotename(..., '''') wraps the paths in single quotes, as the FILENAME clause expects a string literal
SET @sql = 'CREATE DATABASE [CF_DB] ON (NAME = ' + quotename(@cfdata) + ', FILENAME = ' + quotename(@mdb_file, '''') + ', SIZE = 53, FILEGROWTH = 10%) LOG ON (NAME = ' + quotename(@cflog) + ', FILENAME = ' + quotename(@ldf_file, '''') + ', SIZE = 31, FILEGROWTH = 10%) COLLATE SQL_Latin1_General_CP1_CI_AS';
EXEC (@sql);
Here we are trying to figure out the location of MDF file for MASTER DB and using the same location to CREATE DATABASE.
Problem: in the scripts generated (after the Deploy action), there are auto-generated SQLCMD variables, initialized with some default path (a hardcoded one) or empty strings (which fall back to the default data file path used by SQL Server 2008 or 2005).
:setvar DatabaseName "CF"
:setvar DefaultDataPath "C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER2008\MSSQL\DATA\"
:setvar DefaultLogPath "C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER2008\MSSQL\DATA\"
We need to make it work like our existing system: find the path of the master database's data and log files and use that same path to initialize DefaultDataPath and DefaultLogPath. We can't go with pre-deployment scripts, because the database settings are applied by the database-project-generated script before the pre-deployment script is embedded in the final deploy script.
The NEXT big thing: developers need to switch to SQLCMD mode in SQL Server Management Studio to run the scripts generated by the database project, and our implementation team's requirement is NOT to use SQLCMD mode to set up the database. To overcome this step, I need to modify the generated SQL file to use SQL variables instead of SQLCMD variables. Can we generate clean SQL statements while keeping the automated script generation intact? I know both of these issues are correlated, so the solution for one is going to fix the other.
Thanks for any good suggestions or help on the above.
Regards,
Sumeet
Not sure how best to handle your file path, though I suspect you will want to not use the Default File Path setting and instead use a new file path that you can control through a variable.
It sounds to me like you're trying to have the developers update their local machines easily. My recommendation would be to build out some batch files that do the following:
Set the PATH to include the location for MSBuild.exe
Get the location for your master database
Pass that location into a variable
Run the MSBuild command to set your path variables to the local master path and publish the database/changes
Overall, that sounds like more trouble than it's really worth. I'd recommend that you send out a SQL Script to all of the developers getting them to properly set up their Default Data/Log paths and just use the defaults.
I do recommend that you look into setting up some batch files to run the MSBuild commands. You'll have a lot easier time getting the database builds to your developers without them generating scripts and running them locally. Alternatively, you could have them change their SSMS defaults to set SQLCMD mode on for their connections. SSDT made this a little nicer because it won't run at all without SQLCMD mode turned on - eliminated a lot of the messiness from VS2008/VS2010 DBProjects.
I used something like the following to build and deploy when we were on DB Projects:
msbuild /m .\MyDB\MyDB.dbproj /t:build
msbuild /m .\MyDB\MyDB.dbproj /t:deploy /p:TargetConnectionString="Data Source=localhost;Integrated Security=True;Pooling=False;" /p:TargetDatabase="MyDB"
When SSDT/VS generates the SQL file, it actually runs some queries against the server you've told it to connect to. The settings you're getting here for example...
:setvar DatabaseName "CF"
:setvar DefaultDataPath "C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER2008\MSSQL\DATA\"
:setvar DefaultLogPath "C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER2008\MSSQL\DATA\"
Are obtained from server settings from the target database connection you specified in your publish file/profile.
On the server that you are using to generate your scripts, open regedit.exe and search for the keys "DefaultLog" and "DefaultData" under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft - they should be in the same location. See if they match the settings your scripts are generating.
You can change these on the server/your PC (wherever you are pointing to) and the generated SQL scripts will use the locations you enter there. Be cautious, naturally, around a server you do not own or that is in use for production etc., as this changes a setting on the server which tells SQL Server where to place new databases. This seems to be a different setting than the one you enter in SQL Server Properties -> Database Settings.
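To check (or script out) those values without opening regedit, a small PowerShell sketch like this should work; the instance key name (MSSQL10.MSSQLSERVER2008) is just an example and varies per install:
$key = "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER2008\MSSQLServer"
Get-ItemProperty -Path $key | Select-Object DefaultData, DefaultLog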
Hope that helps!
I have a Visual Studio 2010 database project, from which I want to generate a script
that simply puts this database onto another machine. The problem is that I can't find a
solution for this.
When I started the project, I imported the schema from a database on my development PC.
The schema objects were generated, and all tables and scripts were under 'Schema Objects -> Schemas -> dbo'. Over time, some things changed and some were added, and by using right-click -> Deploy,
the changes were made to my local database successfully.
But now I want to deploy to another machine. The problem is that in the release folder of the project there is only an XML .dbschema file, containing all tables and scripts, that I can't import
with SQL Management Studio (or I just can't find out how), and a deployment script which is nothing more than some checks followed by the pre- and post-deployment scripts, but without any tables or scripts in it.
So please, how do I export the database from Visual Studio so I can easily put it up on another machine?
Marks--
You likely have already resolved this, but I thought I should answer your questions for the benefit of others.
Yes, you can deploy from Visual Studio to different machines. You can also do it from the command line, using VSDBCMD. And you can create a WIX project to give a wizard for others to install it with.
If you can connect to the target database from your dev PC, you can deploy to it. To do this:
Select another Configuration from the Solution Configuration drop down. Normally, the Project will come with "Debug" and "Release" baked in. You can add another configuration to allow you to deploy to various targets by clicking "Configuration Manager."
Right-click your Project and select 'Properties', or simply double-click Properties under the project.
Click the Deploy tab. Notice that the Configuration: drop-down shows the same selected configuration as "active."
Change the Deploy Action to "Create a deployment script (.sql) and deploy to the database."
Next to Target Connection String, click "Edit" and use the dialog to create your deployment connection to the target database.
Fill in the Target database name, if different.
For each Deployment Configuration (e.g., Debug, Release, etc.), you will probably want a separate Deployment configuration file. If you click "New," you can create one for the current configuration. The new file will open, and you can check and uncheck important things about the deployment.
Note: If you check Always re-create the database, the script will DROP and CREATE your database. You will lose all your data on the target! Be careful what you select here. Most people leave that unchecked for a Production target. I check it for Development or Local because I want a fresh copy there.
Save your changes to the file and to Properties.
To deploy to the target, be sure to select the correct Configuration. Click Build/Deploy [My Database Name]. You probably should experiment with this so you are familiar with how it works before trying it on a live environment.
Good practices: build a similar environment to production ("Staging") and deploy there first, to test the deployment, and always back up the database before deploying, in case something goes wrong.
For more info, please see:
Working with Database Projects
Walkthrough: Put an Existing Database Schema Under Version Control
Visual Studio 2010 SQL Server Database Projects
Is it possible to point your Visual Studio at your new target database? In the Properties of your database project, on the Deploy tab, set the fields in Target Database Settings.
Now when you generate a deploy script, the resulting SQL file will contain the various CREATE / ALTER / DROP statements, etc., that will align the target database with your schema.
You could always create an empty database and then do a schema compare in Visual Studio between your database project and the new empty database. You can amend the generated schema update script to also create the database (since the script will be to update an existing empty database)
I'd like to automate the script generation in SQL Server Management Studio 2008.
Right now what I do is :
Right click on my database, Tasks, "Generate Scripts..."
manually select all the export options I need, and hit select all on the "select object" tab
Select the export folder
Eventually hit the "Finish" button
Is there a way to automate this task?
Edit : I want to generate creation scripts, not change scripts.
SqlPubwiz has very limited options compared to the script generation in SSMS. By contrast, the options available with SMO almost exactly match those in SSMS, suggesting it is probably even the same code. (I would hope MS didn't write it twice!) There are several examples on MSDN like this one that show scripting tables as individual objects. However, if you want everything to script correctly with a 'full' schema that includes 'DRI' (Declarative Referential Integrity) objects like foreign keys, then scripting tables individually doesn't work the dependencies out correctly. I found it is necessary to collect all the URNs and hand them to the scripter as an array. This code, modified from the example, works for me (though I daresay you could tidy it up and comment it a bit more):
using System.Collections.Generic; // List<Urn>
using System.Text;                // StringBuilder
using Microsoft.SqlServer.Management.Smo;
using Microsoft.SqlServer.Management.Sdk.Sfc;
// etc...
// Connect to the local, default instance of SQL Server.
Server srv = new Server();
// Reference the database.
Database db = srv.Databases["YOURDBHERE"];
Scripter scrp = new Scripter(srv);
scrp.Options.ScriptDrops = false;
scrp.Options.WithDependencies = true;
scrp.Options.Indexes = true; // To include indexes
scrp.Options.DriAllConstraints = true; // to include referential constraints in the script
scrp.Options.Triggers = true;
scrp.Options.FullTextIndexes = true;
scrp.Options.NoCollation = false;
scrp.Options.Bindings = true;
scrp.Options.IncludeIfNotExists = false;
scrp.Options.ScriptBatchTerminator = true;
scrp.Options.ExtendedProperties = true;
scrp.PrefetchObjects = true; // some sources suggest this may speed things up
var urns = new List<Urn>();
// Iterate through the tables in database and script each one
foreach (Table tb in db.Tables)
{
// check if the table is not a system table
if (tb.IsSystemObject == false)
{
urns.Add(tb.Urn);
}
}
// Iterate through the views in database and script each one. Display the script.
foreach (View view in db.Views)
{
// check if the view is not a system object
if (view.IsSystemObject == false)
{
urns.Add(view.Urn);
}
}
// Iterate through the stored procedures in database and script each one. Display the script.
foreach (StoredProcedure sp in db.StoredProcedures)
{
// check if the procedure is not a system object
if (sp.IsSystemObject == false)
{
urns.Add(sp.Urn);
}
}
StringBuilder builder = new StringBuilder();
System.Collections.Specialized.StringCollection sc = scrp.Script(urns.ToArray());
foreach (string st in sc)
{
// It seems each string is a sensible batch, and putting GO after it makes it work in tools like SSMS.
// Wrapping each string in an 'exec' statement would work better if using SqlCommand to run the script.
builder.AppendLine(st);
builder.AppendLine("GO");
}
return builder.ToString();
What Brann is mentioning from the Visual Studio 2008 SP1 Team Suite is version 1.4 of the Database Publishing Wizard. It's installed with sql server 2008 (maybe only professional?) to \Program Files\Microsoft SQL Server\90\Tools\Publishing\1.4. The VS call from server explorer is simply calling this. You can achieve the same functionality via the command line like:
sqlpubwiz help script
I don't know if v1.4 has the same troubles that v1.1 did (users are converted to roles, constraints are not created in the right order), but it is not a solution for me because it doesn't script objects to different files like the Tasks->Generate Scripts option in SSMS does. I'm currently using a modified version of Scriptio (uses the MS SMO API) to act as an improved replacement for the database publishing wizard (sqlpubwiz.exe). It's not currently scriptable from the command line, I might add that contribution in the future.
Scriptio was originally posted on Bill Graziano's blog, but has subsequently been released to CodePlex by Bill and updated by others. Read the discussion to see how to compile for use with SQL Server 2008.
http://scriptio.codeplex.com/
EDIT: I've since started using RedGate's SQL Compare product to do this. It's a very nice replacement for all that sql publishing wizard should have been. You choose a database, backup, or snapshot as the source, and a folder as the output location and it dumps everything nicely into a folder structure. It happens to be the same format that their other product, SQL Source Control, uses.
I wrote an open source command line utility named SchemaZen that does this. It's much faster than scripting from Management Studio, and its output is more version-control friendly. It supports scripting both schema and data.
To generate scripts run:
schemazen.exe script --server localhost --database db --scriptDir c:\somedir
Then to recreate the database from scripts run:
schemazen.exe create --server localhost --database db --scriptDir c:\somedir
You can use SQL Server Management Object (SMO) to automate SQL Server 2005 management tasks including generating scripts: http://msdn.microsoft.com/en-us/library/ms162169.aspx.
If you're a developer, definitely go with SMO. Here's a link to the Scripter class, which is your starting point:
Scripter Class
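If you'd rather drive SMO from PowerShell than C#, the same Scripter class is available there. A minimal sketch (server name, database name and output path are placeholders), collecting URNs first as the C# answer above does so dependencies are worked out correctly:
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server("localhost")
$db = $srv.Databases["YOURDBHERE"]
$scripter = New-Object Microsoft.SqlServer.Management.Smo.Scripter($srv)
$scripter.Options.DriAllConstraints = $true   # include referential constraints
$scripter.Options.Indexes = $true             # include indexes
# Collect the URNs of all non-system tables and script them in one call
$urns = @($db.Tables | Where-Object { -not $_.IsSystemObject } | ForEach-Object { $_.Urn })
# Script() returns one string per batch; append GO so the output runs in SSMS
$scripter.Script($urns) | ForEach-Object { $_; "GO" } | Set-Content -Path "C:\script.sql"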
I don't see PowerShell with SQLPSX mentioned in any of these answers... I personally haven't played with it, but it looks beautifully simple to use and ideally suited to this type of automation task, with tasks like:
Get-SqlDatabase -dbname test -sqlserver server | Get-SqlTable | Get-SqlScripter | Set-Content -Path C:\script.sql
Get-SqlDatabase -dbname test -sqlserver server | Get-SqlStoredProcedure | Get-SqlScripter
Get-SqlDatabase -dbname test -sqlserver server | Get-SqlView | Get-SqlScripter
(ref: http://www.sqlservercentral.com/Forums/Topic1167710-1550-1.aspx#bm1168100)
Project page: http://sqlpsx.codeplex.com/
The main advantage of this approach is that it combines the configurablity / customizability of using SMO directly, with the convenience and maintainability of using a simple existing tool like the Database Publishing Wizard.
In Tools > Options > Designers > Table and Database Designers there's an option for 'Auto generate change scripts' that will generate one for every change you make at the time you save it.
You can do it with T-SQL code using the INFORMATION_SCHEMA tables.
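For example, a quick-and-dirty PowerShell sketch (rather than pure T-SQL) that builds bare-bones CREATE TABLE statements from INFORMATION_SCHEMA.COLUMNS could look like this. It ignores constraints, defaults, precision/scale and (max) types, so treat it as a starting point only; server and database names are placeholders:
$connectionString = "Data Source=localhost;Initial Catalog=YOURDBHERE;Integrated Security=SSPI;"
$query = "SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE FROM INFORMATION_SCHEMA.COLUMNS ORDER BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION"
$adapter = New-Object System.Data.SqlClient.SqlDataAdapter($query, $connectionString)
$data = New-Object System.Data.DataTable
$adapter.Fill($data) | Out-Null
$data.Rows | Group-Object { "[{0}].[{1}]" -f $_.TABLE_SCHEMA, $_.TABLE_NAME } | ForEach-Object {
    $columns = $_.Group | ForEach-Object {
        $length = if ($_.CHARACTER_MAXIMUM_LENGTH -isnot [DBNull]) { "($($_.CHARACTER_MAXIMUM_LENGTH))" } else { "" }
        $nullability = if ($_.IS_NULLABLE -eq "YES") { "NULL" } else { "NOT NULL" }
        "    [$($_.COLUMN_NAME)] $($_.DATA_TYPE)$length $nullability"
    }
    "CREATE TABLE $($_.Name)`r`n(`r`n$($columns -join ",`r`n")`r`n);`r`nGO"
}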
There are also third-party tools - I like Apex SQL Script for precisely the use you are talking about. I run it completely from the command-line.
If you want a Microsoft solution, you can try: Microsoft SQL Server Database Publishing Wizard 1.1
http://www.microsoft.com/downloads/details.aspx?FamilyId=56E5B1C5-BF17-42E0-A410-371A838E570A&displaylang=en
It creates a batch process you can run anytime you need to rebuild the scripts.
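The wizard can also be driven from the command line; something along these lines should work (the exact switches vary by version, so check sqlpubwiz help script as shown in the answer above; the install path is the 1.4 location mentioned there, and the database name and output path are placeholders):
& "C:\Program Files\Microsoft SQL Server\90\Tools\Publishing\1.4\sqlpubwiz.exe" script -d YOURDBHERE "C:\scripts\YOURDBHERE.sql"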
Try the new SQL Server command line tools to generate T-SQL scripts and monitor dynamic management views.
It is a new Python-based tool from Microsoft that runs from the command line. Everything works as described on the Microsoft page (see the link below); it worked for me like a charm with a SQL Server 2012 server.
You install it with pip:
pip install mssql-scripter
Command parameter overview as usual with h for help:
mssql-scripter -h
Hint:
If you log in to SQL Server via Windows authentication, just leave out the username and password.
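For example, to script an entire database to a single file over Windows authentication, something like this should work (server and database names are placeholders; run mssql-scripter -h for the full switch list):
mssql-scripter -S localhost -d YOURDBHERE -f C:\scripts\YOURDBHERE.sql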
https://cloudblogs.microsoft.com/sqlserver/2017/05/17/try-new-sql-server-command-line-tools-to-generate-t-sql-scripts-and-monitor-dynamic-management-views/
I've been using DB Comparer. It's free and no fuss: it scripts an entire DB, can compare it to another DB, and also produces a diff script. Excellent for development-to-production change scripts.
http://www.dbcomparer.com/
From Visual Studio 2008 SP1 TeamSuite :
In the Server Explorer / Data Connections tab, there's a publish-to-provider tool which does the same as the "Microsoft SQL Server Database Publishing Wizard", but which is compatible with MS SQL Server 2008.
There is also this simple command line tool I built for my needs.
http://mycodepad.wordpress.com/2013/11/18/export-ms-sql-database-schema-with-c/
It can export an entire db, and it tries to export encrypted objects. Everything is stored in folders and separate sql files for easy file comparison.
Code is also available on github.
I am using VS 2012 (for DBs on MS SQL Server 2008). The database compare has an option to save the comparison and its options; this is essentially your settings for delivery. After that, you can do an update or generate a script.
I just find it a little bit awkward to load it from the file later (drag and drop from Windows Explorer), as I do not see the file in Solution Explorer.