sqlpackage.exe: any way to ignore users on export? - sql-server

I'm trying to create a bacpac file to export my databases to azure.
Is there any way to make it ignore the users while creating an export package (sqlpackage /a:Export)?

No, unfortunately, there is no option to ignore users on export.
You could alternatively produce a dacpac file with data (sqlpackage /a:Extract /p:ExtractAllTableData=true) and ignore users when publishing... but that would only work against a pristine database, because dacpacs skip most of the Azure import niceties (like publishing stored procedures first to take advantage of deferred name resolution).
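As a rough sketch of that extract-then-publish route (server, database, and file names below are placeholders; /p:ExcludeObjectTypes=Users is a standard publish option, but verify it against your SqlPackage version):
rem Extract a dacpac including table data from the source database
sqlpackage.exe /a:Extract /scs:"Server=MyServer;Database=MyDb;Integrated Security=true" /p:ExtractAllTableData=true /tf:C:\Data\MyDb.dacpac
rem Publish it to the target, skipping user objects
sqlpackage.exe /a:Publish /sf:C:\Data\MyDb.dacpac /p:ExcludeObjectTypes=Users /tcs:"Server=tcp:<MyServer>.database.windows.net,1433;Database=MyDb;User ID=<MyUserName>;Password=<MyPassword>;"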
I'm guessing that you would like to ignore users because your database contains some users mapped to Windows logins and you'd like to avoid producing a new version of the database that contains only Azure SQL DB-compatible objects. If so, you might be interested in trying the private preview of the Azure SQL Database migration service: https://blogs.technet.microsoft.com/dataplatforminsider/2017/04/25/get-to-cloud-faster-with-a-new-database-migration-service-private-preview/

This functionality is intended to support the process of migrating databases to Azure, so the reasoning goes that it only supports database artefacts that are supported by Azure.
Windows users, filegroups and some hints (NOLOCK without a WITH springs to mind) are not allowed.
One workflow that worked for me can be outlined as follows.
Extract a data-tier application of your database (a dacpac file).
Import it into Visual Studio (version 2013 at least).
Change the Target Platform to Azure SQL Database (I am unsure about the V12 appendage).
Fix all the errors and warnings.
Optionally, consider scripting the deletion of any data that can be removed at this point. You may also want to produce a much smaller version of the database for iterative testing.
Restore a backup (from a good old fashioned BAK file) of your Database to a different location.
Publish the dacpac of your Azure-corrected Project onto the restored DB.
Generate a Bacpac from the same.
Once you have a bacpac file, you should be able to promote it to the cloud.
Obviously this will be an iterative learning process and some of the steps will require a number of tries to get right.
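For reference, a minimal sketch of the sqlpackage calls behind the extract, publish, and export steps above (server, database, and path names are placeholders):
rem Extract a data-tier application (dacpac) from the source database
sqlpackage.exe /a:Extract /scs:"Server=.;Database=MyDb;Integrated Security=true" /tf:C:\Work\MyDb.dacpac
rem Publish the Azure-corrected project's dacpac onto the restored copy
sqlpackage.exe /a:Publish /sf:C:\Work\MyDb.Azure.dacpac /tcs:"Server=.;Database=MyDb_Restored;Integrated Security=true"
rem Export a bacpac from the restored copy for import into Azure SQL Database
sqlpackage.exe /a:Export /scs:"Server=.;Database=MyDb_Restored;Integrated Security=true" /tf:C:\Work\MyDb.bacpac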

Related

dacpac file Publish error to LocalDB: "The element cannot be deployed. This element contains state that cannot be recreated in the target database."

Pretty simple you would think, but as I cannot edit the schema in this database I have no idea how to get past this error when I am publishing my dacpac file to my local database. I am trying to take a copy of a database that is hosted in Azure and have it locally for my own development purposes. I am not a sysadmin of the database, but I have complete access to it other than that. It is a production database so I can't mess anything up for obvious reasons.
I had a hell of a time even getting this dacpac file created in the first place. I was getting far more errors/warnings when trying to export as a bacpac file with data (which is what I really want to do, but I can worry about that later).
Here is the command I am trying:
SqlPackage.exe /Action:Publish /SourceFile:"C:\Data\opkCore.dacpac" /TargetConnectionString:"Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=opkCore;Integrated Security=true;"
This is what I used to create the dacpac file:
sqlpackage.exe /Action:Export /ssn:tcp:<MyDatabase>.database.windows.net,1433 /sdn:opkCore /su:<MyUserName> /sp:<MyPassword> /tf:C:\Data\opkCore.bacpac
I have tried other solutions such as:
Export Data-tier Application, but I am limited to only doing it in an Azure container and I am not in control of that. It is a Pay-As-You-Go model which does not support blob storage apparently
Copy Database only works for 2005 and earlier and this is SQL 2019
Deploy Database to Microsoft SQL Server Azure SQL Database
Import Data-tier Application, same problem as #1
Exporting bacpac file using SqlPackage.exe - Errors all over the place that I cannot fix. The database is not mine to mess up
I CAN export tables one at a time, but then I am missing certain bits of schema that work together, so I get errors there also.
I really should be able to just get a local copy of the database in the EXACT same state it is currently in on our production server. Any other ideas on how I can do this in a way that ignores problems with the database and just gets me a local copy of the EXACT state of production? 3rd-party tools that do this, or anything?
I decided to just script all the tables and run the script on my new DB. There were a lot of errors, but it did what it could, which was 99.99% of the database schema, and that is good enough for my purpose. Maybe I will try to get the data exported and imported as well.
EDIT: To export the data I just used SSMS Export data and the destination used was the new LocalDB database I just created from the scripts.
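For anyone following the same route, a minimal sketch of running such a generated script against LocalDB with sqlcmd (the script file name here is hypothetical):
rem Create the target database, then run the generated schema script against it
sqlcmd -S "(localdb)\MSSQLLocalDB" -Q "CREATE DATABASE opkCore"
sqlcmd -S "(localdb)\MSSQLLocalDB" -d opkCore -i "C:\Data\opkCore_schema.sql"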

What happens when we publish the database project through visual studio

I have been working on a project which has a database project in it, and I used to publish that database whenever I made changes to the scripts. I have noticed that when I publish the database project it builds first and creates a dacpac file, and then publishes after I select the target database. I am interested in knowing what role that dacpac file plays in publishing the SQL database.
Also, I found the following when trying to read about the pros and cons of dacpacs. Does it really work like that?
Link
The biggest problem with DACPACs has to do with the way a data-tier application is released to push version changes from the DAC into SQL Server. This is done by creating a new database with a temporary name, generating the new objects in the database, and then moving all the data from the existing database to the new one. After all the data has been transferred and the post-release scripts run, the existing database is dropped and the new database is given the correct name.
The dacpac file is the compiled build output of the database project. It's analogous to a .dll file built from a C# class library project. All of the information you defined in your database project about your database is stored in the dacpac file, along with information about the relationships between the objects.
When a dacpac file is published, the target database is compared to the dacpac and the tool will figure out what T-SQL to execute to make the target database match the dacpac's definition.
Regarding the article, note that the Data-Tier Application Framework that shipped with SQL Server 2008 R2 was largely rewritten/replaced for SQL Server 2012, so that article, while correct regarding that very old version of the Data-Tier Application Framework, is not correct regarding the tools available today.
The DACPAC file is a Zip file containing an XML representation of your database schema. It does not contain any table data (unless you provide pre- and post-deployment scripts). More information is available here: https://www.simple-talk.com/sql/database-delivery/microsoft-and-database-lifecycle-management-(dlm)-the-dacpac/
When a DACPAC is deployed, the receiving server compares the current schema against the DACPAC's and then updates your schema accordingly by generating a change script. However, be careful, as some changes can be very expensive (such as adding a new column in the middle of a table that already has millions of rows).
The article I linked to shows you how you can view the generated change script and see what happens. Repeated here is a snippet that does it:
"%ProgramFiles(x86)%\Microsoft SQL Server"\110\DAC\bin\sqlpackage.exe
/Action:Script
/SourceFile:MyPathAndFileToTheDacPac
/TargetConnectionString:"Server=MyTargetInstance;Database=MyTargetDatabase;Integrated Security=SSPI;"
/OutPutPath:"MyPathAndFile.sql"
Using DACPACs and Database Projects (in SSDT, not SQL Server Management Studio) is now the preferred way of pushing database changes, as it is less error-prone than manually redesigning tables using the table designer (which will drop, recreate, and repopulate tables if you do things like add non-terminal columns to existing tables).
I'm not too familiar with it, but I have played around with some database uploads myself. From what I gathered, the dacpac has settings that can be used and uploaded. I found these instructions:
• To create a database project based on a dacpac, create a new SQL Server Database Project in Visual Studio. Then right-click on the project in Solution Explorer, choose "Import -> Data-tier Application (*.dacpac)" and select your dacpac. That will convert the contents of the dacpac into scripts in the project, and if you choose "Import database settings" the database options will be set based on the settings in the dacpac.
A data-tier application (DAC) is a logical database management entity that defines all of the SQL Server objects - like tables, views, and instance objects, including logins - associated with a user's database. A DAC is a self-contained unit of SQL Server database deployment that enables data-tier developers and database administrators to package SQL Server objects into a portable artifact called a DAC package, also known as a DACPAC. (from https://msdn.microsoft.com/en-us/library/ee210546.aspx)
hope this helps...

Azure continuous deployment from GitHub and database upgrades

I have a Web application that I usually deployed using Web Deploy directly from Visual Studio (whatever branch I am currently using in VS - normally master). But now I'm introducing a second web app on Azure that will be built from the same repo but different branch. To make things simpler I will be configuring both Web apps on Azure to integrate directly with GitHub and associate them with specific branch.
I also added two additional web.config files, Web.Primary.config and Web.Secondary.config, and configured the app settings of each web app in the Azure portal by adding the value SCM_BUILD_ARGS, set to
SCM_BUILD_ARGS=-p:PublishProfile=Primary // in primary web app
SCM_BUILD_ARGS=-p:PublishProfile=Secondary // in secondary web app
which, as I understand it, will transform the correct config file with each app's external service configuration (DB connection, mail server, etc.).
Now, the additional step I would like to include in continuous deployment is running a set of SQL scripts from my repo that I used to run manually to upgrade the database during Web Deploy in VS. The individual scripts perform specific database upgrade steps:
backup current tables - backup creates a set of Backup_OriginalTableName tables that are copied from existing ones and populated with existing data
drop whole DB model - all non-backup objects are being dropped from procedures, functions, types, views, tables...
create model - creates all tables, views and indices
create user types
create user functions
create stored procedures
restore data to new tables from backup tables - this step may occasionally break if we introduce new non-nullable columns that don't have defaults defined on them in the new model; I will somehow have to mitigate this by adding an additional script that adds the missing columns to the backup tables and gives them defaults, but that's a completely different issue.
I used to also have a set of batch files (BAT) in my VS solution that simply executed sqlcmd against specific database instance and executed these scripts in predefined order (as above). Hence I had batches:
Recreate Local.bat - this one used additional SQL scripts to not restore from backup but rather to recreate an empty DB with only lookup tables being populated and some default data for development purposes (like predefined test users)
Restore Local.bat - I used this script to simply restore database from backup tables discarding any invalid data I may have created while debugging/testing since last DB recreate/upgrade/restore
Upgrade Local.bat - upgrade local development DB executing scripts mentioned above
Upgrade Production.bat - upgrade production DB on Azure executing scripts mentioned above
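For illustration, a minimal sketch of what one of these batch files might look like; the script file names, server, and database below are hypothetical:
@echo off
rem Upgrade Local.bat - run the upgrade scripts in the order listed above,
rem stopping at the first failure (-b makes sqlcmd return an error code)
set SERVER=.\SQLEXPRESS
set DB=MyAppDb
for %%F in (01_backup_tables.sql 02_drop_model.sql 03_create_model.sql 04_create_types.sql 05_create_functions.sql 06_create_procedures.sql 07_restore_data.sql) do (
    sqlcmd -S %SERVER% -d %DB% -b -i "%%F" || exit /b 1
)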
So, to support the whole deployment process I have been doing manually in VS, I would now like to also execute these scripts against the specific Azure SQL DB during continuous deployment. I suppose I should run them right after code deployment, because if that fails, the DB shouldn't be upgraded either.
I'm a bit confused about where and how to do this. Can I configure it somewhere in the Azure portal? I was looking for resources on the Web, but I can't seem to find any relevant information on how to add deployment steps that execute these scripts. I would think this is an everyday scenario, as it's hard to think of web apps that don't require databases these days.
Maybe it's just my process for DB upgrade/deployment that is wrong, so let me know if there is another, more standard way to do DB upgrades/migrations with continuous deployment on Azure... I may change my process to accommodate it.
Note 1: I'm not using Entity Framework or any other full blown ORM. I'm rather using NPoco and all my DB logic is built in SPs that DAL is using.
Note 2: I'm aware of the recently introduced staging capabilities of Azure, but my apps are on a cheaper plan that doesn't support staging, and I want to keep it this way, as I may be introducing additional web apps along the way that will use additional code branches and resources (DB, mail, etc.)
It sounds to me like your db project is a good candidate for SSDT and inclusion in source control. You can create a MyDB.sqlproj that builds your db as a dacpac, and then you can use SqlPackage.exe Publish to accomplish your deployment to Azure.
We recently brought our databases under source control and follow a similar process to build and automatically deploy them (but not to a SQL Azure DB). We've found the source control, SSDT tooling support, and automated deployment options to be worth the effort of setting up and maintaining our project this way.
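As a sketch, once the .sqlproj has been built, the deployment step can be a single SqlPackage call against the Azure SQL DB (the dacpac path and connection details below are placeholders):
rem Publish the built dacpac to the Azure SQL Database
sqlpackage.exe /Action:Publish /SourceFile:bin\Release\MyDB.dacpac /TargetConnectionString:"Server=tcp:<MyServer>.database.windows.net,1433;Database=MyDb;User ID=<MyUserName>;Password=<MyPassword>;"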
This SO question has some good notes for Azure deployment of a dacpac using SSDT:
How to publish DACPAC file to a SQL Server database project via SQLPackage.exe of SSDT?

Proper structure of asp.net website and database in visual studio

My main problem is: where does the database go?
The project will be on SVN and is developed using ASP.NET MVC with the repository pattern. Where do I put the SQL Server database (MDF file)? If I put it in App_Data, then my teammates can check out the source and database together and run it with the database deployed in the VS instance.
The problems with this method are:
I cannot use SQL Server Management Studio with this database.
Most web hosts require me to deploy the database using their UI or SQL Server Management Studio; putting it in App_Data makes no sense there.
The connection string has to be edited each time I move from testing locally to testing on the web host.
If I create the database using SQL Server Management Studio, my problems are:
How do I keep it consistent with source control (teammates have to re-script the DB if the schema changes)?
The connection string again (I'd like to automatically use the right string on the production server).
Is there a solution to all of the problems above? Maybe some pattern or tool that I am missing?
Basically your two points are correct - unless you're working off a central database, everyone will have to update their own database when changes are made by someone else. If you're working off a central database, you can also run into issues where a database change is made (e.g. a column dropped) and the corresponding source code isn't checked in. Then you're all dead in the water until the source code is checked in, or the database is rolled back. Using a central database also means developers have no control over when database schema changes are pushed to them.
We have the database installed on each developer's machine (especially good since we target different DBs; each developer has one of the supported databases, giving us really good cross-platform testing as we go).
Then there is the central 'development' database, which the 'development' environment points to. It is built by continuous integration on each checkin, and upon a successful build/test it is published to development.
Changes that developers make to the database schema on their local machine need to be checked into source control. They are database upgrade scripts that make the required changes to the database from version X to version Y. The database is versioned. When a customer upgrades, these database scripts are run on their database to bring it up from their current version to the required version they're installing.
These dbpatch files are stored in the following structure:
./dbpatches
    ./23
        ./common
            ./CONV-2345.dbpatch
        ./pgsql
            ./CONV-2323.dbpatch
        ./oracle
            ./CONV-2323.dbpatch
        ./mssql
            ./CONV-2323.dbpatch
In the above tree, version 23 has one common dbpatch that is run on any database (it is ANSI SQL), and a specific dbpatch for each of the three databases that require vendor-specific SQL.
We have a database update script that developers can run, which runs any dbpatch that hasn't yet been run on their development machine (irrespective of version, since multiple dbpatches may be committed to source control during a single version's development).
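A minimal sketch of such an update runner as a batch file, shown here for the SQL Server case only and assuming a hypothetical AppliedPatches table that records what has already been run (a real runner would check that table before executing each patch):
@echo off
rem Apply every common dbpatch for version 23 and record each as applied
for %%F in (.\dbpatches\23\common\*.dbpatch) do (
    sqlcmd -S ".\SQLEXPRESS" -d MyDevDb -b -i "%%F" || exit /b 1
    sqlcmd -S ".\SQLEXPRESS" -d MyDevDb -b -Q "INSERT INTO AppliedPatches (Name) VALUES ('%%~nxF')"
)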
Connection strings are maintained in NHibernate.config; if present, NHibernate.User.config is used instead, and NHibernate.User.config is excluded from source control. Each developer has their own NHibernate.User.config, which points to their local database and sets the appropriate dialects etc.
When being pushed to development we have a NAnt script which does variable substitution in the config templates for us. This same script is used when going to staging as well as when doing packages for release. The NAnt script populates a templates config file with variable values from the environment's settings file.
Use Management Studio or Visual Studio's Server Explorer. App_Data isn't used much "in the real world".
This is always a problem. Use a tool like SQL Compare from Redgate or the built-in database compare tools of Visual Studio 2010.
Use Web.config transformations to automatically update the connection string.
I'm not an expert by any means but here's what my partner and I did for our most recent ASP.NET MVC project:
Connection strings were always the same, since we were both running SQL Server Express on our development machines, as were our staging and production servers. You can just use a dot instead of the computer name (e.g. ".\SQLEXPRESS" or ".\SQL_Named_Instance").
Alternatively you could also use web.config transformations for deploying to different machines.
As far as the database itself, we just created a "Database Updates" folder in the SVN repository and added new SQL scripts when updates needed to be made. I always thought it was a good idea to have an organized collection of database change scripts anyway.
A common solution to this type of problem is to have the database versioning handled in code rather than storing the database itself in version control. The code is typically executed on app_start, but could be triggered in other ways (build/deploy process). Then developers can run their own local databases or use a shared development database. The common term for this is database migrations (migrating from one version to the next). Here is a Stack Overflow question on .NET tools/libraries to make this easier: https://stackoverflow.com/questions/8033/database-migration-library-for-net
This is the only way I would handle this on projects with multiple developers. I've used this successfully with teams of over 50 developers and it's worked great.
The Red Gate solution would be to use SQL Source Control, which integrates into SSMS. It maintains a folder structure of SQL scripts in source control, which you can keep in the same folder/repository as your app code.
http://www.red-gate.com/products/SQL_Source_Control/

DB Designer in Visual Studio 2010

I need to create an entirely new Sql Server 2008 database and want to use a Database Project in Visual Studio 2010 (Ultimate). I've created the project and added a table under the dbo schema.
The table's .sql file is shown only as plain text, though with syntax coloring. There is no designer, no Add Column, and no autocomplete. Existing columns' properties are grayed out.
Usually I use a DB project for nothing more than storing .sql files for source control purposes, but I'm assuming it can help me with designing the DB. Currently it offers no such help, and I think it's because I'm doing something wrong. Perhaps I need to deploy the DB to a server first, or something of the sort. I've looked for a getting-started guide, but all the guides I found start from importing an existing database.
Please help me understand what a DB project can do for me, and how.
Thanks,
Asaf
The whole idea of VSTS DB is to set you on the right path, i.e. store database object definitions as .sql files, not as some fancy diagram. Any modification to the objects is done by modifying the SQL definition. This way you can make any modification permitted by the DDL syntax, as opposed to whatever the visual-designer-du-jour thinks you can and can't do. Not to mention the plethora of SQL code generation bugs associated with all the designers out there.
The closest thing to a visual view is the Schema View, which shows tables, columns, indexes etc. in a tree view, and you can see the properties from there.
By focusing the development process and the Visual Studio project on the .sql source files, teams can cooperate on the database design using tried and tested source control methods (check-out/check-in, lock file, conflict detection and merge integration, branching etc).
The deliverable of a VSTS DB project is the .dbschema file, which can be deployed to any server via the vsdbcmd tool. This is an intelligent deployment that does a schema synchronization (merges new objects, modifies existing ones) and can detect and prevent data loss during deployment. By contrast, in the 'classical' way of doing it (from VS Server Explorer, or from SSMS) the deliverable was the MDF file itself, the database. This poses huge problems at deployment. The deployment of v1 is really smooth (just copy the MDF, done), but as soon as you want to release v1.1 you're stuck: you have a new MDF, but production is running on its own MDF and does not want to replace it with yours, since that means data loss. Now you turn around and wish you had some sort of database schema version deployment story, and this is what VSTS DB does for you from day 0.
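For reference, a vsdbcmd deployment of the .dbschema looked roughly like this; the server and database names are placeholders, and switch spellings varied between releases, so treat it as a sketch:
rem Deploy the compiled .dbschema to a target server
rem (/dd:+ actually deploys rather than just generating the script)
vsdbcmd.exe /a:Deploy /dd:+ /model:MyDB.dbschema /cs:"Server=MyServer;Integrated Security=true" /p:TargetDatabase=MyDB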
You might be better off downloading the SQL Server Management Studio for SQL Server 2008 Express - http://www.microsoft.com/downloads/details.aspx?FamilyId=C243A5AE-4BD1-4E3D-94B8-5A0F62BF7796
Using this tool you can create your database with the visual tools provided by that software. You can run your .sql script to build up the database and then visually adjust column settings, table relationships, etc.
Once you have your database designed open up Visual Studio and open a connection to this database using the Server Explorer.
Visual Studio is OK for simple tweaks and changes to an existing database structure, but for anything serious, like creating the database from scratch, I would recommend using Management Studio. It's free and built for that exact purpose :)