Can I manage a database using a database project without knowing the connection string? - sql-server

I have created a database project, and I am able to deploy my changes to my SQL Server. Now I have to deploy the same changes to another environment, and I don't want to lose the existing data. I don't have access to that SQL Server, so I don't know the connection string.
I have some options, like deploying the .dacpac file or a .sql script, but that first deletes the database and then creates a new one, so I lose my data.
Please help me. Is there any option for this?

The options I see for this are:
Ask for a backup (or extract the schema using Tasks -> Generate Scripts in SSMS) - restore this somewhere and use sqlpackage to generate a deployment script you can ask them to run
Ask them to run sqlpackage.exe and either generate a script or run it directly
Ask them for permissions so you can do it
If the database is being deleted, then you have the option "CreateNewDatabase" set to true, which would be bad in a production environment, so remove it or set it to false!
If they run it, or you ask for permissions, these are the minimum permissions you need to generate a script (to run the script you will probably need dbo):
https://the.agilesql.club/Blogs/Ed-Elliott/What-Permissions-Do-I-Need-To-Generate-A-Deploy-Script-With-SSDT (my blog)
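As a rough sketch (the server, database, and file names here are placeholders, not from the answer above), generating a deployment script from a dacpac against a restored copy, with CreateNewDatabase explicitly off, might look like this:

sqlpackage.exe /Action:Script /SourceFile:"C:\Build\MyDb.dacpac" /TargetServerName:"RestoredCopyServer" /TargetDatabaseName:"MyDb" /OutputPath:"C:\Build\deploy.sql" /p:CreateNewDatabase=False

The resulting deploy.sql can then be handed to whoever does have access to the target server.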

Related

Access Build variables inside Pre and Post deployment scripts SSDT

I have just set up an SSDT project which I want to use to create local databases on the SQL server hosted locally on my machine.
I want to add some pre- and post-deployment SQL scripts for initialization and cleanup.
Since the server and the database name can change, I have defined two build variables using the project properties, one each for the target server and the target database.
However, I can't seem to access them inside the post-deployment scripts.
The syntax below won't build the project -
use [$(TargetDatabaseName)]
This builds, but then fails while publishing -
use ['$(TargetDatabaseName)']
and the error says that 'myTargetDB' doesn't exist (myTargetDB was passed as a value at the time of publishing).
This might be a trivial thing, but I am just not able to get around it. I am on SQL Server 2016, if that matters.
Make sure that you put both scripts in SQLCMD mode.
Once your target variable is defined in the project properties, it can be safely used in the post-deployment script.
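For illustration, once the variable is defined in the project properties and the script file is in SQLCMD mode, a post-deployment fragment like this should build and resolve at publish time (the variable name is taken from the question; the print is just a placeholder for real cleanup work):

use [$(TargetDatabaseName)];
go
print 'Running post-deployment cleanup...';
go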
If you have any questions, feel free to contact me.
There is a predefined variable, $(DatabaseName), for the name of the target database. You don't need to create your own; and even if you do, you would need to set the same value to both of them.
Not sure about the target server. In most cases, SQL scripts are generated with the assumption that a connection to the correct server is already established. Sure, you can change the current server using something like :connect $(TargetServerName), but I think it will only lead to confusion (and I'm not sure it will work, actually).
The only exception I can think of is that you can't use SQLCMD variables to parameterise the logical/physical names of the database files - these have to be hardcoded.
All other variables, if declared in the project properties, should be accessible everywhere. Below is a fragment of a post deploy from one of my projects:
use [master];
go
print 'Switching database ownership to sa...';
go
alter authorization on database::[$(DatabaseName)] to [sa];
go
use [$(DatabaseName)];
go
print 'Creating database master key...';
go
-- Create database master key
create master key encryption by password = '$(DMK_Key)';
go
print 'Running database setup...';
go
exec dbo.init_database;
go
It is possible, however, that you are trying to reference another database, located on a different server. If that's the case, you need to follow a different approach, namely: build a project for that remote database and add its DACPAC to the list of database references, using the Add database reference... menu. There, you will be able to specify variables for both the (linked) server and the database name.
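Once such a reference is in place, those variables can be used in four-part names; a minimal sketch (the variable and object names here are placeholders):

select t.some_column
from [$(RemoteServer)].[$(RemoteDb)].dbo.some_table as t;
go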

How to create a database using a Database Project in Visual Studio if it doesn't exist?

I have a database project for my personal project, and I am trying to deploy my code to my DEV server. I frequently delete and re-create my DEV server; right now, the DEV server is newly created with SQL Server. Every time I want to deploy my code, I have to manually create the database and then publish the database project. I want to automate the creation of the database as part of the database project deployment.
Right now, I have a script that creates the database, but I have to execute it manually. It works perfectly, but I want to automate this step as well.
Is this even possible? If yes, then how? Please explain step by step. Also, what should we put for Initial Catalog in the connection string?
Edit:
I tried to create Database by using
CREATE DATABASE LocalDbTest
in a pre-deployment script, but it didn't work. It creates the database, but the tables are not created under it. Since I used the master database as the default database, it creates the tables under master. It won't let me select the LocalDbTest database as the default because it is not yet created, so I have to select master as my default database. I tried to change the database with:
USE LocalDbTest
GO
I used it just after creating the database, but this didn't work because, when generating the script, it changes back to the default database. This part is added automatically when the script is generated:
USE [$(DatabaseName)];
GO
Also, Visual Studio is not letting me put the database name in front of the table name, like:
CREATE TABLE [LocalDbTest].[dbo].[TestTable]
I am getting the error:
When you create an object of this type in a database project, the object's name must contain no more than two parts.
If you have a script ready for database creation, you can use the Pre-build event to call SQLCMD and run your script.
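A minimal sketch of such a pre-build event command (the server name and script path are placeholders; -E uses Windows authentication):

sqlcmd -S (local) -E -i "$(ProjectDir)Scripts\CreateDatabase.sql"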
Edit:
If you have trouble pointing to a database that does not exist, you may have to manually edit the publish profile (e.g., dev.publish.xml) and set the TargetDatabaseName element explicitly. You can also set the CreateNewDatabase element to True if you want the database to be recreated every time it gets published.
Answer:
You can use a publish profile and hardcode the target database in it.
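A minimal sketch of what such a publish profile (e.g., dev.publish.xml) might contain; the connection details here are placeholders:

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <TargetDatabaseName>LocalDbTest</TargetDatabaseName>
    <TargetConnectionString>Data Source=(local);Integrated Security=True</TargetConnectionString>
    <!-- Optional: drops and recreates the database on every publish (destructive!) -->
    <CreateNewDatabase>True</CreateNewDatabase>
  </PropertyGroup>
</Project>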

Bitbucket and Database Development

I have a Windows server with MS SQL Server running on it.
On the SQL Server, developers have created stored procedures, views, tables, and triggers.
On the Windows server, developers have created shell scripts.
I would like to start versioning the code described above in a BitBucket repository. I have a repository created in BitBucket.
1. How should the branches be organized in this repository? e.g. "SQL Server\Database\...", "Windows Server\shell_scripts\..."
2. Can I connect BitBucket to SQL Server and Windows Server and specify which code needs to be versioned?
3. Are both options 1 and 2 above possible?
I just need to version control the changes to the code and have the ability to mark under which project the code change was made.
I am new to BitBucket and am using its web front end. I do not know how to configure command-line access, so please try not to reference Bitbucket commands. Sorry if I sound confusing.
Please help.
I know this is an old question but anyway, in principle I'd recommend:
Put all the server shell scripts into one place and make that a git repo linked to your bitbucket repo
Add a server shell script to export what you want version controlled from the SQL db
The export from the SQL db should be to text files so they are easily 'diffable'
You might as well make the export to a sub-directory within the shell scripts repo so that everything is in one place and can't get out of sync
So you only have one branch, not a separate one for server shell scripts and db
Make sure people run the export script and then commit everything when they make a change
You ideally have a test server, which means you'd want a way to push changes from the repo into the SQL db. I presume you can do this with a script, by deleting the server setup and re-creating it from the text files.
So basically, you can't connect an SQL db to bitbucket directly. You need scripts to read and write to the db from a repo.
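As a sketch of the export script recommended above (the tool choice and all names here are assumptions, not from the answer itself), Microsoft's mssql-scripter can script a database out to diffable .sql files, one file per object:

mssql-scripter -S localhost -d MyDb --file-per-object -f ./sql-export
git add ./sql-export
git commit -m "Snapshot of MyDb schema"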

Automating import of data-tier application (SQL database) from Azure with a Master Key

When I extract a data-tier application from a Microsoft Azure SQL database that has a Master Key, I am unable to import it into SQL Server on my local PC.
You will find others had this issue here: SSMS 2016 Error Importing Azure SQL v12 bacpac: master keys without password not supported
However the steps provided as the answer did not work on my installation.
The steps are:
1. Disable auditing on the server (or database)
2. Drop the database master key with DROP MASTER KEY command.
Microsoft Tech Support verified that this solution did not work on my installation of SQL Server, and after actually taking remote control of my PC and troubleshooting, they were unable to determine why this was occurring.
I needed to find a way to remove the Master Key from the bacpac file. I have a PowerShell script that removes the Master Key from the BACPAC file, but it requires extracting, renaming files, and running scripts from Windows PowerShell to get the DB imported.
Does anyone have a program or set of scripts which would automate the process of removing the Master Key and importing a SQL DB from Azure with a single command?
I am new to this forum. Please do not be harsh with this post. I am trying to do the best I can to help others to save the many hours I spent coming up with this.
I have cobbled together a T-SQL script which calls a Windows PowerShell script (also cobbled together from multiple sources) to extract a data-tier application (database) from a Microsoft Azure SQL database and import it into a database on my local SQL Server by running ONE command. Over the months, I found some of the code in my scripts on other blogs etc. I am not able to give credit to those folks, as I didn't keep track of where I got the info. If you are reading this and you see your code, please take credit. I apologize for not being able to credit you for your work.
There may be configuration settings on your PC and your local SQL Server that need adjustment, as this entire solution requires pretty much full access to your computer. If you run into trouble with compatibility, let me know and I will do the best I can to let you know how my system is configured, in case it helps you.
I am using Windows 10 Pro and Microsoft SQL Server Developer (64-bit) v12.0.5207.0
I have placed the two files that do all the work on GitHub here: https://github.com/Wingloader/Auto-Azure-BACPAC-Download.git
GetNewBacpac-forGitHub.sql
GetAzureDB-forGitHub.ps1
WARNING: The Powershell script file will store your SQL sa password and your Azure SQL login in clear text!
If you don't want to do this, don't use this solution.
My computer is owned and controlled solely by me so I am able to open up the security in my system and I am willing to assume the responsibility of safeguarding it.
The basic steps of my solution are accomplished as follows (steps 1 and 2 are optional, as I like to keep a version of the DB I am working with as of the point in time I pull down a clean production copy of my Azure DB):
Back up the current DB as MyLocalDB.bak.
Restore that backup from step 1 to a new DB with the previous day's date stamped at the end of the DB name (e.g., MyLocalDB20171231).
Delete the original MyLocalDB database (needed so we can recreate the DB with the original name later on)
Pull down the production database from Azure and create a new database with the name MyLocalDB.
The original DB is deleted in step 3 so that the restored DB can use the original name (important when you have data connections referring to that DB name)
In Step 4, the work of extracting the data-tier application DB from Azure is initiated by this line in the T-SQL:
EXEC master..xp_cmdshell '%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe -File "C:\Git\GetUpdatedAzureDB\GetAzureDB.ps1"'
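Note that xp_cmdshell is disabled by default; if it has never been enabled on the local instance, something like this is needed first (it requires sysadmin, and is part of the "full access" caveat above):

exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'xp_cmdshell', 1;
reconfigure;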
The Powershell script does the following:
The target for the extract is a file named today.bacpac (hardcoded). The first thing to do is delete that file if it already exists.
Extract the DB from Azure into the today.bacpac file.
Note: my DB on Azure has a Master Key for encryption. This will need to be removed from the files prior to importing the bacpac file into your local DB, or the import will fail (this may not be required in SQL Server 2017, according to my previous conversations with MS Support). If you do not use a Master Key, you can either strip out the code that does this step or just leave it alone; it won't remove anything that isn't there, it would just add a little overhead.
Open the today.bacpac file (zip file) and remove the MasterKey node from the Origin.xml file.
Modify the Model.xml file to update the SHA hash. This is required so that the file does not appear to have been tampered with when SQL Server opens the bacpac file.
Re-zip the files into a new file, today-patched.bacpac.
Run this line of code (from PowerShell) to import the bacpac file into SQL Server:
& "C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\SqlPackage.exe" /Action:Import /SourceFile:"C:\Git\GetUpdatedAzureDB\today-patched.bacpac" /TargetConnectionString:"Data Source=MyLocalSQLServer;User ID=sa;Password=MySAPassword;Initial Catalog=MyLocalDB;Integrated Security=false;"
After editing the two files to provide your own paths, usernames, and passwords, run the SQL script. You do not need to edit the scripts again; you can re-run the SQL script without modification and it will create a fresh copy of your Azure DB.
Done!
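For reference, the Azure-side extract described in step 4 can be done with SqlPackage's Export action; a minimal sketch (the server, credentials, and paths here are placeholders, not taken from the GitHub scripts):

& "C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\SqlPackage.exe" /Action:Export /SourceConnectionString:"Server=tcp:myserver.database.windows.net,1433;Initial Catalog=MyAzureDB;User ID=azureadmin;Password=MyAzurePassword;" /TargetFile:"C:\Git\GetUpdatedAzureDB\today.bacpac"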

Deploying Dacpacs to an Availability Group in a locked-down production

My DBA and I are trying to work out how to effectively use Microsoft's Database projects and the Dacpacs they generate to simplify our production deployment system.
Ideally, I would be able to build and/or publish the .sqlproj, generating a .dacpac file, which can then be uploaded to the production server and used to upgrade the database from whatever version it was to the latest version. This is similar to how we're doing website deployments, where I publish to a package, and then that package is uploaded to the server and imported into IIS.
However, we can't work out how to make this work. The DBA has already created the database and added it to our Availability Groups. And every time we try to apply the Dacpac, it tries to adjust settings which it can't because of the AGs.
Nothing I've been able to do has managed to create a .dacpac file which doesn't try to impose settings on the database. The closest option I've found will exclude them when publishing, but as best as I can tell you can't publish to an inaccessible database, and only the DBA has access to the production server.
Can I actually use dacpacs this way?
There are two parts to this. Firstly, how do you stop deploying settings you don't want to deploy - can you give an example of one of the settings that doesn't apply?
For the second part, where you do not have access to the SQL Server, there are a few different ways to handle this:
Use an offline copy to generate the deploy script
Get the DBA to generate the deploy script
Get the DBA to deploy using the dacpac
Get read only access to the database
Option 1: "Use an offline copy to generate the deploy script"
You need to compare the dacpac to something, and if you do not have a TDS connection (default instance, default port tcp:1433), then you can use a version of the database that matches production, either through:
Use log shipping to restore a copy of production somewhere you can access it
Get a development db and production in sync, then every release goes to the dev and prod databases, ensuring that they stay in sync
The log-shipped copy is the easiest; if it is on a development server, you can normally have server permissions that give you access, or you can create the correct permissions at the database level but not at the production server level.
If the data is sensitive, then the log-shipped copy might not be appropriate, so you could try to keep a development and production database in sync, but this is difficult and requires that the DBA be "well trained" into not running anything that isn't first run against the dev database as well.
Once you have access to a database that has exactly the same schema as the production database, you can use sqlpackage.exe /Action:Script to generate a deploy script; in fact, because it isn't the production database, you can generate the script as part of your CI process :).
Option 2: "Get the DBA to generate the deploy script"
This is to get the DBA to copy the dacpac to the production server and to use the sqlpackage.exe that is in the "Program Files (x86)\Microsoft Sql Server\Version\DAC\bin" folder to compare the dacpac to the database and generate a script that he can review before deploying.
Option 3: "Get the DBA to deploy using the dacpac"
This is similar to option 2, but instead of generating a script that he deploys in SSMS, he just uses sqlpackage.exe /Action:Publish to deploy the changes directly.
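A hedged sketch of that direct deployment (the paths and names here are placeholders):

sqlpackage.exe /Action:Publish /SourceFile:"C:\Deploy\MyDb.dacpac" /TargetServerName:"ProdServer" /TargetDatabaseName:"MyDb"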
Option 4: "Get read only access to the database"
This is actually my preferred option, as it means that you always build scripts against what is guaranteed to be the state of production (as it is production). In your case, you would need to get the TCP port opened between your machine (or ideally your build machine) and the SQL Server, and then you will need these permissions:
https://the.agilesql.club/Blogs/Ed-Elliott/What-Permissions-Do-I-Need-To-Generate-A-Deploy-Script-With-SSDT
As I said, option 4 is always my preferred, but I understand that it isn't always possible.
Options 2 and 3 are fraught with worry, as you will be running scripts that haven't been tested anywhere; with options 1 and 4 you can generate the scripts and then deploy them to a test/QA database, as long as those databases have the same schema as production. The scripts can also go through a code review process.
If you do option 2/3, then I would create a batch file or PowerShell script that drives sqlpackage.exe (see the sketch below); and if they deploy from a different server that doesn't have sqlpackage.exe, you can copy the DAC folder to that machine and run sqlpackage from there - you do not have to actually install it (you may also need to copy in Microsoft.SqlServer.TransactSql.ScriptDom.dll from the "Program Files (x86)\Microsoft Sql Server\Version\SDK\Assemblies" folder).
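A minimal sketch of such a PowerShell driver, run next to a copied DAC folder (all paths, file names, and parameter names here are placeholders):

param(
    [Parameter(Mandatory)] [string] $TargetServer,
    [Parameter(Mandatory)] [string] $TargetDatabase
)
# Run sqlpackage from the DAC folder copied next to this script (no install needed)
& "$PSScriptRoot\DAC\bin\sqlpackage.exe" /Action:Script `
    /SourceFile:"$PSScriptRoot\MyDb.dacpac" `
    /TargetServerName:$TargetServer `
    /TargetDatabaseName:$TargetDatabase `
    /OutputPath:"$PSScriptRoot\deploy.sql"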
I hope this helps, if you have any more questions feel free to post here or ping me :)
ed
