Pass SQLCMD variables to dbDacFx provider with msdeploy

I'm currently using msdeploy's dbDacFx provider to deploy a .dacpac to a database. The dacpac expects three SQLCMD variables. The syntax I am using looks like this:
-setParam:kind=SqlCommandVariable,scope=Database.dacpac,match=foo1,value="foo1 value"
I've been trying everything I can find, but unfortunately there is next to no documentation around this process. The output I am getting from msdeploy says: "Missing values for the following SqlCmd variables: foo1 foo2 foo3".
If anybody can spot what I'm doing wrong that would be fantastic. If anybody knows where to get some documentation that would be fantastic as well. I would gladly accept any answers that say what I should do, but I would love to understand what all of these values are and WHY I'm doing it wrong.
Edit:
At this point it would appear that such an option does not exist for dbDacFx. Our use case for this feature is attempting to deploy the same template database (managed in visual studio sql projects) to 70+ databases, which we would like to do in parallel. dbSqlPackage (besides being deprecated) is not thread safe and does not allow for parallel deployments. dbDacFx overcomes this shortcoming however it cannot (in my experimentation) be passed SqlCmdVariables that reside at the project level, and it cannot use publish profiles like dbSqlPackage can. I'm in contact with a member of the web deploy team and will post any updates if I am able to figure out how to overcome any of these shortcomings.

After emailing some Microsoft employees from the MSDeploy team.... it turns out this is not possible at this time, but may be considered for a future release.

SSIS - best practices for connection managers -- compose out of parameters?

I've worked a lot with Pentaho PDI so some obvious things jump out at me.
I'll call Connection Managers "CMs" from here on out.
Obviously, Project CMs > Package CMs for extensibility/reusability. It seems a rare case indeed where you need a Package-level CM.
But I'm wondering about another best practice: should each Project CM itself be composed of variables (or parameters, I guess)?
Let's talk in concrete terms. There are specific database sources. Let's call two of them in use Finance2000 and ETL_Log_db. These have specific connection strings (password, source, etc).
Now if you have 50 packages pulling from Finance2000 and also using ETL_Log_db ... well ... what happens if the databases change? (host, name, user, password?)
Say it's now Finance3000.
Well I guess you can go into Finance2000 and change the source, specs, and even the name itself --- everything should work then, right?
Or should you simply build a project-level connection called "FinanceX" or whatever and compose it from parameters, so the connection string is something like #Source + #credentials + #whatever?
Or is that simply redundant?
I can see one benefit of the parameter method is that you can change the "logging database" on the fly even within the package itself during execution, instead of passing parameters merely at runtime. I think. I don't know. I don't have a mountain of experience with SSIS yet.
SSIS, starting from version 2012, has the SSIS Catalog DB. You can create all your 50 packages in one Project, and all these packages share the same Project Connection Managers.
Then you deploy this Project into the SSIS Catalog; the Project automatically exposes Connection Manager parameters with a CM prefix. The CM parameters are part of the Connection Manager definition.
In the SSIS Catalog you can create so-called Environments. In an Environment you define variables, each with a name and a datatype, and store their values.
Then - the most interesting part - you can associate an Environment with the uploaded Project. This allows you to bind a project parameter to an environment variable.
At package execution you specify which Environment to use for the connection strings. Yes, you can have several Environments in the Catalog, and you choose one when starting the package.
Cool, isn't it?
Moreover, passwords are stored encrypted, so no one can copy them. The values of these Environment variables can be configured by support engineers who have no knowledge of SSIS packages.
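For illustration, the Environment plumbing can be scripted with the built-in stored procedures in the SSISDB catalog. This is a rough sketch; the folder, project, environment, and variable names below are made up:

EXEC SSISDB.catalog.create_environment
    @folder_name = N'ETL', @environment_name = N'Test';

EXEC SSISDB.catalog.create_environment_variable
    @folder_name = N'ETL', @environment_name = N'Test',
    @variable_name = N'FinanceConnStr', @data_type = N'String', @sensitive = 0,
    @value = N'Data Source=devserver;Initial Catalog=Finance2000;Integrated Security=SSPI;';

-- Associate the Environment with the deployed Project (relative reference)
DECLARE @ref_id bigint;
EXEC SSISDB.catalog.create_environment_reference
    @folder_name = N'ETL', @project_name = N'FinancePackages',
    @environment_name = N'Test', @reference_type = 'R',
    @reference_id = @ref_id OUTPUT;

-- Bind the auto-exposed CM parameter to the environment variable ('R' = referenced)
EXEC SSISDB.catalog.set_object_parameter_value
    @object_type = 20, @folder_name = N'ETL', @project_name = N'FinancePackages',
    @parameter_name = N'CM.Finance2000.ConnectionString',
    @parameter_value = N'FinanceConnStr', @value_type = 'R';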
More Info on SSIS Catalog and Environments from MS Docs.
I'll give my fair share of experience.
I recently had a similar experience at work: the names of our 2 main databases changed, and I had no issues or downtime in the schedules.
The model we use is not the best, but for this, and for other reasons, it is quite comfortable to work with. We use BAT files to pass named parameters into a "Master" job, and depending on 2 of those parameters, the job runs against an alternate database/host.
In every KTR/KJB we use the variables ${host} and ${dbname}, and these parameters are passed in by each BAT file. So when we had to change the names of the hosts and databases, it was a simple Replace All text match in Notepad++, and done: 2,000+ BAT files fixed, and no downtime.
Having a variable for the Host/DB Name for both Client Connection and Logging Connection lets you have that flexibility when things change radically.
You can also use the kettle.properties file for the logging connection.
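For illustration, each BAT file in a model like that boils down to a one-liner along these lines (a sketch; the job path and values are made up, while ${host} and ${dbname} are the variables the KTR/KJB files reference):

call Kitchen.bat /file:C:\etl\jobs\master.kjb "/param:host=dbserver01" "/param:dbname=Finance3000" /level:Basic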

Work with a database using Spock and Geb

I hope someone has already faced the issue of verifying that an application shows correct data from a database. I reviewed how Groovy uses SQL, but I have no idea where and how I should do that. I'm just starting to use Gradle+Spock+Geb for testing an application. I have a few files where I describe a couple of pages from the application, a couple of modules, and a file with a Spock specification. Where and how do I connect to the Oracle DB, run SQL, and compare the query results with the data the application shows?
P.S. I write everything in Notepad++ and launch from the command line with 'gradlew firefoxTest'. Is there a more comfortable way to work with Gradle+Spock+Geb?
Thanks in advance.
Because there are no other answers, I wanted to provide a solution someone at my company thought of. This assumes you already have a project that uses some sort of JDBC. In our case it is JDBI.
The idea is to extend ClassLoader and then use that to directly access the data access object class via the JVM. That idea should work.
I have not tested it out because it doesn't completely fit our use-case. I'll admit that this does not completely apply to your use case, but technically you could just run the jar of an existing project, which can access the database.
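For what it's worth, a more direct route is to open a JDBC connection from the Spock specification itself using groovy.sql.Sql. Below is a minimal sketch assuming the Oracle JDBC driver is on the test classpath; CustomerPage, customerName, and the customers table are made-up names:

import geb.spock.GebSpec
import groovy.sql.Sql

class CustomerDataSpec extends GebSpec {

    Sql sql

    def setup() {
        // Connection details are placeholders - adjust for your environment
        sql = Sql.newInstance('jdbc:oracle:thin:@dbhost:1521:ORCL',
                'scott', 'tiger', 'oracle.jdbc.OracleDriver')
    }

    def cleanup() {
        sql?.close()
    }

    def "customer name on the page matches the database"() {
        given: 'the expected value, read straight from the DB'
        def row = sql.firstRow('SELECT name FROM customers WHERE id = ?', [42])

        when: 'the application page is opened'
        to CustomerPage   // a Geb Page you have already described

        then: 'the page shows the same value'
        customerName.text() == row.name
    }
}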

How to determine at runtime if I am connected to production database?

OK, so I did the dumb thing and released production code (C#, VS2010) that targeted our development database (SQL Server 2008 R2). Luckily we are not using the production database yet, so I didn't have the pain of trying to recover and synchronize everything...
But, I want to prevent this from happening again when it could be much more painful. My idea is to add a table I can query at startup and determine what database I am connected to by the value returned. Production would return "PROD" and dev and test would return other values, for example.
If it makes any difference, the application talks to a WCF service to access the database so I have endpoints in the config file, not actual connection strings.
Does this make sense? How have others addressed this problem?
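For example, the marker table could be as simple as this (names are made up):

-- Created once per environment; dev and test would insert 'DEV'/'TEST' instead
CREATE TABLE dbo.EnvironmentInfo (EnvironmentName varchar(10) NOT NULL);
INSERT INTO dbo.EnvironmentInfo (EnvironmentName) VALUES ('PROD');

-- Queried by the application (via the WCF service) at startup
SELECT EnvironmentName FROM dbo.EnvironmentInfo;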
Thanks,
Dave
The easiest way to solve this is to not have access to production accounts. Those are stored in the Machine.config file for our .NET applications. In non-.NET applications this is easily duplicated by having a config file in a common location, or (dare I say) a registry entry which holds the account information.
Most of our servers are accessed through aliases too, so no one really needs to change the connection string from environment to environment. Just grab the user from the config, and the server alias in the hosts file points you to the correct server. This also removes the headache of having to update all our config files when we switch DB instances (change hardware, etc.).
So even with the ClickOnce deployment and the endpoints, you can publish a new endpoint URI in a machine config on the end users' desktops (I'm assuming this is an internal application) and then reference that in the code.
If you absolutely can't do this, because it might be a lot of work (the last place I worked had 2000 call center people, so this push was a lot more difficult, but still possible), you can always set up an automated build server which modifies the app.config file for you as the last step of building the application. You then ALWAYS publish the compiled code from the automated build server. Never make the app.config change for something like this a manual step in the developer's process; that will always lead to problems at some point.
Now if none of this works, your final option (I've done this one too), which I hated, but it worked, is to look up the value off of a mapped drive. Essentially, everyone in the company has a mapped drive, say R:. This is where you have your production configuration files, etc. The prod account people map that drive to one location with the production values, and the devs etc. map to another with the development values. I hate this option compared to the others, but it works, and it can save you in a pinch when the others become tedious and difficult (due to, say, office politics or setting up a build server).
I'm assuming your production server has a different name than your development server, so you could simply SELECT @@SERVERNAME AS ServerName.
Not sure if this answer helps you in an assumed .NET environment, but within a *nix/PHP environment, this is how I handle the same situation.
OK, so I did the dumb thing and released production code
There are times when some app behavior is environment dependent, as you alluded to. In order to provide the ability to check between development and production environments, I added the following line to the global /etc/profile.d/custom.sh config (CentOS):
export SERVICE_ENV=dev
And in code I have a wrapper method which grabs an environment variable by name and localizes its value, making it accessible to my application code. Below is a snippet demonstrating how to check the current environment and react accordingly (in PHP):
public function __call($method, $params)
{
    // Reduce chatter on production envs
    // Only display debug messages if override told us to
    if (($method === 'debug') &&
        (CoreLib_Api_Environment_Package::getValue(CoreLib_Api_Environment::VAR_LABEL_SERVICE) === CoreLib_Api_Environment::PROD) &&
        (!in_array(CoreLib_Api_Log::DEBUG_ON_PROD_OVERRIDE, $params))) {
        return;
    }
}
Remember, you don't want to pepper your application logic with environment checks, save for a few extreme use cases as demonstrated in the snippet. Rather, you should be controlling access to your production databases using DNS. For example, within your development environment the db hostname mydatabase-db would resolve to a local server instead of your actual production server. And when you push your code to the production environment, DNS will correctly resolve the hostname, so your code should "just work" without any environment checks.
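For example, a development machine can pin that hostname to a local server with a single hosts entry (a sketch):

# /etc/hosts on a development box
127.0.0.1    mydatabase-db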
After hours of wading through textbooks and tutorials on MSBuild and app.config manipulation, I stumbled across something called SlowCheetah - XML Transforms (http://visualstudiogallery.msdn.microsoft.com/69023d00-a4f9-4a34-a6cd-7e854ba318b5) that did what I needed it to do less than an hour after I first stumbled across it. Definitely recommended! From the article:
This package enables you to transform your app.config or any other XML file based on the build configuration. It also adds additional tooling to help you create XML transforms.
This package is created by Sayed Ibrahim Hashimi, Chuck England and Bill Heibert, the same Hashimi who authored THE book on MSBuild. If you're looking for a simple, ubiquitous way to transform your app.config, web.config or any other XML file based on the build configuration, look no further -- this VS package will do the job.
Yeah I know I answered my own question but I already gave points to the answer that eventually pointed me to the real answer. Now I need to go back and edit the question based on my new understanding of the problem...
Dave
I'm assuming your production server has a different IP address. You can simply use
SELECT CONNECTIONPROPERTY('local_net_address') AS local_net_address
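and compare the result against the known production address at startup, e.g. (a sketch; the address is a placeholder, and note the property is NULL for local shared-memory connections):

SELECT CASE
           WHEN CONVERT(varchar(48), CONNECTIONPROPERTY('local_net_address')) = '10.0.0.5'
           THEN 'PROD'
           ELSE 'NON-PROD'
       END AS Environment;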

Visual Studio 2010: Is there a way to deploy a single database object that retains the object in normal builds?

Developers have expressed a desire to deploy a single database object from a Sql Server 2008 project, such as a stored procedure, without having to use the build/deploy or schema comparison functions.
To enable this, developers have created their database object scripts including 'if exists.. drop' checks at the top of the script and have included the grant statements for the objects in their scripts.
This results in build errors that then prevent the build/deploy or schema compare functions from operating. So then, developers mark the object as "not in build" but then the object can't be deployed at all using build/deploy or schema compare.
Has anyone found a way of quickly deploying a single database object from visual studio that does not involve schema compare or build/deploy which does not remove the object from the normal build process? Manual copy/paste is not an option but scripting/macros which effectively do the same would be viable.
SQL Server Data Tools (SSDT) now provides this functionality by way of comparison. Individual differences identified in the comparison may be deployed. We have found that during development, publishing tends to result in overlaying the simultaneous changes that other developers are making on the shared development database server. The comparison tool has been working fairly well, except for a very annoying crash issue that occurs when the comparison window is closed. We are using 32bit Vista with 3GB of RAM and VS 2010. This issue may not occur in other configurations.
First I'd like to comment that you seem to be fighting the intended paradigm with regards database projects.
If your project is deployed somewhere [1],
then there should be a corresponding branch / label [2] in your source repository.
If you change a single object [delta] from the above baseline [2], and build the new version [3],
then when you deploy [3] to [1], the deployment script should find only one difference, and the net effect of the change [delta] is all that will be applied.
So, in theory there's no reason not to just simply build and deploy.
TIP: When taking on a new paradigm, it pays to embrace it fully; partial adoption tends to cause its own set of problems.
However, that said, I may be able to help.
We have a similar need because actual deployment is out of our control. We control only part of the database, and have to provide our changes to another team for review.
We have to provide individual 'self-contained' scripts for each object, with if exists..drop at the top and grant statements at the bottom.
However, we want the other benefits of working with the database project, and then simply copy out the individual script files when we "deploy".
The solution we came up with was to place the extra "bits" in a comment block as follows:
Disabled (as checked in, so the project builds cleanly):

/*SINGLE_OBJECT_DEPLOYMENT
if exists (...)
DROP ...
--*/

Enabled (toggled for single-object deployment):

--/*SINGLE_OBJECT_DEPLOYMENT
if exists (...)
DROP ...
--*/
Note that a simple search and replace of /*SINGLE_OBJECT_DEPLOYMENT with --/*SINGLE_OBJECT_DEPLOYMENT enables the commented out code, so it can be easily put into a macro, or other semi-automated process.
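For instance, the toggle can be a one-liner run against a script file (a sketch using PowerShell; the file name is a placeholder):

powershell -Command "(Get-Content MyProc.sql) -replace '^/\*SINGLE_OBJECT_DEPLOYMENT', '--/*SINGLE_OBJECT_DEPLOYMENT' | Set-Content MyProc.sql"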

How to keep Stored Procedures and other scripts in SVN/Other repository?

Can anyone provide some real examples as to how best to keep script files for views, stored procedures and functions in a SVN (or other) repository.
Obviously one solution is to have the script files for all the different components in a directory or more somewhere and simply use TortoiseSVN or the like to keep them in SVN. Then whenever a change is to be made, I load the script up in Management Studio, etc. I don't really want this.
What I'd really prefer is some kind of batch script that I can run periodically (nightly?) that would export all the stored procedures / views etc that had changed in a given timeframe and then commit them to SVN.
Ideas?
Sounds like you're not wanting to use Revision Control properly, to me.
Obviously one solution is to have the script files for all the different components in a directory or more somewhere and simply using TortoiseSVN or the like to keep them in SVN
This is what should be done. You would have your local copy you are working on (developing new, tweaking old, etc.), and as single components/procedures/etc. get finished, you would commit them individually until you have to start the process over.
Committing half-done code just because it's been 'X' time since it was last committed is sloppy and guaranteed to cause anyone else using the repository grief.
I find it best to treat Stored Procedures just like any other compilable code: Code lives in the repository, you check it out to make changes and load it in your development tool to compile or deploy the code.
You can create a batch file and schedule it (a rough sketch follows these steps):
delete the contents of your scripts directory
using something like ExportSQLScript to export all objects to script/scripts
svn commit
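Roughly, the scheduled batch file looks like this (a sketch; paths are placeholders, and ExportSQLScript's exact arguments are left to its documentation):

@echo off
rem Nightly job (sketch): refresh the object scripts in an SVN working copy, then commit.
del /q C:\db-scripts\*.sql
rem (run ExportSQLScript here to re-script all objects into C:\db-scripts)
svn add --force C:\db-scripts
rem Note: objects dropped from the DB leave "missing" files that still need svn delete handling.
svn commit C:\db-scripts -m "Nightly automated export of database objects"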
Please note: although you'll have the objects under source control, you'll not have the data or its progression (is that a renamed field, or 1 new field and 1 deleted?).
This approach is fine for maintaining change history. But, of course, you should never be automatically committing to the "production build" (unless you like broken builds).
Although you didn't ask for it: this approach also won't produce a set of scripts that will upgrade a current DB. You'll only have initial creation scripts. Recording data progression and creating upgrade scripts is beyond basic source control systems.
I'd recommend Redgate SQL Compare for this - it allows you to compare database versions and generate change scripts - it's also fairly easily scriptable.
Based on your expanded question, you really want to use DDL triggers. Check out this article that details how to create a changelog system for your database.
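In outline, the mechanism looks like this (a sketch, not the article's exact code; table and trigger names are made up):

-- Log table for schema changes
CREATE TABLE dbo.ChangeLog (
    EventTime   datetime      NOT NULL DEFAULT GETDATE(),
    LoginName   sysname       NOT NULL DEFAULT SUSER_SNAME(),
    EventType   nvarchar(100) NULL,
    ObjectName  nvarchar(256) NULL,
    TSQLCommand nvarchar(max) NULL
);
GO
-- Fires on CREATE/ALTER/DROP in this database and records the statement
CREATE TRIGGER trg_LogDDL ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    DECLARE @e xml = EVENTDATA();
    INSERT INTO dbo.ChangeLog (EventType, ObjectName, TSQLCommand)
    VALUES (@e.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
            @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(256)'),
            @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)'));
END;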
Not sure on your price range, however DB Ghost could be an option for you.
I don't work for this company (or own the product) but in my researching of the same issue, this product looked quite promising.
I should've been a little more descriptive. The database in question is for an internal ERP system, and thus we don't have many versions of our database, just Production/Testing/Development. When we've done a change request, some new fancy feature or something, we simply execute a script or series of scripts to update the procedures in question on the Testing database; if that is all good, then we do the same to Production.
So I'm not really after a full schema script per se, just something that can keep track of the various edits to the stored procedures over time. For example, PROCESS_INVOICE does stuff. It gets updated in some minor way in March. Some time later, in say May, it is discovered that in a rare case customers get double invoiced (or some other crazy corner case). I'd like to be able to see what has happened to this procedure over time. Currently, the way the development environment is set up here, I don't have that, which I'm trying to change.
I can recommend DBPro which is part of Visual Studio Team Edition. Have been using it for a few months for storing all parts of the database in Team Foundation Server as well as for deployment and database compares, etc.
Of course, as someone else mentioned, it does depend on your environment and price range.
I wrote a utility for dumping all of the relevant parts of my db into a directory structure that I use SVN on. I never got around to trying to incorporate it into the Manager but, if you're interested, it's here: http://www.reluctantdba.com/dbas-and-programmers/sqltools/svnforsql2005.aspx
It's free and, since I regularly run it, you know any bugs get fixed quickly.
You can always try integrating SourceSafe with SQL Server. Here's a quick start: link. To work with it you've got to have Management Studio Developer Edition.
