My connection strings are configured in Octopus Deploy through Library Variable Sets (library/variables/LibraryVariableSets), and everything works fine based on the target environment. What I want to do now is create new DB users for each project/app and set their UID/PWD in my connection strings in Octopus.
My question is: how can I configure this connection string per project, like what I already have with the project's local variables (app settings keys)?
P.S. Octopus version is 4.1.2
You can add variable references for the username and password to the connection string variable in the Library Variable Set:
Server=myServerAddress;Database=myDataBase;User Id=#{dbuserid}; Password=#{dbpassword};
Then, in each project that uses this library variable set, create new variables for dbuserid and dbpassword. When the variables are evaluated for the deployment, the placeholders in the connection string will be replaced with the values provided by the project variables.
The final value of the connection string variable during deployment:
Server=myServerAddress;Database=myDataBase;User Id=userid; Password=password;
More details:
This is what your library variable set should look like; you can add some scoping here too.
Link the Library Variable Set to the project:
Then set the user id and password in the project variables:
Then you can use the connection string in scripts or as variable replacement in your config files.
I tested it with the following PowerShell script:
write-host $connectionstring
write-host "#{connectionstring}"
Both statements print the evaluated connection string as output.
If you need to inject the connection string into a specific position in a JSON file, you can use the JSON Configuration Variables syntax and set the value to the connection string variable reference #{connectionstring}.
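For example (a sketch; the file and key names here are made up), given an appsettings.json like:

{
  "ConnectionStrings": {
    "Default": "placeholder, replaced at deploy time"
  }
}

you would enable JSON Configuration Variables for that file in your deployment step and add a project variable named ConnectionStrings:Default with the value #{connectionstring}.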
Variables within library variable sets aren't able to be scoped to individual projects, as they're designed to be global across projects. There's an active UserVoice suggestion to implement this, however:
https://octopusdeploy.uservoice.com/forums/170787-general/suggestions/31206961-library-variables-ability-to-scope-to-projects
You can currently define these variables within a set, and include that set into many projects if you want to define a default value. If you define a project variable with the same name, that will be considered more "specifically scoped" and thus be used when deploying. I don't think this will be of much help with things like connection strings that are project-specific.
I've worked a lot with Pentaho PDI so some obvious things jump out at me.
I'll call Connection Managers "CMs" from here on out.
Obviously, Project CMs > Package CMs for extensibility/reusability. It seems a rare case indeed where you need a Package-level CM.
But I'm wondering another best practice. Should each Project CM itself be composed of variables? (or parameters I guess).
Let's talk in concrete terms. There are specific database sources. Let's call two of them in use Finance2000 and ETL_Log_db. These have specific connection strings (password, source, etc).
Now if you have 50 packages pulling from Finance2000 and also using ETL_Log_db ... well ... what happens if the databases change? (host, name, user, password?)
Say it's now Finance3000.
Well I guess you can go into Finance2000 and change the source, specs, and even the name itself --- everything should work then, right?
Or should you simply build a project-level database connection called "FinanceX" or whatever and compose it of parameters, so the connection string is something like #Source + #credentials + #whatever?
Or is that simply redundant?
I can see one benefit of the parameter method: you can change the "logging database" on the fly, even within the package itself during execution, instead of passing parameters merely at runtime. I think. I don't know. I don't have a mountain of experience with SSIS yet.
SSIS, starting from version 2012, has the SSIS Catalog DB. You can create all 50 of your packages in one Project, and all these packages share the same Project Connection Managers.
Then you deploy this Project into the SSIS Catalog; the Project automatically exposes Connection Manager parameters with a CM prefix. The CM parameters are part of the Connection Manager definition.
In the SSIS Catalog you can create so-called Environments. In an Environment you define variables with a name and a data type, and store their values.
Then - the most interesting part - you can associate the Environment with the uploaded Project. This allows you to bind a project parameter to an environment variable.
At package execution you have to specify which Environment to use when specifying connection strings. Yes, you can have several Environments in the Catalog, and you choose one when starting the package.
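As a rough illustration, an Environment and its variables can also be scripted with the SSISDB catalog stored procedures (the folder, environment, and variable names below are invented):

EXEC SSISDB.catalog.create_environment
    @folder_name = N'ETL', @environment_name = N'Production';

EXEC SSISDB.catalog.create_environment_variable
    @folder_name = N'ETL', @environment_name = N'Production',
    @variable_name = N'FinanceConnStr', @data_type = N'String',
    @sensitive = 0, @value = N'Data Source=finance-host;Initial Catalog=Finance2000;Integrated Security=SSPI;',
    @description = N'Connection string bound to the Finance CM parameter';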
Cool, isn't it?
Moreover, passwords are stored encrypted, so no one can copy them. The values of these environment variables can be configured by support engineers who have no knowledge of the SSIS packages.
More info on the SSIS Catalog and Environments is available in the MS Docs.
I'll give my fair share of experience.
I recently had a similar experience at work: our two main databases' names changed, and I had no issues or downtime on the schedules.
The model we use is not the best, but for this and for other reasons it is quite comfortable to work with. We use BAT files to pass named parameters into a "Master" job, and depending on 2 parameters, the job runs against an alternate database/host.
In every KTR/KJB we use the variables ${host} and ${dbname}, and these parameters are passed in by each BAT file. So when we had to change the names of the hosts and databases, it was a simple Replace All text match in Notepad++, and done: 2,000+ BAT files fixed, and no downtime.
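For illustration, each BAT file looks something like this (the paths, host, and job names are invented):

@echo off
REM pass the target host and database into the master job as named parameters
call kitchen.bat /file:"C:\etl\jobs\master.kjb" "/param:host=db-server-01" "/param:dbname=Finance3000"

Inside the KTR/KJB, the connection then references ${host} and ${dbname}, e.g. a JDBC URL like jdbc:mysql://${host}/${dbname}.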
Having a variable for the host/DB name for both the client connection and the logging connection gives you that flexibility when things change radically.
You can also use the kettle.properties file for the logging connection.
I have an Angular constant which defines a web service endpoint:
angular.module('myModule').constant('mywebservice_url', 'http://192.168.1.100')
The problem is that I have one endpoint for dev, while staging and production each have different ones. Every time I check in to git I have to manually reset this file.
Is there any way git can permanently ignore this file but still check it out on clone or checkout?
Is there any way I can make Angular pick up the file dynamically from something like an environment variable?
NOTE: I don't want to depend on the server to do this, i.e. I don't want to use Apache SSI or any of those technologies, as that would work for only one set of servers.
Delay the injection via backend processing. I usually create a global object on the HTML page called pageSettings, whose values (environment variables, etc.) are injected from the backend, and just pass that global pageSettings object into the Angular constant or value.
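For example (a sketch; pageSettings and its shape are made up), the backend renders something like this into the page:

<script>
  // values injected server-side, e.g. from environment variables
  window.pageSettings = { apiUrl: 'http://192.168.1.100' };
</script>

and the Angular side just reads it:

angular.module('myModule').constant('mywebservice_url', window.pageSettings.apiUrl);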
Build-system injection. If you don't have a backend (i.e. a pure SPA), you can put this inside your build system: create separate tasks for building the different environments in gulp or grunt and replace the value during the build process.
E.g. in your app init code:
var x = location.hostname;
Then define two different constants: one based on the domain name of your development environment and one for your production environment.
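A minimal sketch of that idea (the hostnames and URLs are placeholders):

// pick the endpoint based on where the app is served from
var host = location.hostname;
var apiUrl = (host === 'localhost' || host === '127.0.0.1')
  ? 'http://192.168.1.100'       // development endpoint
  : 'https://api.example.com';   // production endpoint

angular.module('myModule').constant('mywebservice_url', apiUrl);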
Say I have a database called "awesome" which is located on a live server and at the same time duplicated on a staging server for testing. My web app is based on Play 2.1.1 using Scala.
So I have these datasources defined in my application.conf file:
db.awesome-test.driver= com.mysql.jdbc.Driver
db.awesome-test.url="jdbc:mysql://127.0.1.1/awesome"
db.awesome-test.user=mr_awesome_tester
db.awesome-test.password=justtesting
db.awesome-live.driver= com.mysql.jdbc.Driver
db.awesome-live.url="jdbc:mysql://127.0.0.1/awesome"
db.awesome-live.user=mr_awesome
db.awesome-live.password=omgthisisawesome
Depending on what environment I am on, I would like to use either DB.withConnection("awesome-test") or DB.withConnection("awesome-live"). I am controlling this via another value in my config; e.g. I put environment=awesome-live in there and then get the respective connection string via Play.configuration.
Now, the problem is that Play apparently attempts to create a DB connection to each datasource defined in the config right away. A) This fails depending on which environment I am on; e.g. on the staging machine I will get something like this (the pic is only a mock-up, of course) because the live DB is not reachable:
...although it is completely unnecessary to try to connect to that DB, because it will never be used in this environment. B) Even if the connection worked, it would of course not be feasible to create two connections (live and testing) when only one of the two is ever needed.
Is there a way to tell Play to defer/postpone creation of the DB connection until it is actually needed (e.g. when DB.getConnection("...") or DB.withConnection("...") or something is called for that datasource)?
I am thinking something like db.awesome-live.deferCreation=true.
Cheers, Alex
I'd say that you have two ways of doing this.
Everything is explained in the Play! documentation: Additional configuration
Specifying alternative configuration file
test.conf
db.awesome.driver= com.mysql.jdbc.Driver
db.awesome.url="jdbc:mysql://127.0.1.1/awesome"
db.awesome.user=mr_awesome_tester
db.awesome.password=justtesting
live.conf
db.awesome.driver= com.mysql.jdbc.Driver
db.awesome.url="jdbc:mysql://127.0.0.1/awesome"
db.awesome.user=mr_awesome
db.awesome.password=omgthisisawesome
In code you always use DB.withConnection("awesome").
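In code that might look like this (a sketch; the query is made up):

// the "awesome" datasource resolves to whichever database
// the loaded config file points at
import play.api.db.DB
import play.api.Play.current

DB.withConnection("awesome") { implicit connection =>
  val stmt = connection.createStatement()
  stmt.executeQuery("SELECT 1")
}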
Start the application with
$ start -Dconfig.resource=test.conf
or
$ start -Dconfig.resource=live.conf
Overriding specific configuration keys
In your case that means:
$ start -Ddb.awesome-live.deferCreation=true
I'd like my Play app to use different databases for test, local and production (production is Heroku) environments.
In application.conf I have:
db.default.driver=org.postgresql.Driver
%dev.db.default.url="jdbc:postgresql://localhost/foobar"
%test.db.default.url="jdbc:postgresql://localhost/foobar-test"
%prod.db.default.url=${DATABASE_URL}
This doesn't seem to work. When I run play test or play run,
all DB access fails with:
Configuration error [Missing configuration [db.default.url]] (Configuration.scala:258)
I have a few questions about this:
In general, I'm a little confused about how databases are configured in Play: it looks like there's plain db, db.[DBNAME], and db.[DBNAME].url, and different tutorials make different choices among those. Certain expressions that seem like they should work (e.g. db.default.url = "jdbc:...") fail with an error that a string was provided where an object was expected.
I've seen other people suggest that I create separate prod.conf, dev.conf and test.conf files that each include application.conf and then contain DB-specific configuration. But in that case, how do I specify which database to use when I run test from the Play console?
Is the %env syntax supposed to work in Play 2?
What's the correct way to specify an environment for play test to use?
In Play 2 there aren't different config environments. Instead you just set or override the config parameters in the conf/application.conf file. One way to do it is on the play command line, like:
play -Ddb.default.driver=org.postgresql.Driver -Ddb.default.url=$DATABASE_URL ~run
You can also tell Play to use a different config file:
play -Dconfig.file=conf/prod.conf ~run
For an example Procfile for Heroku, see:
https://github.com/jamesward/play2bars/blob/scala-anorm/Procfile
More details in the Play Docs:
http://www.playframework.org/documentation/2.0/Configuration
At least in Play 2.1.1 it is possible to override configuration values with environment variables, if they are set. (For details see: http://www.playframework.com/documentation/2.1.1/ProductionConfiguration)
So you can set the following in your conf/application.conf:
db.default.url="jdbc:mysql://localhost:3306/my-db-name"
db.default.url=${?DATABASE_URL_DB}
By default it will use the JDBC URL defined, unless the environment variable DATABASE_URL_DB provides a value for it.
So you just set your development database in the configuration, and for production or staging you define the environment variable.
But beware: this substitution does NOT work if you put your variable reference inside a quoted string:
db.default.url="jdbc:${?DATABASE_URL_DB}"
Instead, leave the section to be substituted unquoted, for example:
database_host = "localhost"
database_host = ${?ENV_DATABASE_HOST}
db.default.url="jdbc:mysql://"${?database_host}":3306/my-db-name"
In this example, localhost will be used by default if the environment variable ENV_DATABASE_HOST is not set. (For details see: https://www.playframework.com/documentation/2.5.x/ConfigFile#substitutions)
You can actually still use the Play 1.0 config-value naming method in Play 2: when you load config values, check Play.isTest, and prefix the properties you load with 'test.'. Here's a snippet:
def configPrefix = if (play.api.Play.isTest) "test." else ""
def configStr(path: String) =
  Play.configuration.getString(configPrefix + path) getOrElse
    die(s"Config value missing: $configPrefix$path") // die: helper that aborts with the given message
new RelDb(
  server = configStr("pgsql.server"),
  port = configStr("pgsql.port"),
  database = configStr("pgsql.database"),
  user = ...,
  password = ...)
And the related config snippet:
pgsql.server="192.168.0.123"
pgsql.port="5432"
pgsql.database="prod"
...
test.pgsql.server="192.168.0.123"
test.pgsql.port="5432"
test.pgsql.database="test"
...
Now you don't need to remember setting any system properties when you run your e2e test suite, and you won't accidentally connect to the prod database.
Optionally, you could place the test.-prefixed values in a separate file and include it at the end of the main config file.
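That works because Play's config format (HOCON) supports includes, and later definitions override earlier ones; a sketch (the file name is made up):

# at the end of conf/application.conf
include "test-overrides.conf"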
There is another approach, which is to override the Global / GlobalSettings method onLoadConfig; from there you can set up the application configuration with a generic config plus environment-specific configuration, like below...
conf/application.conf --> configurations common for all environment
conf/dev/application.conf --> configurations for development environment
conf/test/application.conf --> configurations for testing environment
conf/prod/application.conf --> configurations for production environment
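A minimal sketch of such a Global override (Play 2.x; the folder layout matches the list above):

import java.io.File
import com.typesafe.config.ConfigFactory
import play.api._

object Global extends GlobalSettings {
  override def onLoadConfig(config: Configuration, path: File,
                            classloader: ClassLoader, mode: Mode.Mode): Configuration = {
    // layer the mode-specific file (conf/dev, conf/test or conf/prod) on top of the common one
    val modeFile = new File(path, "conf/" + mode.toString.toLowerCase + "/application.conf")
    config ++ Configuration(ConfigFactory.parseFile(modeFile))
  }
}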
You can check http://bit.ly/1AiZvX5 for my sample implementation.
Hope this helps.
Off-topic, but if you follow the 12-factor app methodology, having separate configuration files named after environments is bad:
Another aspect of config management is grouping. Sometimes apps batch config into named groups (often called “environments”) named after specific deploys, such as the development, test, and production environments in Rails. This method does not scale cleanly: as more deploys of the app are created, new environment names are necessary, such as staging or qa. As the project grows further, developers may add their own special environments like joes-staging, resulting in a combinatorial explosion of config which makes managing deploys of the app very brittle
source: http://12factor.net/config
We are starting a new WinForms project and decided to use TeamCity to create builds and run unit and integration tests. The project deals with a database. We have 3 databases: developDB (used by developers while developing =) ), testDB (used by TeamCity to run tests) and productionDB (used by the client). TeamCity has 3 build configurations: the first is triggered when a commit happens, the second is triggered every night to run integration tests, and the third is triggered by a developer when we want to make a release. So I want TeamCity to be able to change the connection string depending on which kind of build is running. Also, I don't want to store the connection string in app.config (I don't want the client to know the user and password). What options are available to accomplish this?
Thanks in advance!
Updated
I use NHibernate and FluentNHibernate to connect to databases if it matters.
In this situation, I would use TeamCity to run a NAnt script to perform the build.
NAnt allows you to modify config file values (such as your connection string) at build time.
An example of using TeamCity/NAnt to deploy to different staging environments can be found at this blog post:
http://thecodedecanter.wordpress.com/2010/03/25/one-click-website-deployment-using-teamcity-nant-git-and-powershell/
As @surfen suggests, the connection string values for each environment should be encrypted to prevent credentials from being stored in plain text.
I have not used TeamCity, but I have written multiple applications that dynamically change connection strings during the logon process (i.e. at runtime), and it's quite simple.
You didn't say how you connect to your database. Since you mention app.config, I suppose it is ADO.NET DataSets or a similar technology, which creates a read-only (getter-only) ConnectionString in your Settings.Designer.cs / app.config.
What I did was to create a setter method in Settings.cs (not Settings.Designer.cs) for the ConnectionString property, like this:
public void setNorthwindConnectionString(String value) {
    this["NorthwindConnectionString"] = value;
}
My generated DataSet then uses this NorthwindConnectionString for accessing data.
You can use preprocessor directives for conditional setup of your ConnectionString:
#if DEBUG
Console.WriteLine("Mode=Debug");
Settings.Default.setNorthwindConnectionString("(DebugDBConnectionString)");
#else
Console.WriteLine("Mode=Release");
Settings.Default.setNorthwindConnectionString("(ReleaseDBConnectionString)");
#endif
You could also encrypt your connection strings and copy the right app.config during a post-build event.
I am assuming you are using MSBuild to build your projects in TeamCity. If that is the case, you can set conditional compilation symbols, whereby you can pass whatever symbols you need.
Once you have the symbols, you can do things like:
#if DEVBUILD
    // ... your connection string code here
#endif
#if INTBUILD
    // ... your connection string code here
#endif
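In TeamCity this typically means passing the symbol through to MSBuild; on the command line the equivalent would be something like (the solution and symbol names are illustrative):

msbuild MySolution.sln /p:Configuration=Release "/p:DefineConstants=INTBUILD"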
That's the answer to your first question.
As for the second part of your question, where you do not want to store the username & password in app.config:
Options:
Try integrated security; it will use your domain account.
If that option cannot be used, try keeping your connection string in a registry key (so it's not obvious) or in an environment variable.
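With integrated security, the connection string drops the credentials entirely, e.g.:

Server=myServerAddress;Database=myDataBase;Integrated Security=SSPI;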