How to set up different databases per environment in Play 2.0? - database

I'd like my Play app to use different databases for test, local and production (production is Heroku) environments.
In application.conf I have:
db.default.driver=org.postgresql.Driver
%dev.db.default.url="jdbc:postgresql://localhost/foobar"
%test.db.default.url="jdbc:postgresql://localhost/foobar-test"
%prod.db.default.url=${DATABASE_URL}
This doesn't seem to work. When I run play test or play run,
all DB access fails with:
Configuration error [Missing configuration [db.default.url]] (Configuration.scala:258)
I have a few questions about this:
In general, I'm a little confused about how databases are configured
in Play: it looks like there are plain db, db.[DBNAME] and db.[DBNAME].url keys, and different tutorials make different choices among them. Certain expressions that seem like they should work (e.g. db.default.url = "jdbc:...") fail with an error that a string was provided where an object was expected.
I've seen other people suggest that I create separate prod.conf, dev.conf and test.conf files that each include application.conf and then contain DB-specific configuration. But in that case, how do I specify what database to use when I run test from the Play console?
Is the %env syntax supposed to work in Play 2?
What's the correct way to specify an environment for play test to use?

In Play 2 there aren't different config environments. Instead you just set or override the config parameters in the conf/application.conf file. One way to do it is on the play command line, like:
play -Ddb.default.driver=org.postgresql.Driver -Ddb.default.url=$DATABASE_URL ~run
You can also tell Play to use a different config file:
play -Dconfig.file=conf/prod.conf ~run
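For instance, conf/prod.conf can be a small file that includes the shared settings and overrides only the database keys (a sketch, reusing the PostgreSQL setup from the question):
include "application.conf"
db.default.driver=org.postgresql.Driver
db.default.url=${DATABASE_URL}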
For an example Procfile for Heroku, see:
https://github.com/jamesward/play2bars/blob/scala-anorm/Procfile
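That Procfile essentially starts the staged app with the production settings passed as system properties, roughly along these lines (a sketch; the exact flags depend on your app):
web: target/start -Dhttp.port=${PORT} -Ddb.default.driver=org.postgresql.Driver -Ddb.default.url=${DATABASE_URL}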
More details in the Play Docs:
http://www.playframework.org/documentation/2.0/Configuration

At least in Play 2.1.1 it is possible to override configuration values with environment variables, if they are set. (For details see: http://www.playframework.com/documentation/2.1.1/ProductionConfiguration)
So you can set the following in your conf/application.conf:
db.default.url="jdbc:mysql://localhost:3306/my-db-name"
db.default.url=${?DATABASE_URL_DB}
By default it will use the JDBC URL defined in the file, unless the environment variable DATABASE_URL_DB supplies a value for it.
So you just set your development database in the configuration, and for production or staging you define the environment variable.
But beware, this substitution does NOT WORK if you put your variable reference inside quoted strings:
db.default.url="jdbc:${?DATABASE_URL_DB}"
Instead, leave the section to be substituted unquoted, for example:
database_host = "localhost"
database_host = ${?ENV_DATABASE_HOST}
db.default.url="jdbc:mysql://"${?database_host}":3306/my-db-name"
In this example, localhost will be used by default if the environment variable ENV_DATABASE_HOST is not set. (For details see: https://www.playframework.com/documentation/2.5.x/ConfigFile#substitutions)

You can actually still use the Play 1.0 config value naming scheme in Play 2 if, when you load config values, you check Play.isTest and prefix the property names with 'test.'. Here's a snippet:
def configPrefix = if (play.api.Play.isTest) "test." else ""

def configStr(path: String) =
  Play.configuration.getString(configPrefix + path) getOrElse
    die(s"Config value missing: $configPrefix$path")

new RelDb(
  server = configStr("pgsql.server"),
  port = configStr("pgsql.port"),
  database = configStr("pgsql.database"),
  user = ...,
  password = ...)
And the related config snippet:
pgsql.server="192.168.0.123"
pgsql.port="5432"
pgsql.database="prod"
...
test.pgsql.server="192.168.0.123"
test.pgsql.port="5432"
test.pgsql.database="test"
...
Now you don't need to remember setting any system properties when you run your e2e test suite, and you won't accidentally connect to the prod database.
You could optionally place the test.-prefixed values in a separate file, which you would then include at the end of the main config file, as sketched below.
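For example (a sketch; the file name test-overrides.conf is made up), the last line of conf/application.conf could be
include "test-overrides.conf"
and the test.-prefixed keys would then live in conf/test-overrides.conf.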

There is another approach: override the Global / GlobalSettings method onLoadConfig, and from there set up the application configuration by combining a generic config with an environment-specific one, like below...
conf/application.conf --> configurations common for all environment
conf/dev/application.conf --> configurations for development environment
conf/test/application.conf --> configurations for testing environment
conf/prod/application.conf --> configurations for production environment
You can check http://bit.ly/1AiZvX5 for my sample implementation.
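A minimal sketch of such an onLoadConfig override (not the exact implementation behind the link; it assumes the per-environment files sit at the paths listed above) could look like this:

import java.io.File
import com.typesafe.config.ConfigFactory
import play.api._

object Global extends GlobalSettings {
  override def onLoadConfig(config: Configuration, path: File,
                            classloader: ClassLoader, mode: Mode.Mode): Configuration = {
    // Load conf/<mode>/application.conf (e.g. conf/dev/application.conf)
    // and layer it on top of the shared conf/application.conf.
    val modeSpecific = Configuration(
      ConfigFactory.parseResources(s"${mode.toString.toLowerCase}/application.conf"))
    super.onLoadConfig(config ++ modeSpecific, path, classloader, mode)
  }
}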
Hope this helps.

Off-topic but if you follow 12-factor-app then having separate configurations named after environments is bad:
Another aspect of config management is grouping. Sometimes apps batch config into named groups (often called “environments”) named after specific deploys, such as the development, test, and production environments in Rails. This method does not scale cleanly: as more deploys of the app are created, new environment names are necessary, such as staging or qa. As the project grows further, developers may add their own special environments like joes-staging, resulting in a combinatorial explosion of config which makes managing deploys of the app very brittle.
source: http://12factor.net/config

Related

Different branding for same codebase

I'm in a project where we will create different sites using the same codebase.
I would like to have a brand style and config for each site which I specify somehow in my build process.
Anyone have an idea of the best way to achieve this?
I would treat the different sites in much the same way I'd treat different environments (dev, test, prod). If there aren't a lot of changes, just use environment variables on each server where the site will run that define which site it is. Your code can then conditionally do things (e.g. add a class site-x to the body for styling).
You can use something like dotenv to make setting environment vars easier (remember Windows does it differently to *NIX) if you're setting environments in a script. That way you're changing a file rather than actual environment variables when you want to test what a particular site looks like.
If there are many different config items that are different between sites then you can have multiple config files (config-site-one.js, config-site-two.js) and a central config.js file that returns the correct config based on some environment variable like MY_SITE_NAME.
However if you actually want to package up the site to 'send' somewhere (?) then you could run your build command with a flag like webpack blahblahblah --site=site-one.
You can use yargs to get that 'site' variable and use it in your build process however you like.

Angular Constant Best Practice

I have an angular constant which defines webservice end point
angular.module('myModule').constant('mywebservice_url', 'http://192.168.1.100')
The problem is that for dev I have a different endpoint, while staging and production each have their own. Every time I try to check in to git I have to manually reset this file.
Is there any way to make git permanently ignore this file but still check it out on clone or checkout?
Is there any way I can make Angular pick up the file dynamically from something like an environment variable?
NOTE: I don't want to depend on the server to do this, i.e. I don't want to use Apache SSI or any of those technologies, as it will only work with one set of servers.
Delaying the injection via backend processing: I usually just create a global object on the HTML page called pageSettings, into which values like this get injected from the backend (i.e. environment variables, etc.), and then pass that global pageSettings object into the Angular constant or value.
Build system injection: if you don't have a backend (i.e. a pure SPA), you can put this inside your build system, i.e. create multiple tasks for building the different environments in gulp or grunt and replace that value during the build process.
In e.g. your app init code:
var x = location.hostname;
Then define two different constants.
One based on the domain name of your development environment and one for your production environment.

Defer connection to datasources defined in application.conf

Say I have a database called "awesome" which is located on a live server and at the same time duplicated on a staging server for testing. My web app is based on Play 2.1.1 using Scala.
So I have these datasources defined in my application.conf file:
db.awesome-test.driver= com.mysql.jdbc.Driver
db.awesome-test.url="jdbc:mysql://127.0.1.1/awesome"
db.awesome-test.user=mr_awesome_tester
db.awesome-test.password=justtesting
db.awesome-live.driver= com.mysql.jdbc.Driver
db.awesome-live.url="jdbc:mysql://127.0.0.1/awesome"
db.awesome-live.user=mr_awesome
db.awesome-live.password=omgthisisawesome
Depending on what environment I am on, I would like to use either DB.withConnection("awesome-test") or DB.withConnection("awesome-live"). I am controlling this via another value in my config; so I e.g. put environment=awesome-live in there and then get the respective connection string via Play.configuration.
Now, the problem is that apparently Play attempts to create a DB connection to each datasource defined in the config right away. A) This fails depending on which environment I am on. E.g. on the staging machine I will get something like this (pic is only a mock-up of course) because the live DB is not reachable:
...although it is completely unnecessary to try to connect to that DB, because it will never be used in this environment. B) Even if the connection worked, it would not be feasible to create two connections (live and testing) when only one of the two is ever needed.
Is there a way to tell Play to defer/postpone creation of the DB connection until it is actually needed (e.g. when DB.getConnection("...") or DB.withConnection("...") or something is called for that datasource)?
I am thinking something like db.awesome-live.deferCreation=true.
Cheers, Alex
I'd say that you have two ways of doing this.
Everything is explained at the Play! Documentation: Additional configuration
Specifying alternative configuration file
test.conf
db.awesome.driver= com.mysql.jdbc.Driver
db.awesome.url="jdbc:mysql://127.0.1.1/awesome"
db.awesome.user=mr_awesome_tester
db.awesome.password=justtesting
live.conf
db.awesome.driver= com.mysql.jdbc.Driver
db.awesome.url="jdbc:mysql://127.0.0.1/awesome"
db.awesome.user=mr_awesome
db.awesome.password=omgthisisawesome
In code you always use DB.withConnection("awesome").
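For instance (a sketch; the users table and the query are made up), the calling code stays identical in every environment:

import play.api.Play.current
import play.api.db.DB

// The datasource name "awesome" never changes; the loaded config file
// decides which server it actually points to.
def countUsers(): Int = DB.withConnection("awesome") { conn =>
  val rs = conn.createStatement().executeQuery("SELECT COUNT(*) FROM users")
  rs.next()
  rs.getInt(1)
}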
Start the application with
$ start -Dconfig.resource=test.conf
or
$ start -Dconfig.resource=live.conf
Overriding specific configuration keys
In your case that means:
$ start -Ddb.awesome-live.deferCreation=true

How to determine at runtime if I am connected to production database?

OK, so I did the dumb thing and released production code (C#, VS2010) that targeted our development database (SQL Server 2008 R2). Luckily we are not using the production database yet so I didn't have the pain of trying to recover and synchronize everything...
But, I want to prevent this from happening again when it could be much more painful. My idea is to add a table I can query at startup and determine what database I am connected to by the value returned. Production would return "PROD" and dev and test would return other values, for example.
If it makes any difference, the application talks to a WCF service to access the database so I have endpoints in the config file, not actual connection strings.
Does this make sense? How have others addressed this problem?
Thanks,
Dave
The easiest way to solve this is to not have access to production accounts. Those are stored in the Machine.config file for our .net applications. In non-.net applications this is easily duplicated, by having a config file in a common location, or (dare I say) a registry entry which holds the account information.
Most of our servers are accessed through aliases too, so no one really needs to change the connection string from environment to environment. Just grab the user from the config, and the server alias in the hosts file points you to the correct server. This also removes the headache of having to update all our config files when we switch DB instances (change hardware, etc.).
So even with the ClickOnce deployment and the endpoints, you can publish a new endpoint URI in a machine config on the end user's desktop (I'm assuming this is an internal application), and then reference that in the code.
If you absolutely can't do this, as it might be a lot of work (the last place I worked had 2000 call center people, so this push was a lot more difficult, but still possible), you can always have an automated build server set up which modifies the app.config file for you as a last step of building the application. You then ALWAYS publish the compiled code from the automated build server. Never make changing the app.config for something like this a manual step in the developer's process; that will always lead to problems at some point.
Now if none of this works, your final option (done this one too), which I hated but which worked, is to look up the value off a mapped drive. Essentially, everyone in the company has a mapped drive to, say, R:. This is where you have your production configuration files etc. The prod account people map to one drive location with the production values, and the devs etc. map to another with the development values. I hate this option compared to the others, but it works, and it can save you in a pinch when the others become tedious and difficult (due to, say, office politics, setting up a build server, etc.).
I'm assuming your production server has a different name than your development server, so you could simply SELECT @@SERVERNAME AS ServerName.
Not sure if this answer helps you in an assumed .NET environment, but within a *nix/PHP environment, this is how I handle the same situation.
OK, so I did the dumb thing and released production code
There are times when some app behavior is environment dependent, as you alluded to. In order to provide the ability to check between development and production environments, I added the following line to the global /etc/profile/profile.d/custom.sh config (CentOS):
SERVICE_ENV=dev
And in code I have a wrapper method which will grab an environment variable by name and localize its value, making it accessible to my application code. Below is a snippet demonstrating how to check the current environment and react accordingly (in PHP):
public function __call($method, $params)
{
    // Reduce chatter on production envs
    // Only display debug messages if override told us to
    if (($method === 'debug') &&
        (CoreLib_Api_Environment_Package::getValue(CoreLib_Api_Environment::VAR_LABEL_SERVICE) === CoreLib_Api_Environment::PROD) &&
        (!in_array(CoreLib_Api_Log::DEBUG_ON_PROD_OVERRIDE, $params))) {
        return;
    }
}
Remember, you don't want to pepper your application logic with environment checks, save for a few extreme use cases as demonstrated in the snippet. Rather, you should be controlling access to your production databases using DNS. For example, within your development environment a db hostname like mydatabase-db would resolve to a local server instead of your actual production server. And when you push your code to the production environment, your DNS will correctly resolve the hostname, so your code should "just work" without any environment checks.
After hours of wading through textbooks and tutorials on MSBuild and app.config manipulation, I stumbled across something called SlowCheetah - XML Transforms http://visualstudiogallery.msdn.microsoft.com/69023d00-a4f9-4a34-a6cd-7e854ba318b5 that did what I needed it to do in less than an hour after first stumbling across it. Definitely recommended! From the article:
This package enables you to transform your app.config or any other XML file based on the build configuration. It also adds additional tooling to help you create XML transforms.
This package is created by Sayed Ibrahim Hashimi, Chuck England and Bill Heibert, the same Hashimi who authored THE book on MSBuild. If you're looking for a simple ubiquitous way to transform your app.config, web.config or any other XML file based on the build configuration, look no further -- this VS package will do the job.
Yeah I know I answered my own question but I already gave points to the answer that eventually pointed me to the real answer. Now I need to go back and edit the question based on my new understanding of the problem...
Dave
I'm assuming your production server has a different IP address. You can simply use
SELECT CONNECTIONPROPERTY('local_net_address') AS local_net_address

How does Google App Engine dev_appserver.py serve fresh content without restarting?

One of the things I like about Google App Engine development environment is, I don't have to restart the server every time I make changes to any python source files, other static files or even configuration files. It has spoilt me and I forget to restart the server when I am working with other server environments (tornadoweb, web.py, node.js).
Can anyone explain how GAE does that? How difficult is it to make other servers (at least Python-based ones) achieve the same thing?
You can view the source for dev_appserver.py (linked below). It looks like ModuleManager makes a copy of sys.modules and monitors each module to track changes based on modification time:
class ModuleManager(object):
"""Manages loaded modules in the runtime.
Responsible for monitoring and reporting about file modification times.
Modules can be loaded from source or precompiled byte-code files. When a
file has source code, the ModuleManager monitors the modification time of
the source file even if the module itself is loaded from byte-code.
"""
http://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/tools/dev_appserver.py#3636
Lots of web servers, like GAE, make use of Python's reload function to pick up code changes without restarting the server process:

import something

if is_changed(something):  # is_changed is a placeholder for e.g. a modification-time check
    something = reload(something)
Quote from python docs:
When reload(module) is executed:
Python modules’ code is recompiled and the module-level code reexecuted, defining a new set of objects which are bound to names in the module’s dictionary. The init function of extension modules is not called a second time.
As with all other objects in Python the old objects are only reclaimed after their reference counts drop to zero.
The names in the module namespace are updated to point to any new or changed objects.
Other references to the old objects (such as names external to the module) are not rebound to refer to the new objects and must be updated in each namespace where they occur if that is desired.
