Defer connection to datasources defined in application.conf - database

Say I have a database called "awesome" which is located on a live server and at the same time duplicated on a staging server for testing. My web app is based on Play 2.1.1 using Scala.
So I have these datasources defined in my application.conf file:
db.awesome-test.driver=com.mysql.jdbc.Driver
db.awesome-test.url="jdbc:mysql://127.0.1.1/awesome"
db.awesome-test.user=mr_awesome_tester
db.awesome-test.password=justtesting

db.awesome-live.driver=com.mysql.jdbc.Driver
db.awesome-live.url="jdbc:mysql://127.0.0.1/awesome"
db.awesome-live.user=mr_awesome
db.awesome-live.password=omgthisisawesome
Depending on what environment I am on, I would like to use either DB.withConnection("awesome-test") or DB.withConnection("awesome-live"). I am controlling this via another value in my config: e.g. I put environment=awesome-live in there and then get the respective datasource name via Play.configuration. A sketch of this follows.
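For illustration, a minimal sketch of that lookup for Play 2.1 (the helper name and the fallback default are my own, not from the actual app):

import java.sql.Connection
import play.api.Play.current
import play.api.db.DB

// Hypothetical helper: picks the datasource named by the `environment`
// key in application.conf and runs the block on a connection to it.
def withEnvConnection[A](block: Connection => A): A = {
  val env = current.configuration.getString("environment").getOrElse("awesome-test")
  DB.withConnection(env)(block)
}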
Now, the problem is that Play apparently attempts to create a DB connection to every datasource defined in the config right away. A) This fails depending on which environment I am on: e.g. on the staging machine, startup fails with a connection error for the live datasource (my original post included a mocked-up screenshot of the error here), although it is completely unnecessary to try to connect to that DB, because it will never be used in this environment. B) Even if the connection worked, it would of course not be feasible to create two connections (live and testing) when only one of the two is ever needed.
Is there a way to tell Play to defer/postpone creation of the DB connection until it is actually needed (e.g. when DB.getConnection("...") or DB.withConnection("...") is called for that datasource)?
I am thinking something like db.awesome-live.deferCreation=true.
Cheers, Alex

I'd say that you have two ways of doing this.
Everything is explained in the Play! documentation: Additional configuration
Specifying alternative configuration file
test.conf
db.awesome.driver=com.mysql.jdbc.Driver
db.awesome.url="jdbc:mysql://127.0.1.1/awesome"
db.awesome.user=mr_awesome_tester
db.awesome.password=justtesting
live.conf
db.awesome.driver=com.mysql.jdbc.Driver
db.awesome.url="jdbc:mysql://127.0.0.1/awesome"
db.awesome.user=mr_awesome
db.awesome.password=omgthisisawesome
In code you always use DB.withConnection("awesome").
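For example (a minimal sketch; the users table is a placeholder):

import play.api.Play.current
import play.api.db.DB

// The code never mentions the environment; only the loaded .conf file
// decides which physical database "awesome" points at.
val count = DB.withConnection("awesome") { conn =>
  val rs = conn.createStatement().executeQuery("SELECT COUNT(*) FROM users")
  rs.next()
  rs.getLong(1)
}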
Start the application with
$ start -Dconfig.resource=test.conf
or
$ start -Dconfig.resource=live.conf
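If the configuration file lives outside the classpath, Typesafe Config (which Play 2.1 uses) also accepts a filesystem path; this is standard Typesafe Config behavior rather than anything specific to this answer:

$ start -Dconfig.file=/path/to/live.conf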
Overriding specific configuration keys
In your case that means:
$ start -Ddb.awesome-live.deferCreation=true

Related

connect to a database server without exposing a network share

This is quite simple to do in many databases, but I have not yet found a way to achieve it with Advantage in Server mode over the network.
Assume 2 PCs:
SERVER: running Advantage Database Server, and contains A database
CLIENT: contains a simple application, or even just Advantage Architect.
If the folder containing this database is shared via the OS (a network share with read/write permissions), then establishing a connection is straightforward.
I am however, precisely trying to avoid exposing a network share.
In Firebird, for example, this can be done using connection path:
SYSDBA@SERVER:C:\SomePrivateFolder\myapp.FDB
Isn't this the reason for exposing a port for the database (6262)?
What's interesting is that they offer something called "internet" connection. I highly doubt they would require a network share over the internet to access the database.
So, is this doable, and if so, would love a hint.
Thanks!
Edit:
Following the answer below, I am adding more details.
SERVER contains 2 folders, each one with its ADV Dictionary:
C:\Data\mydata.add (not a shared folder)
C:\DataShared\mydata.add (shared folder)
I am able to connect to the second one using the connect path \\SERVER:6262\DataShared\mydata.add
To connect to the first one I've tried:
\\SERVER:6262\C:\Data\mydata.add
\\SERVER:6262\Data\mydata.add
\\SERVER:6262:C:\Data\mydata.add
none of which worked.
Note that I am not calling the API function directly, but using the Delphi ADS components, which certainly call that same AdsConnect60 internally.
I am definitely connecting as Remote (I have the ADS server launched on SERVER). For the other parameters, I am using TCP/IP as the communication type, and the default ADSSYS user with a blank password.
With this setup in mind, what would the path be to connect to C:\Data\mydata.add on \\SERVER?
Thanks again
No need to expose your database on a shared folder. You'd only do that when using the LOCAL connection type. When using INTERNET or REMOTE, simply connect using the API function AdsConnect60(). Look it up in the help file.
UNSIGNED32 AdsConnect60( UNSIGNED8  *pucConnectPath,
                         UNSIGNED16  usServerTypes,
                         UNSIGNED8  *pucUserName,
                         UNSIGNED8  *pucPassword,
                         UNSIGNED32  ulOptions,
                         ADSHANDLE  *phConnect );
Furthermore, you can hide the path where your data resides by using a server-side alias. Look it up in the help files. It is quite simple.
To simplify things, do this:
Run the ADS Server Configuration Utility, go to the "Configuration Utility" tab and, inside that, the "File Locations" tab. Write down the Error and Assert Log Path; let's assume it is c:\. Let's also assume the server is 192.168.1.1.
Now create a file named AdsServer.ini in that path (c:\) with a [ServerAliases] section and a line adsdata=c:\data. Then use the API function AdsConnect60 like this:
AdsConnect60( "\\192.168.1.1\Adsdata\Mydata.add", ADS_REMOTE_SERVER, "adssys", "password", ADS_DEFAULT, &hConn );
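Spelled out, that AdsServer.ini contains just the alias mapping from the step above:

[ServerAliases]
adsdata=c:\data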
If you are working from Delphi or some other language, make sure you check out the classes that are pre-built wrappers around the API.
It is all really, really well documented: http://devzone.advantagedatabase.com/dz/WebHelp/Advantage11.1/index.html?ace_adsconnect60.htm

SSIS - best practices for connection managers -- compose out of parameters?

I've worked a lot with Pentaho PDI so some obvious things jump out at me.
I'll call Connection Managers "CMs" from here on out.
Obviously, Project CMs > Package CMs for extensibility/reusability. It seems a rare case indeed where you need a package-level CM.
But I'm wondering about another best practice: should each Project CM itself be composed of variables (or parameters, I guess)?
Let's talk in concrete terms. There are specific database sources. Let's call two of them in use Finance2000 and ETL_Log_db. These have specific connection strings (password, source, etc).
Now if you have 50 packages pulling from Finance2000 and also using ETL_Log_db ... well ... what happens if the databases change? (host, name, user, password?)
Say it's now Finance3000.
Well I guess you can go into Finance2000 and change the source, specs, and even the name itself --- everything should work then, right?
Or should you simply build a project-level connection manager called "FinanceX" (or whatever) and compose it from parameters, so the connection string is something like @Source + @Credentials + @Whatever?
Or is that simply redundant?
I can see one benefit of the parameter method: you can change the "logging database" on the fly, even within the package itself during execution, instead of passing parameters merely at runtime. I think. I don't know. I don't have a mountain of experience with SSIS yet.
SSIS, starting from version 2012, has the SSIS Catalog DB. You can create all your 50 packages in one Project, and all these packages share the same Project Connection Managers.
Then you deploy this Project into the SSIS Catalog; the Project automatically exposes Connection Manager parameters with a CM prefix. The CM parameters are part of the Connection Manager definition.
In the SSIS Catalog you can create so-called Environments. In an Environment you define variables with a name and datatype, and store their values.
Then - the most interesting part - you can associate the Environment with the uploaded Project. This allows you to bind a project parameter to an environment variable.
At package execution you specify which Environment to use when resolving connection strings. Yes, you can have several Environments in the Catalog, and you choose one when starting the package.
Cool, isn't it?
Moreover, passwords are stored encrypted, so no one can copy them. The values of these Environment variables can be configured by support engineers who have no knowledge of the SSIS packages.
More info on the SSIS Catalog and Environments is in the MS docs.
I'll give my fair share of experience.
I recently had a similar experience at work: our 2 main databases' names changed, and I had no issues or downtime on the schedules.
The model we use is not the best, but for this, and for other reasons, it is quite comfortable to work with. We use BAT files to pass named parameters into a "Master" job, and basically, depending on 2 parameters, the job runs against an alternate database/host.
Concretely, in every KTR/KJB we use the variables ${host} and ${dbname}, and these parameters are passed in by each BAT file (a sample invocation is sketched below). So when we had to change the names of the hosts and databases, it was a simple Replace-All text match in Notepad++, and done: 2,000+ BAT files fixed, and no downtime.
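For illustration, one such BAT line might look like the following (the paths, job name, and parameter values here are hypothetical; kitchen.bat is PDI's command-line job runner):

kitchen.bat /file:C:\etl\master.kjb "/param:host=db-prod-01" "/param:dbname=Finance3000"

Inside the jobs and transformations, the connections then reference ${host} and ${dbname}.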
Having a variable for the host/DB name for both the client connection and the logging connection gives you that flexibility when things change radically.
You can also use the kettle.properties file for the logging connection.

How to determine at runtime if I am connected to production database?

OK, so I did the dumb thing and released production code (C#, VS2010) that targeted our development database (SQL Server 2008 R2). Luckily we are not using the production database yet so I didn't have the pain of trying to recover and synchronize everything...
But, I want to prevent this from happening again when it could be much more painful. My idea is to add a table I can query at startup and determine what database I am connected to by the value returned. Production would return "PROD" and dev and test would return other values, for example.
If it makes any difference, the application talks to a WCF service to access the database so I have endpoints in the config file, not actual connection strings.
Does this make sense? How have others addressed this problem?
Thanks,
Dave
The easiest way to solve this is to not have access to production accounts. Those are stored in the machine.config file for our .NET applications. In non-.NET applications this is easily duplicated by having a config file in a common location, or (dare I say) a registry entry which holds the account information.
Most of our servers are accessed through aliases too, so no one really needs to change the connection string from environment to environment. Just grab the user from the config, and the server alias in the hosts file points you to the correct server. This also removes the headache of having to update all our config files when we switch DB instances (change hardware, etc.).
So even with the ClickOnce deployment and the endpoints, you can publish a new endpoint URI in a machine config on the end user's desktop (I'm assuming this is an internal application), and then reference that in the code.
If you absolutely can't do this, as it might be a lot of work (the last place I worked had 2000 call center people, so this push was a lot more difficult, but still possible), you can always have an automated build server set up which modifies the app.config file for you as the last step of building the application. You then ALWAYS publish the compiled code from the automated build server. Never make the app.config change for something like this a manual step in the developer's process; that will always lead to problems at some point.
Now if none of this works, your final option (I've done this one too), which I hated, but it worked, is to look the value up off a mapped drive. Essentially, everyone in the company has a mapped drive, say R:. This is where you have your production configuration files etc. The prod account people map that drive to one location with the production values, and the devs etc. map it to another with the development values. I hate this option compared to the others, but it works, and it can save you in a pinch when the others become tedious and difficult (due to, say, office politics, or setting up a build server).
I'm assuming your production server has a different name than your development server, so you could simply SELECT @@SERVERNAME AS ServerName.
Not sure if this answer helps you in your assumed .NET environment, but within a *nix/PHP environment, this is how I handle the same situation.
OK, so I did the dumb thing and released production code
There are times when some app behavior is environment dependent, as you alluded to. In order to provide the ability to check between development and production environments, I added the following line to the global /etc/profile.d/custom.sh config (CentOS):
export SERVICE_ENV=dev
And in code I have a wrapper method which grabs an environment variable by name and localizes its value, making it accessible to my application code. Below is a snippet demonstrating how to check the current environment and react accordingly (in PHP):
public function __call($method, $params)
{
    // Reduce chatter on production envs:
    // only display debug messages if an override told us to
    if (($method === 'debug') &&
        (CoreLib_Api_Environment_Package::getValue(CoreLib_Api_Environment::VAR_LABEL_SERVICE) === CoreLib_Api_Environment::PROD) &&
        (!in_array(CoreLib_Api_Log::DEBUG_ON_PROD_OVERRIDE, $params))) {
        return;
    }
}
Remember, you don't want to pepper your application logic with environment checks, save for a few extreme use cases as demonstrated in the snippet. Rather, you should be controlling access to your production databases using DNS. For example, within your development environment the db hostname mydatabase-db would resolve to a local server instead of your actual production server. When you push your code to the production environment, DNS will resolve the hostname correctly, so your code should "just work" without any environment checks. A sketch of the idea follows.
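For instance, on a development box this can be as simple as a hosts entry (mydatabase-db is the placeholder hostname from the paragraph above; in production, real DNS resolves it to the production server instead):

# /etc/hosts on a development machine
127.0.0.1    mydatabase-db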
After hours of wading through textbooks and tutorials on MSBuild and app.config manipulation, I stumbled across something called SlowCheetah - XML Transforms (http://visualstudiogallery.msdn.microsoft.com/69023d00-a4f9-4a34-a6cd-7e854ba318b5) that did what I needed it to do less than an hour after I first found it. Definitely recommended! From the article:
This package enables you to transform your app.config or any other XML file based on the build configuration. It also adds additional tooling to help you create XML transforms.
This package was created by Sayed Ibrahim Hashimi, Chuck England and Bill Heibert, the same Hashimi who authored THE book on MSBuild. If you're looking for a simple, ubiquitous way to transform your app.config, web.config or any other XML file based on the build configuration, look no further: this VS package will do the job.
Yeah, I know I answered my own question, but I already gave points to the answer that eventually pointed me to the real answer. Now I need to go back and edit the question based on my new understanding of the problem...
Dave
I'm assuming your production server has a different IP address, so you can simply use
SELECT CONNECTIONPROPERTY('local_net_address') AS local_net_address

Winforms ConnectionString and TeamCity

We are starting a new WinForms project and decided to use TeamCity to create builds and run unit and integration tests. The project deals with a database. We have 3 databases: developDB (used by developers while developing =) ), testDB (used by TeamCity to run tests) and productionDB (used by the client). TeamCity has 3 build configurations. The first is triggered when a commit happens, the second is triggered every night to run integration tests, and the third is triggered by a developer when we want to make a release. So I want TeamCity to be able to change the connection string depending on which kind of build is happening. Also, I don't want to store the connection string in app.config (I don't want the client to know the user and password). What options are available to perform this task?
Thanks in advance!
Updated
I use NHibernate and FluentNHibernate to connect to databases if it matters.
In this situation, I would use TeamCity to run a nant script to perform the build.
NAnt allows you to modify config file values (such as your connection string) at build time.
An example of using TeamCity/NAnt to deploy to different staging environments can be found at this blog post:
http://thecodedecanter.wordpress.com/2010/03/25/one-click-website-deployment-using-teamcity-nant-git-and-powershell/
As @surfen suggests, the connection string values for each environment should be encrypted to prevent credentials from being stored in plain text.
I have not used TeamCity, but I have written multiple applications with dynamically changing connection strings during the logon process (i.e. at runtime), and it's quite simple.
You didn't say how you connect to your databases. Since you mention app.config, I suppose it is ADO.NET DataSets or a similar technology, which creates a read-only (getter) ConnectionString property in your Settings.Designer.cs / app.config.
What I did was create a setter method in Settings.cs (not Settings.Designer.cs) for the ConnectionString property, like this:
public void setNorthwindConnectionString(String value) {
    this["NorthwindConnectionString"] = value;
}
My generated DataSet then uses this NorthwindConnectionString for accessing data.
You can use preprocessor directives for conditional setup of your ConnectionString:
#if DEBUG
Console.WriteLine("Mode=Debug");
Settings.Default.setNorthwindConnectionString("(DebugDBConnectionString)");
#else
Console.WriteLine("Mode=Release");
Settings.Default.setNorthwindConnectionString("(ReleaseDBConnectionString)");
#endif
You could also encrypt your connection strings and copy the right app.config during a post-build event.
I am assuming you would be using MSBuild to build your projects in TeamCity. If that is the case, then you can send conditional compilation symbols, whereby you can pass whatever symbols you need.
Once you have the symbols, you can do things like:
#if DEVBUILD
    //.... Your Connection String Code here
#endif
#if INTBUILD
    //.... Your Connection String Code here
#endif
That's the answer to your first question.
Looking at the second part of your question, where you do not want to store the user name & password in the app.config:
Options:
Try integrated security; it will use your domain account.
If that option cannot be used, try keeping your connection string in a registry key (so that it's not obvious) or in an environment variable.

Why django checks whether settings.DATABASE_NAME db actually exists for running testcases?

I frequently run test cases for my Django project. But one fine day it occurred to me that Django actually checks for the existence of the settings.DATABASE_NAME db when running test cases.
Why is this so? All along I thought Django would take settings.DATABASE_NAME and create a test db called 'test_' + settings.DATABASE_NAME. Does it really need to check that a database named settings.DATABASE_NAME actually exists (in order to create the test db)? Ideally speaking, only the name should be checked, not the actual existence of the db, right?
I browsed through the Django source code and found that the "connection" which is used to create the test db is created using the DATABASE settings options. It should only be bothered about the settings' values, not whether they point at an existing database. Right?
Neat question... you know, this had never occurred to me. The short answer is that Django itself doesn't need to verify that DATABASE_NAME exists, but it does need to connect to the database server in order to create the test database. Most databases accept (and some require) the DATABASE_NAME in order to formulate the connection string; often this is because the database name to which you're connecting contributes to the permissions for your connection session.
Because the test database doesn't exist yet, django has to first connect using the normal settings.DATABASE_NAME in order to create the test database.
So, it works like this:
Django's test runner passes off to the backend-specific database handler
The backend-specific database handler has a function called create_test_db which uses the normal settings to connect to the database. It does this with a plain cursor = self.connection.cursor() call, which naturally uses the normal settings values, because that's all it knows to be in existence at this point.
Once connected to the database, the backend-specific handler issues a CREATE DATABASE command with the name of the new test database.
The backend-specific handler closes the connection, then returns to the test runner, which swaps the normal settings.DATABASE_NAME for the test database name.
The tests then run as normal. All subsequent calls to connection.cursor() use the normal settings module, but now that module holds the swapped-in test database name.
At the end, the test runner restores the old database name after calling the backend-specific handler's destroy_test_db function.
If you're interested, the relevant code for the main part is in django.db.backends.creation. Have a look at the _create_test_db function.
I suppose the Django designers could make exceptions on a db-by-db basis, since not every DB needs the current database name in the connection string, but that would require a bit of refactoring. Right now, the create_test_db function lives in one of the backend base classes, and most actual backend handlers don't override it, so there'd be a fair amount of code to push downstream and duplicate in each backend.
