The database for one of the applications I am working on has not yet been confirmed by the business.
The best guess is Oracle or DB2.
What I've heard is that the project will initially go live on DB2 v9 and then move to Oracle 11g.
We are using Spring 3.0.5, Hibernate 3.5, JPA 2 and JBoss 5 for this project.
So what are the best practices here going into the build phase and test phase?
Shall I build using DB2 first and worry about Oracle later (this doesn't sound right)?
Or shall I write using JPA (Hibernate) and then generate the database schema?
Or something else?
PS: I've no control over the choice of the DB, what and when, as these are strategic decisions made by people sitting in nice rooms getting fat cheques and big bonuses.
Thanks,
Adi
Obviously you lose access to database-specific features if you write your application to be database agnostic. The database is, apart from automatic optimizations done by JPA and Hibernate, reduced to its common features. You have to leave some settings on automatic and trust JPA/Hibernate to get them right, settings you could tune explicitly if you knew the database (e.g. id generator strategies).
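A minimal sketch of such a portable id mapping (the entity itself is invented, not from the question):

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;

    @Entity
    public class Customer {

        // AUTO lets the provider pick whatever the active dialect supports
        // (sequences on Oracle, identity columns or sequences on DB2),
        // instead of hard-coding a vendor-specific strategy.
        @Id
        @GeneratedValue(strategy = GenerationType.AUTO)
        private Long id;

        private String name;

        // getters and setters omitted
    }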
But it seems that the developer-facing features of the database are not relevant to the decision, so they can't be relevant to the application either. What other factors may influence the decision (price, personal relations, management tools, hardware requirements, existing knowledge and personnel) can only be speculated about.
So IMHO you don't have a choice: strictly avoid anything database specific. That includes letting JPA/Hibernate generate the schema (your second option); in this project setup you shouldn't tinker with the database manually.
Well... sadly there ARE some hidden traps in JPA/Hibernate development that make it database dependent (e.g. logarithmic functions are not mapped consistently), so you should run all your tests against all candidate databases from day one. As you write "Best guess is...", just grab whatever database is available and test against it. This should be easy to set up with the given stack.
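One way to do that without touching the test code is to feed the JDBC coordinates in from the build, so the CI job decides which vendor each run targets. A minimal sketch (the property names and the H2 fallback are my own assumptions):

    import java.sql.Connection;
    import java.sql.DriverManager;

    import org.junit.Assert;
    import org.junit.Test;

    public class CrossDatabaseSmokeTest {

        @Test
        public void connectsToWhateverDatabaseTheBuildProvides() throws Exception {
            // The build/CI job sets these to point at DB2, Oracle, etc.
            String url = System.getProperty("test.jdbc.url", "jdbc:h2:mem:test");
            String user = System.getProperty("test.jdbc.user", "sa");
            String password = System.getProperty("test.jdbc.password", "");

            Connection connection = DriverManager.getConnection(url, user, password);
            try {
                Assert.assertTrue(connection.isValid(5));
            } finally {
                connection.close();
            }
        }
    }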
And you should try to accelerate the decision about the database used, if possible.
Just "write using JPA (Hibernate)" develop it to be de database agnostic. Put all you business logic in java code not stored procedures.
If you are using Spring you don't need JBoss; you could use just Tomcat, which has about a quarter of the footprint and is much simpler, IMHO.
As for Spring vs JBoss: JBoss represents all that is bad, while Spring represents all that is good, in Java enterprise development.
We had this issue and had to migrate late in the project, leading to a lot of extra work, frustration and delays.
My advice is to define an abstraction layer. Go as far as having a data model without any database at all, say backed by in-memory tables or text files.
Then, when you have to switch to some database, you can optimize for it while staying free to continue application development against any already developed model. That way you don't delay the developers working on the app while one of them is tuning the DB2 layer. When everything is duly validated, the team can switch over to it.
I will disagree with the currently accepted answer's suggestion to avoid database-specific things. From a performance perspective, giving them up entirely would be a pity, and supporting both vendors is definitely doable.
JPA/Hibernate and also jOOQ can abstract over a lot of things, and if you're using the query builder APIs of either technology (criteria queries in JPA, or jOOQ for more advanced SQL), you can get very far in a vendor agnostic way without removing all the vendor-specific stuff. For example, you can easily create a vendor-specific predicate like this:
.where(oracle ? oracleCondition() : db2Condition())
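Fleshed out a little, that could look like the following sketch (the table, column and date predicates are invented; the point is that only the one condition branches on the dialect, while the rest of the query stays shared):

    import static org.jooq.impl.DSL.condition;
    import static org.jooq.impl.DSL.table;

    import org.jooq.Condition;
    import org.jooq.DSLContext;
    import org.jooq.SQLDialect;

    public class VendorSpecificPredicate {

        // Oracle and DB2 spell "logins in the last 30 days" differently;
        // the branch is confined to this one method.
        static Condition recentLoginCondition(SQLDialect dialect) {
            return dialect.family() == SQLDialect.ORACLE
                    ? condition("last_login > sysdate - 30")
                    : condition("last_login > current date - 30 days");
        }

        static void fetchRecentUsers(DSLContext ctx) {
            ctx.selectFrom(table("users"))
               .where(recentLoginCondition(ctx.dialect()))
               .fetch();
        }
    }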
What you should do from the very beginning of such a project, once you know you'll have to support both dialects, is run integration tests on both database products. For this, I recommend Testcontainers, which makes running such tests quite simple. If you later have to add support for another dialect, and you're using one of the above abstractions, you can simply add another Testcontainers configuration, check whether your application still works, tweak two or three things, and you're set.
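A minimal sketch of that setup, using JUnit 5 plus the Testcontainers DB2 and Oracle modules (the image tags are examples, and the smoke test stands in for real application queries):

    import java.sql.Connection;
    import java.sql.DriverManager;

    import org.junit.jupiter.api.Assertions;
    import org.junit.jupiter.api.Test;
    import org.testcontainers.containers.Db2Container;
    import org.testcontainers.containers.JdbcDatabaseContainer;
    import org.testcontainers.containers.OracleContainer;

    public class DualDialectIT {

        // The same assertions run against both vendors, so a dialect
        // regression fails the build instead of surprising you later.
        private void runAgainst(JdbcDatabaseContainer<?> db) throws Exception {
            db.start();
            try (Connection c = DriverManager.getConnection(
                    db.getJdbcUrl(), db.getUsername(), db.getPassword())) {
                // Replace with real schema setup and application queries.
                Assertions.assertTrue(c.isValid(5));
            } finally {
                db.stop();
            }
        }

        @Test
        void worksOnDb2() throws Exception {
            runAgainst(new Db2Container("ibmcom/db2:11.5.0.0a").acceptLicense());
        }

        @Test
        void worksOnOracle() throws Exception {
            runAgainst(new OracleContainer("gvenzl/oracle-xe:21-slim"));
        }
    }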
Disclaimer: I work for the company behind jOOQ.
I want to build a web-application similar to Google-Analytics, in which I collect statistics on my customers' end-users, and show my customers analysis based on that data.
Characteristics:
High scalability, handle very large volume
Compartmentalized - Queries always run on a single customer's data
Support analytical queries (drill-down, slices, etc.)
Due to the analytical needs, I'm considering using an OLAP/BI suite, but I'm not sure it's meant for this scale. Would a NoSQL database work? Would a simple RDBMS do?
This is what I am using at work in a production environment, and it works like a charm.
I coupled three things:
PostgreSQL + LucidDB + Mondrian (more generally, components of the whole Pentaho BI suite)
PostgreSQL: I am not going to describe PostgreSQL; it is a really strong open source RDBMS that will certainly let you do everything you need. I use it to store my operational data.
LucidDB: LucidDB is an open source column-store database. It is highly scalable and will give you a real gain in processing time compared to PostgreSQL when retrieving large amounts of data. It is not optimized for transaction processing but for intensive reads. This is my data warehouse database.
Mondrian: Mondrian is an open source R-OLAP engine. LucidDB makes it easy to connect the two together.
I would recommend you look at the whole Pentaho BI Suite; it's worth it, and you might want to use some of its other components.
Hope this helps,
There are two main architectures you could opt for at true web scale:
1. "BI" architecture
Event journaller (e.g. LWES Journaller) or immutable event store (e.g. HDFS) feeds
Analytics/column-store database (e.g. Greenplum, InfiniDB, LucidDB, Infobright) feeds
Business intelligence reporting tool (e.g. Microstrategy, Pentaho Business Analytics)
2. "NoSQL" architecture
(Optional) Event journaller or immutable event store feeds
NoSQL database (e.g. Cassandra, Riak, HBase) feeds
A custom analytics UI (e.g. using D3.js)
The immutable event store or journaller is there because in most cases you want to be batching your analytics events and doing bulk updates to your database (even with something like HDFS) - rather than doing an atomic write for every single page view etc.
For SnowPlow, our open-source analytics platform built on Hadoop and Hive, the event logs are all collected on S3 first before being batch loaded into Hive.
Note that the "NoSQL architecture" will involve a fair bit more development work. Remember that with either architecture, you can always shard by customer if the volumes grow truly epic (billions of rows per customer) - because there's no need (I'm guessing) for cross-customer analytics.
I'd say that putting OLAP analysis in place is always nice, and it then has great potential for sophisticated data analysis using MDX.
What do you mean by large volume?
Where is your customers' user information stored?
What kind of front-end and reporting are you going to use?
Cheers.
Disclaimer: I'll make some publicity for my own solution; have a look at www.icCube.com and contact me for more details.
I'm currently developing a small WPF application using a file database (SQLCe).
Since I'm near release of the product and have no experience with setup and deployment, I would like to hear your thoughts on this subject.
The application is small, and the updates I will make are minor database changes (such as altering tables or columns) and DLL updates.
I've tried playing around with ClickOnce deployment, but I don't understand how updates to a database should be handled.
On the other hand, a standard Setup and Deployment project feels rather complex for just a couple of database updates and DLL replacements.
Which one of these two "tools" would you recommend for my given scenario?
Are there any best practices or other tools that can ease the setup and deployment work?
Cheers!
Try NSIS (http://nsis.sourceforge.net/). It's a good tool and allows custom update programs to be written quite easily. It can handle all of the DLL replacements and is well suited to this type of deployment.
As for the database updates: if you're going to write scripts to update the database tables, you will need to consider how to connect to the local instance of the database to run them. For a more automated solution, you may want a small helper application that looks up the location of the database and executes the scripts when the NSIS installer runs.
Small overhead, with a lot of flexibility.
There has been some discussion on the SO community wiki about whether database objects should be version controlled. However, I haven't seen much discussion about the best-practices for creating a build-automation process for database objects.
This has been a contentious point of discussion for my team - particularly since developers and DBAs often have different goals, approaches, and concerns when evaluating the benefits and risks of an automation approach to database deployment.
I would like to hear some ideas from the SO community about what practices have been effective in the real world.
I realize that it is somewhat subjective which practices are really best, but I think a good dialogue about what works could be helpful to many folks.
Here are some of my teaser questions about areas of concern in this topic. These are not meant to be a definitive list - rather a starting point for people to help understand what I'm looking for.
Should both test and production environments be built from source control?
Should both be built using automation - or should production be built by copying objects from a stable, finalized test environment?
How do you deal with potential differences between test and production environments in deployment scripts?
How do you test that the deployment scripts will work as effectively against production as they do in test?
What types of objects should be version controlled?
Just code (procedures, packages, triggers, java, etc)?
Indexes?
Constraints?
Table Definitions?
Table Change Scripts? (e.g. ALTER scripts)
Everything?
Which types of objects shouldn't be version controlled?
Sequences?
Grants?
User Accounts?
How should database objects be organized in your SCM repository?
How do you deal with one-time things like conversion scripts or ALTER scripts?
How do you deal with retiring objects from the database?
Who should be responsible for promoting objects from development to test level?
How do you coordinate changes from multiple developers?
How do you deal with branching for database objects used by multiple systems?
What exceptions, if any, can reasonably be made to this process?
Security issues?
Data with de-identification concerns?
Scripts that can't be fully automated?
How can you make the process resilient and enforceable?
To developer error?
To unexpected environmental issues?
For disaster recovery?
How do you convince decision makers that the benefits of DB-SCM truly justify the cost?
Anecdotal evidence?
Industry research?
Industry best-practice recommendations?
Appeals to recognized authorities?
Cost/Benefit analysis?
Who should "own" database objects in this model?
Developers?
DBAs?
Data Analysts?
More than one?
Here are some answers to your questions:
Should both test and production environments be built from source control? YES
Should both be built using automation - or should production be built by copying objects from a stable, finalized test environment?
Automation for both. Do NOT copy data between the environments.
How do you deal with potential differences between test and production environments in deployment scripts?
Use templates, so that you actually produce a different set of scripts for each environment (e.g. references to external systems, linked databases, etc.).
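As a sketch of the templating idea (the placeholder syntax, file names and substituted value are mine, not a standard):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Map;

    public class ScriptTemplater {

        // One template, one generated script per environment: only the
        // substituted values differ, never the statements themselves.
        static void render(Path template, Path out, Map<String, String> values)
                throws IOException {
            String sql = Files.readString(template);
            for (Map.Entry<String, String> e : values.entrySet()) {
                sql = sql.replace("${" + e.getKey() + "}", e.getValue());
            }
            Files.writeString(out, sql);
        }

        public static void main(String[] args) throws IOException {
            // e.g. the template contains: CREATE SYNONYM billing FOR ${LINKED_DB}.billing;
            render(Path.of("deploy.sql.tpl"), Path.of("deploy_prod.sql"),
                    Map.of("LINKED_DB", "PRODDB01"));
        }
    }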
How do you test that the deployment scripts will work as effectively against production as they do in test?
You test them in a pre-production environment: test the deployment on an exact copy of the production environment (database and potentially other systems).
What types of objects should be version controlled?
Just code (procedures, packages, triggers, java, etc)?
Indexes?
Constraints?
Table Definitions?
Table Change Scripts? (e.g. ALTER scripts)
Everything?
Everything, and:
Do not forget static data (lookup lists etc.), so you do not need to copy ANY data between environments.
Keep only the current version of the database scripts (version controlled, of course), and
store ALTER scripts: one BIG script (or a directory of scripts named like 001_AlterXXX.sql, so that running them in natural sort order will upgrade from version A to B).
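A sketch of a runner for such a directory, assuming the naming convention above and connection details supplied from the build (it also assumes each file holds a script the driver will accept in one execute call; real runners split on statement delimiters):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class MigrationRunner {

        public static void main(String[] args) throws Exception {
            List<Path> scripts;
            try (Stream<Path> files = Files.list(Path.of("migrations"))) {
                // Zero-padded prefixes (001_, 002_, ...) make lexicographic
                // order equal to execution order.
                scripts = files.filter(p -> p.toString().endsWith(".sql"))
                               .sorted()
                               .collect(Collectors.toList());
            }

            try (Connection c = DriverManager.getConnection(
                    System.getProperty("jdbc.url"),
                    System.getProperty("jdbc.user"),
                    System.getProperty("jdbc.password"))) {
                for (Path script : scripts) {
                    try (Statement st = c.createStatement()) {
                        st.execute(Files.readString(script));
                    }
                }
            }
        }
    }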
Which types of objects shouldn't be version controlled?
Sequences?
Grants?
User Accounts?
See 2. If your users/roles (or technical user names) differ between environments, you can still script them using templates (see 1.).
How should database objects be organized in your SCM repository?
How do you deal with one-time things like conversion scripts or ALTER scripts?
see 2.
How do you deal with retiring objects from the database?
Delete them from the DB and remove them from the source control trunk/tip.
Who should be responsible for promoting objects from development to test level?
dev/test/release schedule
How do you coordinate changes from multiple developers?
Try NOT to create a separate database for each developer. You use source control, right? In that case, developers change the database and check in the scripts. To be completely safe, re-create the database from the scripts during the nightly build.
How do you deal with branching for database objects used by multiple systems?
Tough one: try to avoid it at all costs.
What exceptions, if any, can be reasonable made to this process?
Security issues?
Do not store passwords for test/prod. You may allow it for dev, especially if you have automated daily/nightly DB rebuilds.
Data with de-identification concerns?
Scripts that can't be fully automated?
Document them and store them with the release info/ALTER script.
How can you make the process resilient and enforceable?
To developer error?
Test with a daily build from scratch, and compare the results to the incremental upgrade (from version A to B using ALTER scripts). Compare both the resulting schema and the static data.
To unexpected environmental issues?
Use version control and backups.
Compare the PROD database schema to what you think it is, especially before deployment. A SuperDuperCool DBA may have fixed a bug that was never in your ticket system :)
For disaster recovery?
How do you convince decision makers that the benefits of DB-SCM truly justify the cost?
Anecdotal evidence?
Industry research?
Industry best-practice recommendations?
Appeals to recognized authorities?
Cost/Benefit analysis?
If developers and DBAs agree, you do not need to convince anyone, I think (unless you need money to buy software like DBGhost for MSSQL).
Who should "own" database objects in this model?
Developers?
DBAs?
Data Analysts?
More than one?
Usually DBAs approve the model (before check-in, or afterwards as part of code review). They definitely own performance-related objects. But in general the team owns it [and the employer, of course :)].
I treat SQL as source code when possible.
If I can write it in standards-compliant SQL, then it generally goes in a file in my source control, defining as much as possible, such as stored procedures and table CREATE statements.
I also include dummy data for testing in source control:
proj/sql/setup_db.sql
proj/sql/dummy_data.sql
proj/sql/mssql_specific.sql
proj/sql/mysql_specific.sql
And then I abstract out all my SQL queries so that I can build the entire project for MySQL, Oracle, MSSQL or anything else.
Build and test automation uses these build scripts, as they are as important as the app source, and tests everything from integrity through triggers, procedures and logging.
We use continuous integration via TeamCity. At each checkin to source control, the database and all the test data is re-built from scratch, then the code, then the unit tests are run against the code. If you're using a code-generation tool like CodeSmith, it can also be placed into your build process to generate your data access layer fresh with each build, making sure that all your layers "match up" and do not produce errors due to mismatched SP parameters or missing columns.
Each build has its own collection of SQL scripts that are stored in the $project\SQL\ directory in source control, assigned a numerical prefix and executed in order. That way, we're practicing our deployment procedure at every build.
Depending on the lookup table, most of our lookup values are also stored in scripts and run to make sure the configuration data is what we expect for, say, "reason_codes" or "country_codes". This way we can make a lookup data change in dev, test it out and then "promote" it through QA and production, instead of using a tool to modify lookup values in production, which can be dangerous for uptime.
We also create a set of "rollback" scripts that undo our database changes, in case a build to production goes screwy. You can test the rollback scripts by running them, then re-running the unit tests for the build one version below yours, after its deployment scripts run.
+1 for Liquibase:
LiquiBase is an open source (LGPL), database-independent library for tracking, managing and applying database changes. It is built on a simple premise: All database changes (structure and data) are stored in an XML-based descriptive manner and checked into source control.
The good point is that DML changes are stored semantically, not just as diffs, so you can track the purpose of the changes.
It can be combined with Git version control for better interaction. I'm going to configure our dev-prod environment to try it out.
You could also use Maven or Ant build systems for building production code from the scripts.
The minus is that LiquiBase doesn't integrate into widespread SQL IDEs, so you have to do basic operations yourself.
In addition, you could use DBUnit for DB testing; this tool allows data generation scripts to be used for testing your production environment, with cleanup afterwards.
IMHO:
Store DML in files so that you can version them.
Automate the schema build process from source control.
For testing purposes, a developer can use a local DB built from source control via the build system, loading test data with scripts or DBUnit scripts (from source control).
LiquiBase allows you to provide a "run sequence" of scripts to respect dependencies.
There should be a DBA team that checks the master branch with ALL changes before production use. I mean they check the trunk/branch from other DBAs before committing into the MASTER trunk, so that master is always consistent and production ready.
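For reference, LiquiBase can also be driven straight from Java, which fits nicely into such an automated build. A minimal sketch using its 3.x-era API (the changelog path and connection arguments are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;

    import liquibase.Liquibase;
    import liquibase.database.Database;
    import liquibase.database.DatabaseFactory;
    import liquibase.database.jvm.JdbcConnection;
    import liquibase.resource.ClassLoaderResourceAccessor;

    public class LiquibaseRunner {

        public static void main(String[] args) throws Exception {
            // args: jdbcUrl user password
            Connection c = DriverManager.getConnection(args[0], args[1], args[2]);
            Database database = DatabaseFactory.getInstance()
                    .findCorrectDatabaseImplementation(new JdbcConnection(c));

            // The changelog lives in source control next to the code.
            Liquibase liquibase = new Liquibase(
                    "db/changelog.xml", new ClassLoaderResourceAccessor(), database);
            liquibase.update("");   // apply all pending change sets
        }
    }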
We faced all the mentioned problems with code changes, merging and rewriting in our production billing database. This topic is great for discovering all that stuff.
By asking "teaser questions" you seem to be more interested in a discussion than someone's opinion of final answers. The active (>2500 members) mailing list agileDatabases has addressed many of these questions and is, in my experience, a sophisticated and civil forum for this kind of discussion.
I basically agree with every answer given by van. For more insight, my baseline for database management is the K. Scott Allen series (a must-read, IMHO, and Jeff's opinion too, it seems).
Database objects can always be rebuilt from scratch by launching a single SQL file (which can itself call other SQL files): Create.sql. This can include static data insertion (lookup lists...).
The SQL scripts are parameterized so that no environment-dependent and/or sensitive information is stored in plain files.
I use a custom batch file to launch Create.sql: Create.cmd. Its goal is mainly to check for prerequisites (tools, environment variables...) and send parameters to the SQL script. It can also bulk-load static data from CSV files for performance reasons.
Typically, system user credentials would be passed as a parameter to the Create.cmd file.
IMHO, dynamic data loading should require another step, depending on your environment. Developers will want to load their database with test, junk or no data at all, while at the other end production managers will want to load production data. I would consider storing test data in source control as well (to ease unit testing, for instance).
Once the first version of the database has been put into production, you will need not only build scripts (mainly for developers), but also upgrade scripts (based on the same principles):
There must be a way to retrieve the version from the database (I use a stored procedure, but a table would do as well).
Before releasing a new version, I create an Upgrade.sql file (that can call other ones) that allows upgrading version N-1 to version N (N being the version being released). I store this script under a folder named N-1.
I have a batch file that does the upgrade : Upgrade.cmd. It can retrieve the current version (CV) of the database via a simple SELECT statement, launch the Upgrade.sql script stored under the CV folder, and loop until no folder is found. This way, you can automatically upgrade from, say, N-3 to N.
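A sketch of that loop in code form (I use a version table here instead of the stored procedure; the table name and folder layout are illustrative):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class Upgrader {

        public static void upgrade(Connection c) throws Exception {
            while (true) {
                int current;
                try (Statement st = c.createStatement();
                     ResultSet rs = st.executeQuery(
                             "SELECT version FROM schema_version")) {
                    rs.next();
                    current = rs.getInt(1);
                }

                // As described above, the folder is named after the
                // version it upgrades from.
                Path script = Path.of(String.valueOf(current), "Upgrade.sql");
                if (!Files.exists(script)) {
                    break;   // no folder for the current version: up to date
                }
                try (Statement st = c.createStatement()) {
                    // Upgrade.sql is expected to bump schema_version itself,
                    // otherwise this loop would never terminate.
                    st.execute(Files.readString(script));
                }
            }
        }
    }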
Problems with this are:
It is difficult to automatically compare database schemas, depending on database vendors. This can lead to incomplete upgrade scripts.
Every change to the production environment (usually by DBAs for performance tuning) should find its way to the source control as well. To make sure of this, it is usually possible to log every modification to the database via a trigger. This log is reset after every upgrade.
Ideally, though, DBA-initiated changes should be part of the release/upgrade process when possible.
As to what kinds of database objects you want under source control? Well, I would say as much as possible, but not more ;-) If you want to create users with passwords, give them a default password (login/login, practical for unit testing purposes), and make the password change a manual operation. This happens a lot with Oracle, where schemas are also users...
We have our Silverlight project with an MSSQL database in Git version control. The easiest way is to make sure you've got a slimmed-down database (content-wise) and do a complete dump from e.g. Visual Studio. Then you can run 'sqlcmd' from your build script to recreate the database on each dev machine.
For deployment this is not possible, since the databases are too large: that's the main reason for having the data in a database in the first place.
I strongly believe that a DB should be part of source control, and to a large degree part of the build process. If it is in source control, then I have the same coding safeguards when writing a stored procedure in SQL as I do when writing a class in C#. I do this by including a DB scripts directory under my source tree.

This script directory doesn't necessarily have one file per object in the database. That would be a pain in the butt! I develop in my DB just as I would in my code project. Then, when I am ready to check in, I do a diff between the last version of my database and the current one I am working on. I use SQL Compare for this, and it generates a script of all the changes. This script is then saved to my db_update directory with a specific naming convention, 1234_TasksCompletedInThisIteration, where the number is the next number in the set of scripts already there and the name describes what is being done in this check-in.

I do it this way because, as part of my build process, I start with a fresh database that is then built up programmatically using the scripts in this directory. I wrote a custom NAnt task that iterates through each script, executing its contents on the bare DB. Obviously, if I need some data to go into the DB, then I have data insert scripts too.

This has many benefits. One, all of my stuff is versioned. Two, each build is a fresh build, which means that there won't be any sneaky stuff making its way into my development process (such as dirty data that causes oddities in the system). Three, when a new guy is added to the dev team, he simply needs to get the latest and his local dev environment is built for him on the fly. Four, I can run test cases (I didn't call them "unit tests"!) against my database, as the state of the database is reset with each build (meaning I can test my repositories without worrying about adding test data to the DB).
This is not for everyone, and not for every project. I usually work on greenfield projects, which allows me this convenience!
Rather than get into ivory tower arguments, here's a solution that has worked very well for me on real-world problems.
Building a database from scratch can be summarised as managing SQL scripts.
dbdeploy is a tool that will check the current state of a database - e.g. which scripts have previously been run against it, which scripts are available to be run, and therefore which scripts need to be run.
It will then collate all the needed scripts together, run them, and record which scripts have been run.
It's not the prettiest tool or the most complex, but with careful management it can work very well. It's open source and easily extensible. Once the running of the scripts is handled nicely, adding extra components, such as a shell script that checks out the latest scripts and runs dbdeploy against a particular instance, is easily achieved.
See a good introduction here:
http://code.google.com/p/dbdeploy/wiki/GettingStarted
You might find that Liquibase handles a lot of what you're looking for.
Every developer should have their own local database, and use source code control to publish to the team. My solution is here : http://dbsourcetools.codeplex.com/
Have fun,
- Nathan