Any suggestions on how a database versioning tool should work?

I am trying to build a tool that can help maintain database versions (much like maintaining source-code versions). The technology I am thinking of using is Spring/Hibernate, so that the tool can be web-based and used by multiple projects. The idea is that any database change can only be made through this tool, so that the version information is maintained and the database stays consistent. Operations like commit, rollback, branching, and merging should be possible. Can you suggest how I should approach this problem?
I have found an open-source tool called Liquibase which already provides some sort of solution for maintaining database versions. Here is a short preview of what this tool can do. But this tool has some limitations: it does not handle stored procedures and triggers, and it works on the basis of an XML file. Still, I think I can integrate this tool with my requirements and speed up development. If you know of any other tool that might be better than this, please let me know.
If possible, tell me how the tool should be organized so that different projects can easily maintain their database versions. What problems should the tool try to address, and what minimum support should it provide? What should the UI look like so that users can work with it easily?
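To make the Liquibase integration idea above concrete, here is a minimal sketch of driving Liquibase from Java code rather than only from the command line, which is roughly what a Spring/Hibernate web tool would do behind its UI. It assumes the Liquibase 3.x API (class and method names differ between versions), a MySQL connection, and a changelog at db/changelog.xml on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;

import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ClassLoaderResourceAccessor;

public class DbVersioner {
    public static void main(String[] args) throws Exception {
        // Plain JDBC connection; in a Spring/Hibernate web app this would
        // normally come from the configured DataSource instead.
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/appdb", "user", "password");

        // Wrap the JDBC connection in Liquibase's database abstraction.
        Database database = DatabaseFactory.getInstance()
                .findCorrectDatabaseImplementation(new JdbcConnection(conn));

        // "db/changelog.xml" is an assumed changelog path on the classpath.
        Liquibase liquibase = new Liquibase(
                "db/changelog.xml", new ClassLoaderResourceAccessor(), database);

        // Apply all pending change sets ("commit" a new database version).
        liquibase.update("");

        // Roll back the most recent change set if something goes wrong.
        // liquibase.rollback(1, "");

        database.close();
    }
}

Liquibase records what has been applied in its DATABASECHANGELOG table, so a web front end built around calls like these could expose per-project update, rollback, and tagging operations without inventing its own version bookkeeping.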

Related

Semantria Integration with DB

I need to know whether anyone has integrated a database with Semantria and written the output back to a database, Excel, or a text file.
I have tried to explore Semantria via Excel and the API, but the integration does not work perfectly.
It depends on what kind of integration you're looking for.
I have already done many integrations with different storage systems, including indexing services and RDBMS solutions.
Unfortunately there are no ready-to-use components available on the market, so you will need to build the integration on your own.
Semantria offers SDKs (https://github.com/Semantria/semantria-sdk) for all modern languages; you will need to build the logic that fetches the analysis results and saves them to your chosen storage.
Can you please explain what storage you use and what Semantria output you're interested in?
Thanks George.
Well, at the moment we are just focusing on pulling the data from a DB (for instance MySQL or Oracle), and the output should go back into the same DB; I will take care of any transformation needed in the output.
Where I am stuck is how to set up the link between the DB and Semantria, and how these SDKs will help; I have never worked on anything like this.
A brief explanation of this would be a great help.
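Not having wired Semantria to a relational database in this exact setup, here is only a rough sketch of the shape such an integration usually takes: read the unprocessed text rows over JDBC, push them through the vendor SDK, and write the results back to the same table. The SemantriaClient interface below is a hypothetical stand-in; the real calls (creating a session with your key/secret, queueing documents, polling for processed results) come from the SDK George linked above, and the table and column names are likewise made up.

import java.sql.*;
import java.util.List;

public class SemantriaDbBridge {

    /** Hypothetical thin wrapper around the Semantria SDK session. */
    interface SemantriaClient {
        void queueDocument(String id, String text);   // enqueue text for analysis
        List<String[]> fetchProcessed();              // poll for [docId, sentiment] pairs
    }

    public static void run(SemantriaClient client) throws SQLException {
        String url = "jdbc:mysql://localhost/reviews_db";  // assumed MySQL database
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {

            // 1. Pull unprocessed rows from the source table (assumed schema).
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT id, review_text FROM reviews WHERE sentiment IS NULL")) {
                while (rs.next()) {
                    client.queueDocument(rs.getString("id"), rs.getString("review_text"));
                }
            }

            // 2. Poll for processed documents and write the results back.
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE reviews SET sentiment = ? WHERE id = ?")) {
                for (String[] result : client.fetchProcessed()) {
                    ps.setString(1, result[1]);  // analysis result returned by the API
                    ps.setString(2, result[0]);  // document id queued above
                    ps.addBatch();
                }
                ps.executeBatch();
            }
        }
    }
}

The same skeleton works for Oracle by swapping the JDBC URL and driver; the only Semantria-specific part is the queue/poll pair, which is the part the SDK covers.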

How to back up all Salesforce metadata

I'm trying to figure out the best way to back up all of our Salesforce metadata in our full sandbox.
We've had a large team working on numerous areas of Salesforce (configuration and development) and we've promoted all that code to our full sandbox. Before moving to production, we want to back up all the metadata. We are not concerned about actual data. We just want to make sure we back up all the metadata in our full sandbox, then promote to our production instance and finally do a refresh of our full sandbox.
We thought about using a change set, but that would be horribly tedious and time-consuming, and would it even grab all the metadata?
Would creating an unmanaged package be an option? I've never done anything with packages, so I'm in the dark on that process. Would it be easy to grab all the metadata?
I've read about options using the ANT migration tool, which I have no experience with, and it seems to be a little tricky to set up and configure.
I use Eclipse regularly, but I don't believe Eclipse can grab all the metadata (approval processes, etc.)?
Any insight and help on solving this would be greatly appreciated.
Thank you.
I'd like to suggest using version control. It's the best way to manage all changes to your project, keep a history of changes, and support comfortable team work. I prefer Git, but you can select any other.
For managing changes that can't be retrieved via the Eclipse/Ant migration tool (or any other tool), I use a file named "NonMetaDataChanges" which stores all the configuration steps that should be performed on a fresh org before and after the metadata deployment. Usually these manual changes take no more than half an hour.
Also, I've just checked that approval processes can be retrieved via Eclipse.
Isn't the easiest way to create a metadata backup to create or refresh a sandbox? Other than custom settings, what will it miss? With the sandbox, change sets could be created to return production to a happier place.
The best path, I agree, is to put all changes under version control. But until all metadata can be extracted and re-imported, some kind of all-of-the-above approach must do.

Software packages to create graphs or charts from a database full of numbers?

I have a device which generates a bunch of statistics once per second. All of the statistics are stored in a PostgreSQL database on an Ubuntu server.
I'd like to create a web interface that prompts the user for a time range and which values to graph. I'm also thinking this kind of thing is common whenever people have databases full of numbers, so it must already exist. The problem is I don't know what terms to google to find the relevant software packages. So far, the only two I've found are php5-rrd and Carbon/Graphite.
The PHP5-RRD solution seems simple enough, though I'm worried I'll be needlessly reinventing the wheel. Can anyone recommend other similar software packages that can help generate a bunch of "live" charts or graphs with a web front end?
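For what it's worth, whichever package ends up rendering the charts, the server side of this kind of web interface usually boils down to a parameterised time-range query. Here is a minimal sketch in Java with JDBC against PostgreSQL, assuming a stats table with recorded_at and value columns (adjust the names, connection details, and output format to your schema and charting library):

import java.sql.*;

public class StatsQuery {
    public static void main(String[] args) throws SQLException {
        // Assumed connection details; replace with your server's settings.
        String url = "jdbc:postgresql://localhost:5432/statsdb";

        // The time range would normally come from the web form the user fills in.
        Timestamp from = Timestamp.valueOf("2014-01-01 00:00:00");
        Timestamp to   = Timestamp.valueOf("2014-01-02 00:00:00");

        String sql = "SELECT recorded_at, value FROM stats "
                   + "WHERE recorded_at BETWEEN ? AND ? ORDER BY recorded_at";

        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setTimestamp(1, from);
            ps.setTimestamp(2, to);
            try (ResultSet rs = ps.executeQuery()) {
                // Emit simple CSV here; a real endpoint would return JSON for
                // d3.js, Google Charts, or whichever library draws the graph.
                while (rs.next()) {
                    System.out.println(
                            rs.getTimestamp("recorded_at") + "," + rs.getDouble("value"));
                }
            }
        }
    }
}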
Try this d3.js tutorial. Depending on your needs it might solve your problem with a way simpler solution than whatever you were thinking.
Edit: if you want to learn the very basics of d3.js, I recommend Scott Murray's tutorials.
Depending on your needs, you can try:
BIRT
Saiku
Shiny (RStudio)
Or you can google "charting library" or try something from this article.
Instead of storing things in a heavyweight PostgreSQL database, I eventually changed my app to use RRD (round-robin database) files. There are lots of ways to easily store and retrieve information in RRDs.
# on Ubuntu:
sudo apt-get install rrdtool
Once I had my RRD files, it was trivial to use the usual RRD tools combined with PHP and the free Google Charts to generate the many different graphs I needed. Google Charts by itself is an amazing project worth highlighting: https://developers.google.com/chart/

Incremental development with SubSonic

I'm in the process of starting up a web site project. My plan is to roll out the site in a somewhat rudimentary form first and then add to the site functionality along the way.
I'm using SubSonic 3 for my DAL, and I'm expecting the database to go through multiple versions as the site evolves. This means I'll need some kind of versioning and migration tooling. I understand that SubSonic has built-in migration possibilities, but I'm having difficulty grasping how to use these tools in my scenario.
First there's the SimpleRepository model, where SubSonic "automagically" handles the migrations as I develop my site. I can see how this works on my dev machine, but I'm not sure how to handle deployments with this.
Would SubSonic run the necessary migrations on my live site as the appropriate methods are called?
Is there some way I can force all necessary migrations on a site while taking the site offline, when using the SimpleRepository model? (Otherwise I would expect random users to experience severe performance hits as the migration routines kick in.)
Would I be better off using the ActiveRecord model, and then handling migrations with the Subsonic.Schema.Migrator? (I suspect so)
Do you know of any good resources explaining how to handle this situation with the migrator? (I read the doc, but I can't piece together how I would use this in practice)
Thanks for listening/replying.
Regards
Jesper Hauge
I would advise against ever running migrations against a live site. SubSonic's migrations are really there to make development simpler and should never be used against a live environment. To be honest, even using SubSonic.Schema.Migrator you're still going to bump into the fact that refactoring databases is an incredibly hard problem. For example, renaming a column in a table using Management Studio is trivial, but what happens in the background involves creating an entirely new table, migrating all the constraints and data, etc., before renaming the new table.
The most effective way I've found for dealing with this is:
Script all database changes as you make them in your development environment (SQL Server Management Studio will do this for you) and add these scripts to your source control.
As part of deployment (obviously back up first), run the migration scripts and then deploy the updated application on success; a rough sketch of such a script runner follows below.
Whether you use ActiveRecord or SimpleRepository is then down to whether you want the extra features/complexity of ActiveRecord.
Hope this helps
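To make the "run the migration scripts at deploy time" step concrete, here is a rough sketch of a deploy-time script runner: it applies numbered .sql files in order and records which ones have already been run, so each script executes exactly once. It is shown as Java/JDBC purely to illustrate the pattern (the same idea ports easily to .NET), and it assumes a schema_version bookkeeping table created once by hand, scripts named 001_xxx.sql, 002_xxx.sql, ... in a migrations folder, and a SQL Server connection string:

import java.nio.file.*;
import java.sql.*;
import java.util.*;

public class MigrationRunner {
    public static void main(String[] args) throws Exception {
        // Assumed connection string for the Microsoft JDBC driver.
        String url = "jdbc:sqlserver://localhost;databaseName=appdb";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {

            // Collect the migration scripts and sort them by their numeric prefix.
            List<Path> scripts = new ArrayList<>();
            try (DirectoryStream<Path> dir =
                         Files.newDirectoryStream(Paths.get("migrations"), "*.sql")) {
                dir.forEach(scripts::add);
            }
            Collections.sort(scripts);

            for (Path script : scripts) {
                String name = script.getFileName().toString();
                if (alreadyApplied(conn, name)) continue;

                // Naive split on ';' keeps the sketch short; scripts containing
                // procedures or triggers need a proper batch separator (e.g. GO).
                try (Statement st = conn.createStatement()) {
                    for (String stmt : new String(Files.readAllBytes(script)).split(";")) {
                        if (!stmt.trim().isEmpty()) st.execute(stmt);
                    }
                }

                // Record the script so it is never applied twice.
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO schema_version (script) VALUES (?)")) {
                    ps.setString(1, name);
                    ps.executeUpdate();
                }
                System.out.println("Applied " + name);
            }
        }
    }

    static boolean alreadyApplied(Connection conn, String name) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT 1 FROM schema_version WHERE script = ?")) {
            ps.setString(1, name);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}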
I would use ActiveRecord; it's easy to use, and for any changes you just re-run the TT files, then build or publish your solution and you're done. SVN will keep multiple versions of each build stage, so if you make a mess of it you just drop back a revision.

Generate Data Change Scripts from VSTS Database Edition

I'm using the GDR release of VSTS Database Edition to source-control the DB and generate deployment scripts. It works pretty well, but the problem is that it only seems to handle scripting and deploying the schema. It stops short of handling scripting and deployment of the actual data itself (i.e. the lookup and standing data which is also deployed with the DB).
I know it's easy enough to write the deployment scripts by hand, but is this what everyone does? Is there a recommended way of deploying data with the VSTS deployment engine? Is there some tooling that helps with this? I don't mean a full product like SQL Compare, just something that fills the gap with VSTS DB.
Thanks in advance.
Kaneda
The VSTS: DB best practices blog advocates using post-deployment scripts to insert reference data into temporary tables, then updating the target tables based on the delta (i.e. update x inner join temp where x.something <> temp.something).
There are suggestions floating around that this might become a power tool, and at least one MVP has written a tool to generate those scripts.
(NB: I haven't tried this - I only just found out about it myself)
Personally I would still stick with RedGate if I had any choice in the matter.
GDR comes with a data comparison engine, but as far as I've been able to tell so far, a data comparison can't even be stored in a project (let alone be properly supported by it), so it's pretty ad hoc. Unlike a schema compare, there is no File \ Save As.
The comparison engine can be automated via DDE, but that's automation within the Visual Studio IDE and not really suitable for some kind of scripted installation process. As much as anything, there's no way I could see to specify which tables to include in the comparison (since all you get to do via DDE is open the wizard for the user to select).
Alternatively, all the functionality appears to reside in Microsoft.VisualStudio.TeamSystem.DataPackage.dll, but since the API documentation hasn't been written yet (the help documentation that comes with GDR is full of errors as it is), it's going to be a bit of a hit-and-miss adventure to work out where to start.
As someone who's used RedGate's SqlCompare, SqlDataCompare and their respective APIs to do this before, much of the GDR functionality seems a bit half-baked to me.
What I will probably do this time round is sync the data with an SSIS package (export to CSV at build time / import from CSV at install time), but I'd far rather be using the SqlDataCompare API (or SqlPackager) right now.
