Version control and release log mechanism for an Oracle database

We have an application built with Oracle Forms 10g (Developer Suite) connected to an Oracle database, and from time to time we need to change the scripts and stored procedures it defines.
The task assigned to our group is to find a possible version control and release log mechanism that records every change made and every release finalized in the database.
I'd like a word of suggestion from the experienced people out here: what could be the best possible solution to our problem, ideally a single solution, or multiple ones?
(I am not very Oracle Forms-literate, so apologies if I sound confusing.)

Have a look at this and this.
The first link is about .Net projects, but gives you concrete examples for how to set up your development processes; the second link is a general approach from Martin Fowler, who is a bit of an authority on software development.
The basics are that you have to script/automate as much of the deployment lifecycle as possible, and version everything.
I don't know much about Oracle Forms, but as far as I know, this approach should work.
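As an illustration of "script everything", here's a minimal sketch of a release runner that pushes numbered SQL scripts through SQL*Plus. The connect string and the releases/ directory layout are assumptions for the example, not part of any tool mentioned above.
```python
# deploy.py -- minimal sketch of a scripted release for an Oracle database:
# apply every numbered SQL script for a release, in order, via SQL*Plus.
import subprocess
import sys
from pathlib import Path

CONNECT = "app_user/app_pw@DEV"  # hypothetical dev TNS alias

def apply_release(release_dir: str) -> None:
    # Zero-padded names (001_tables.sql, 002_procs.sql, ...) keep lexical
    # order equal to execution order.
    for script in sorted(Path(release_dir).glob("*.sql")):
        print(f"applying {script} ...")
        result = subprocess.run(["sqlplus", "-S", CONNECT, f"@{script}"],
                                capture_output=True, text=True)
        # sqlplus exits 0 even on SQL errors unless the script sets
        # WHENEVER SQLERROR EXIT, so also scan the output for ORA- codes.
        if result.returncode != 0 or "ORA-" in result.stdout:
            sys.exit(f"failed on {script}:\n{result.stdout}")

if __name__ == "__main__":
    apply_release(sys.argv[1])  # e.g. python deploy.py releases/1.4.0
```
Because the scripts themselves live in version control, the release log falls out for free: the repository history of the releases directory is the record of every change shipped.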

Revisiting MS Access as Enterprise Software

It's been 10 years since this question was asked and answered here, and I'd like to see what current thoughts are.
We have a third-party app that we've supported for at least that long. It's an Access runtime application that connects to SQL Server and contains highly confidential data.
Some years ago we moved the database to a SQL Server instance running on Server Core. More recently we've been asked to run the first upgrade of the database schema in six years. The vendor-provided upgrade package appears to be built with VB6 and won't run on the server; it also doesn't support running the updates remotely. We have a couple of ways we can get it done, but it has presented me with an opportunity to finally move on from what I think is not an enterprise product.
As part of that I've been asked why I think this product is so bad and, in my estimation, antiquated. My immediate internal response is that it's not a real application, it's Access. That's compounded by the fact that we're paying a pretty good bit for it and I think that there are better, more robust solutions now available that are also cheaper (I think in the end that's all that should matter).
That said, I acknowledge that there may be some bias in my opinions on this particular app. Looking back at that old post, a few things stand out.
I think there's a big difference between internally developed applications built this way and paid-for solutions. Supporting an internally developed app written in Access may still have some positives. I don't think the positives pointed out in the top answer hold up when you're paying someone for it. The disadvantages are precisely what we're running into.
Reporting isn't being done in Access. It's now mostly being done with outside tools. Most users want to see web based reporting.
A couple of the responses mentioned professional Access developers or this type of application being the COBOL of the 21st century. I think that's an apt description. I'm not sure professional Access developers still exist. How long should we try to maintain this and how long do we think the vendor will be able to?
I think the main mistake about Access is to consider it as a tool made for amateurs to develop applications. It can work this way, but keep in mind that amateur development will give you amateur applications, while professional development will give you professional results.
Maybe this is the crux of my problem in particular. I'm not convinced that our application is 'professional'. It feels semi-pro if I'm generous. The VB6 updater is one clue and there are other components that have given me cause for concern over the years.
Fair or not, in my mind, most, if not all Access applications in the enterprise have these same issues. At the end of the day, the question is whether it serves the needs of the department using it.
Where does Access fit in the enterprise in 2019?

How do you handle versioning of Spotfire dashboards?

The natural thing about software is that you enhance it, and thus create successive versions of it. How do you handle that with Spotfire?
There are at least two ways I can think of.
First, in 7.5 and above you can spin up a test node and copy down any DXP you want from live to develop on in test. Once the "upgrade" or changes are complete, you would back up the live version to disk somewhere (anywhere you keep other backups) and deploy the new version to live.
For pre-7.5 the idea is the same, but you would have to create a test folder in live, with restricted access, to test your upgrade on a web player.
Strictly speaking, "what version are you on" doesn't mean the same thing for an analytic as it does for software, in my opinion. There should only be one version of the truth. If you were to run multiple versions, you'd have to manage their updates separately for caching, which is cumbersome in my opinion. Also, since an analytic has a GUID that relates it to its information sources, running versions in parallel in the same environment will cause duplication.
If this isn't what you were shooting for, I'd love for you to elaborate on the original post and clarify anything I assumed. Cheers mate.
EDIT
Regarding the changes in 7.5, see this article from Tibco starting on p. 42, which explains that Spotfire has a new topology with a service-oriented architecture. From 7.5 onward, IIS is no longer used, and to access the web player you don't even go to the "web server" anymore. The application server handles all access and is the central point for authentication and management.

Using virtual machines for development

I've recently been given the role of managing our development environment, which includes:
Managing the version control system (Subversion), in which we typically have one major branch that is released to production every six months, a maintenance branch that is released every two months to fix non-major bugs found by users, and a couple of branches for bugs that just can't wait for the maintenance release.
Managing our databases so that we have a development database for each branch of the code
We've only recently moved over to using the version control system and have had the following issues:
Developers who work on a number of branches concurrently can quite often end up developing against the wrong database (we have around 15 developers)
A lack of a decent strategy for managing the release of branches into production and the propagation back into other branches
A lack of a decent strategy for managing the databases associated with each branch (i.e. should we keep a script which is aligned with the production environment and then a script to bring each database user in line with the needs of the branch)
I had thought of using a virtual machine for each branch of the code (i.e. a VM containing an Oracle Express database user, a ColdFusion Administrator with the correct setup for things like data sources, and development tools like the IDE and Tortoise).
I was looking for any suggestions anybody might have to help with any of these issues, as I'm finding it really difficult to manage the process. I understand that no two companies have the exact same setup, but I'd welcome any help.
I think the best solution for you may be to start using continuous integration, applied to your product life cycle strategy.
You can read about it over the web:
Continuous integration
Great open-source framework for continuous integration!
I hope this helps, but your question is quite hard to answer because there are a lot of parameters that vary from company to company. You should consider hiring a consultant: he or she will have to come to your company and help you decide on and implement a process.
I would start by asking each of the developers why this kind of mistake happens. If a developer has recently made the mistake, then get them to explain how they did it and what might help them in future. Also talk to developers that have not recently made a mistake.
I'm assuming that you have a server with Oracle and all the different flavors of the db running on it using different port numbers. In that case you would create a new db instance to go with each branch and the problem is how to help the developer set up a context before working on the branch.
Tortoise SVN is a nice tool, but perhaps this is a situation where it would be better to have some kind of small app that does the checkout, and remove Tortoise from the machines. The small app could keep a window floating on screen showing the currently active branch, and it could handle checkout and checkin, as well as making sure the right port number is used.
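As a sketch of that idea (all names, URLs, and ports below are hypothetical), the app really only needs one table mapping each branch to its repository URL and its database:
```python
# branch_context.py -- sketch of the "small app": one place that knows, for
# each branch, where the code lives and which database instance goes with it.
import subprocess

BRANCHES = {
    "trunk":       {"url": "http://svn.example.com/repo/trunk",          "db_port": 1521},
    "maintenance": {"url": "http://svn.example.com/repo/branches/maint", "db_port": 1522},
}

def checkout(branch: str, workdir: str) -> None:
    cfg = BRANCHES[branch]
    subprocess.run(["svn", "checkout", cfg["url"], workdir], check=True)
    # Surface the matching database so the developer never has to guess;
    # a fuller version would write it into the app's datasource config
    # and keep the floating status window described above.
    print(f"checked out {branch}; use the database on port {cfg['db_port']}")

if __name__ == "__main__":
    checkout("maintenance", "work/maintenance")
```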

How can I get my database under version control with Perl?

I've been looking at the options for getting our database schemas under version control. It seems that Ruby folks have got Rails Migrations, and .NET folks have got a few options (for instance this, this, and this). What about Perl?
I've seen this thread on PerlMonks which doesn't have much, although it mentions DBIx::Migration::Directories. Is anyone actually using this module, or some other module? Or do you roll your own DB migration solutions?
Gratuitous details:
We don't use DBIx::Class for the most part
We use MySQL
We use SVN
At work, we use a modified version of DBIx::Migration (it has some limitations, such as allowing no more than 10 migrations). You keep a core schema that you've dumped from your database, and when the version number is too low, you upgrade your database using the migrations from the migration schema directory.
I also highly recommend the Database Refactoring book. Amongst other things, it will give you excellent techniques for managing migrations safely in such a way that if you need to roll back, you don't lose data (such as when you drop a column you think you don't need).
To help with the automatic deprecation schedules it suggests, I've written Devel::Deprecate so that you don't need to remember when to do the deprecations. Your code will complain loudly for you (and only in testing, not in production).
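As one example of the book's approach: a destructive change such as dropping a column is spread over several migration levels rather than done in one step, so a rollback never loses data. A sketch (the table, columns, and level numbers are invented; the SQL is MySQL-flavoured):
```python
# Dropping columns safely: add the replacement first, migrate the data,
# keep the old columns through a deprecation window, and only then drop.
MIGRATIONS = [
    (10, "ALTER TABLE customer ADD COLUMN full_name VARCHAR(200)"),
    (11, "UPDATE customer SET full_name = CONCAT(first_name, ' ', last_name)"),
    # levels 12-14: the application reads and writes only full_name; the
    # old columns stay so rolling back to any earlier level is lossless
    (15, "ALTER TABLE customer DROP COLUMN first_name"),
    (16, "ALTER TABLE customer DROP COLUMN last_name"),
]
```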
Important: You'll periodically find that you're applying so many database migration levels with this technique that you'll sometimes need to "bump up" your minimum base migration because it takes too long to rebuild the database. Just take a new dump of the database at the desired migration level and remove all migrations less than or equal to that level.
Update: Fast forward a few years and today I recommend sqitch. It's designed from the ground up to handle the case of putting a database under version control without tying you to a particular programming language or VCS.
One very interesting project, though probably still a little young to rely on, is Adam Kennedy's ORLite::Migrate, which takes its inspiration from Rails migrations. He wrote a very interesting journal entry over at use.perl.org about his plans, and I hope to keep an eye on it for the future.
It does appear that this package only works with SQLite at the moment, but I think Adam is planning to build it out to be more database-agnostic in the future.
In POPFile we use our own solution. We store a schema version number in the db and if the program detects that there is a newer schema, it will update the db accordingly. This is not exactly the best and most fun part of our code.
To be honest, I fail to see the advantage of using DBIx::Migration::Directories if you aren't already using DBIx::Class. You have to provide the SQL, the version numbers, and the database handle. You might as well provide a little more code to find the SQL file and feed it to the database.
Of course, having the schema in version control is a great bonus.
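To make the roll-your-own option concrete, here is a minimal runner. The pattern is language-independent, so it translates directly to Perl; it's shown against SQLite purely so it runs as-is, and for the MySQL setup above only the connection code would change.
```python
# migrate.py -- minimal schema-versioning runner: a version table in the
# database, plus numbered SQL files applied in order.
import re
import sqlite3
from pathlib import Path

def current_version(conn) -> int:
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    return row[0] or 0

def migrate(conn, migration_dir: str = "migrations") -> None:
    applied = current_version(conn)
    # Files named 001_create_users.sql, 002_add_email.sql, ... so that
    # lexical order matches numeric order.
    for path in sorted(Path(migration_dir).glob("*.sql")):
        number = int(re.match(r"\d+", path.name).group())
        if number <= applied:
            continue
        conn.executescript(path.read_text())
        conn.execute("INSERT INTO schema_version VALUES (?)", (number,))
        conn.commit()

if __name__ == "__main__":
    migrate(sqlite3.connect("app.db"))
```
The SQL files live in the repository alongside the code, which is exactly the "schema in version control" bonus mentioned above.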
We use a system similar to what Manni described. The two big disadvantages are:
Can't rollback schema changes (typically this is rare, not well tested and hard anyway so having to do it manually isn't a big deal IMO).
Using a sequential version number is a pain when you develop in multiple branches -- since you are using SVN this isn't as likely to be an issue as if you were using git though. :-)
The script I use is here: database_update, and there's a small example data file.
How about sqitch? It advertises itself as a "database change management application".
There is an interesting CPAN module, Database::Migrator. I have used it, and it works fine for handling your project's migrations.
Each migration goes into its own directory. Migrations are applied in sorted order; typically you name them with a number prefix. A migration directory can contain SQL files, Perl files, or both.

Version-track and automate DB schema changes with Django

I'm currently looking at the Python framework Django for future db-based web apps, as well as for a port of some apps currently written in PHP. One of the nastier issues over the last few years has been keeping track of database schema changes and deploying those changes to production systems. I haven't dared ask to be able to undo them too, but of course for testing and debugging that would be a great feature. From other questions here (such as this one or this one), I can see that I'm not alone and that this is not a trivial problem. Also, I found many inspirations in the answers there.
Now, as Django seems to be very powerful, does it have any tools to help with the above? Maybe it's even in their docs and I missed it?
There are at least two third party utilities to handle DB schema migrations, South and Django Evolution. I haven't tried either one, but I have heard some good things about South, though Evolution has been around a little longer.
Also, look at SchemaEvolution on the Django wiki. It is just a wiki page about migrating the db.
Last time I checked (version 0.97), syncdb was able to add tables to sync your DB schema with your models.py file, but it could not:
Rename or add a column on a populated DB. You need to do that by hand.
Refactor your model (like splitting a table into two) and repopulate your DB accordingly.
It might be possible, though, to write a Django script to make the migration by playing with the two different managers, but that could take ages if your DB is large.
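For the simple cases, the "by hand" part from the list above is just raw SQL through Django's connection. A sketch, assuming a configured Django project and a MySQL backend; the table and column names are invented:
```python
# A hand-written migration step syncdb won't do for you: renaming a column
# on a populated table. MySQL's CHANGE repeats the full column definition.
from django.db import connection

def rename_author_column():
    cursor = connection.cursor()
    cursor.execute(
        "ALTER TABLE blog_entry "
        "CHANGE COLUMN author author_name VARCHAR(100) NOT NULL"
    )
```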
There was a panel session on DB schema changes at the recent DjangoCon; there is a video of the session (thanks to Google), which should provide some useful information on a number of these utilities.
And now there's also dmigrations. From the announcement:
django-evolution attempts to address this problem the clever way, by detecting changes to models that are not yet reflected in the database schema and figuring out what needs to be done to bring the two back in sync. In contrast, dmigrations takes the stupid approach: it requires you to explicitly state the changes in a sequence of migrations, which will be applied in turn to bring a database up to the most recent state that reflects the underlying models.
This means extra work for developers who create migrations, but it also makes the whole process completely transparent—for our projects, we decided to go with the simplest system that could possibly work.
(My bold)
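To picture the difference: in the explicit style, each migration is a numbered file stating the exact change and its reverse, applied in sequence. A hypothetical sketch; the file layout and function names are made up for illustration, not the real dmigrations API:
```python
# migrations/0004_add_status.py -- an explicit migration in the dmigrations
# spirit: say exactly what changes, and how to undo it.
UP = ("ALTER TABLE blog_entry "
      "ADD COLUMN status VARCHAR(20) NOT NULL DEFAULT 'draft'")
DOWN = "ALTER TABLE blog_entry DROP COLUMN status"

def forwards(cursor):
    cursor.execute(UP)

def backwards(cursor):
    cursor.execute(DOWN)
```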
I've heard a lot of good things about the Django Schema Evolution branch, and those were the opinions of actual users. It mostly works out of the box and does what it should.
You should look up dmigrations; it functions a little differently from django-evolution.
It shows you everything it is doing, and for complicated things it asks for your intervention. It should be great.
