Mining ClearCase

We recently had an incident at work where one of our sysadmins deleted the ClearQuest database without any backups.
We are left trying to use the data in ClearCase to reconstruct which activities existed and which files they modified.
What would be a good plan of action for making a list of the activities and the files that changed?

Well, yes and no...
The ClearCase activity "name" will be the original CQ record ID, and the headline of the activity will be the headline/summary from CQ. Unfortunately, that's about all it contains from CQ, other than the change set.
You would have to create a new CQ user DB and reconnect the UCM projects to that user DB. I'd make sure that the new CQ DB has a different name, if only to make it easy to see what's a leftover...
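For the mining itself, each surviving UCM activity still carries its ID, headline, and change set, so a first pass could look like this (a sketch only: \MyPVob is a placeholder project VOB, SAMPL00012345 a placeholder record ID, and the -fmt specifiers may vary by ClearCase release):

    cleartool lsactivity -invob \MyPVob -fmt "%n\t%[headline]p\n"
    cleartool lsactivity -fmt "%[versions]Cp\n" activity:SAMPL00012345@\MyPVob

The first command should list every activity ID and headline; the second should dump the change set (the modified file versions) of a single activity.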

I don't think you can recover the CQ activities once the DB is gone. The easiest way is probably just to disconnect the UCM integration from CQ; all of the CQ activities will then become local CC activities.

Database Upgrade Methodology

In general, the schema of a database will change over time. Between builds, zero to many schema changes can occur. What is a "best practice" for capturing these changes?
For example, let's say two developers are working on a project and using git for source control. They agree to have a build on Friday. Each goes about their work, checking in changes along with database migration scripts that bring the schema up to date. When person A gets person B's changes, how can they easily know which upgrade scripts to run? When a person is looking at a database on a server, how can they know which version it is on? If the database captures the version number, that means that on Friday one of the people on the team has to say to everyone else, "OK, everyone check in; then I am going to write a script that updates the version number to the next version and check it in."
Is there a standard way to approach this? Thanks.
Consider writing one migration per database [structure] change, not per stable version of your system. Just like revisions of code: every change updates the system and increments its revision (not its version).
Usually we store the database revision (along with the 'public' version) in a special table. Sometimes we store the names of the migration scripts that have been applied to the database, but that is a more complex solution. It's handy to include, in the file name, the revision the database will be at after the migration is applied. The last line of each migration script updates the revision in that special table.
To determine which migrations to apply to a particular developer's database, you just take every migration whose revision number is higher than the revision stored in that special table.
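A minimal sketch of that special table and one migration script, in T-SQL (table names and numbering are illustrative):

    -- One-row table holding the current schema revision
    CREATE TABLE SchemaInfo (Revision int NOT NULL);
    INSERT INTO SchemaInfo (Revision) VALUES (0);

    -- File 0042_add_customer_email.sql: the change itself,
    -- with the revision bump as the last statement
    BEGIN TRANSACTION;
    ALTER TABLE Customer ADD Email nvarchar(256) NULL;
    UPDATE SchemaInfo SET Revision = 42;
    COMMIT;

To upgrade a database, read SELECT Revision FROM SchemaInfo and apply, in order, every script whose file-name revision is higher.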

Database sharing/versioning

I have a question but I'm not sure of the word to use.
My problem: I have an application using a database to store information. The database can be in Access (local) or on a server (SQL Server or Oracle); we support these three kinds of databases. We want to give the user the possibility to do what I think we can call versioning.
Let me explain: we have a database 1. This is the master. We want to be able to create a database 2 that is the same thing as database 1, but which we can give to someone else.
Each side then works on its own copy, adding, modifying, and deleting records in this very complex database. Afterwards, we want database 1 to include the changes from database 2, but with the possibility of dismissing some of the changes.
For your information, our application is already multi-user, so why don't we just use that and forget about this versioning? Because sometimes we need to give a copy of the database to another company at another site, and they can't connect to our server. They work on their side and then we want to merge.
Is there anyone here with experience with this type of requirement? We have a lot of ideas, but most of them require a LOT of work, with massive modifications to the database or to the existing queries.
This C++ app is two million lines and growing, so rewriting it is not possible!
Thanks for any ideas that you may give us!
J-F
The term you are looking for is Database Replication. You can google that to get more information about the topic (my personal experience is limited).
This was already done by ical (an old SunOS calendar app).
What you store/remember/transmit when the app makes changes is not just the database contents, but the actual change log (e.g. "delete record with ID 1", "update record with ID 2 with these fields", "insert a record with these fields").
That way you can apply these changes to the master DB later on, AND filter them before applying.
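A minimal sketch of such a change log as a table (T-SQL flavored, so Access and Oracle spellings differ; the schema is illustrative, and the payload could just as well be serialized application-side):

    CREATE TABLE ChangeLog (
        ChangeId  int IDENTITY(1,1) PRIMARY KEY,  -- preserves replay order
        TableName nvarchar(128) NOT NULL,         -- which table was touched
        Operation char(1)       NOT NULL,         -- 'I'nsert / 'U'pdate / 'D'elete
        RecordId  int           NOT NULL,         -- key of the affected record
        Payload   xml           NULL,             -- new field values for I/U
        ChangedAt datetime      NOT NULL DEFAULT GETDATE()
    );

Merging then means shipping database 2's ChangeLog rows back, replaying them against database 1 in ChangeId order, and simply skipping the rows a reviewer has dismissed.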

How do you transfer the data from a CRM 4.0 database into another CRM 4.0 database?

Our client wants to transfer all the data in the Production CRM 4.0 database and use it in UAT. What is the best way to transfer the data over?
I would copy the database. Then use the Deployment Manager tool to import the organization (this will update the MSCRM_CONFIG database, etc.). You may have to be careful with things like running workflows, as, last I recall, these might save things like the Organization Name in their serialized state.
1. Open SQL Server Management Studio.
2. Right-click on your source database and select Tasks/Back Up.
3. Update the destination to your desired location and click OK.
4. Right-click on the empty target database and choose Tasks/Restore Database.
5. Select From Device, choose the file name of your backup, then click OK.
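The same steps in plain T-SQL, as a sketch (the paths and logical file names are placeholders; check the real logical names with RESTORE FILELISTONLY):

    BACKUP DATABASE Contoso_MSCRM
        TO DISK = 'C:\Backups\Contoso_MSCRM.bak';

    RESTORE DATABASE ContosoUAT_MSCRM
        FROM DISK = 'C:\Backups\Contoso_MSCRM.bak'
        WITH MOVE 'Contoso_MSCRM'     TO 'C:\Data\ContosoUAT_MSCRM.mdf',
             MOVE 'Contoso_MSCRM_log' TO 'C:\Data\ContosoUAT_MSCRM_log.ldf';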
Another option is to use Scribe. This is considered the industry-standard tool for moving CRM data around.
http://www.scribesoft.com/

How to update a database remotely?

I'm looking for a strategy to allow automatic updates for a number of databases at customer sites through a publish-subscribe kind of mechanism. Right now there is a datacenter which has all the master data, fed through extractions from hundreds of databases out there. The problem is that whenever I need to create a new view in the remote customer databases, I have to manually roll out an installation patch and ask the users to run it (their sites are behind firewalls, so I can't do that remotely from my end). Ideally, I would like to have a "DDL image" of the customer database schema at the datacenter, and whenever any change happens to it, all the subscribing customer databases would update their view definitions. The target databases are mostly SQL Server 2005 and Oracle.
I heard the MS SQL replication services can do such a thing? What about Oracle? Has anybody had experience with such a setup?
Thanks!
Not sure about existing solutions, but how about writing your own auto-update mechanism that runs on a timer on the client machines and pulls the latest schemas and views from some service table in your master database? Your changes wouldn't get propagated straight away to all sites, and some sites would update before others, but they would all eventually see the changes.
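A sketch of what that service table might look like, in T-SQL (names are illustrative; note that the clients poll outward, which is what makes this work through their firewalls):

    -- In the master/datacenter database: one row per pending DDL change
    CREATE TABLE SchemaUpdates (
        UpdateId  int IDENTITY(1,1) PRIMARY KEY,
        TargetDb  nvarchar(128) NOT NULL,  -- 'ALL' or a specific site
        DdlScript nvarchar(max) NOT NULL,  -- e.g. a CREATE VIEW statement
        CreatedAt datetime NOT NULL DEFAULT GETDATE()
    );

    -- Each client job remembers the highest UpdateId it has applied
    -- (loaded from its own local bookkeeping) and fetches anything newer
    DECLARE @LastAppliedId int, @MySiteName nvarchar(128);
    SELECT UpdateId, DdlScript
    FROM SchemaUpdates
    WHERE UpdateId > @LastAppliedId
      AND TargetDb IN ('ALL', @MySiteName)
    ORDER BY UpdateId;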
Oracle GoldenGate might fit your needs.

How do you track database changes in source control?

We use SQL Server 2000/2005 and Vault or SVN on most of our projects. I haven't found a decent solution for capturing database schema/proc changes in either source control system.
Our current solution is quite cumbersome and difficult to enforce (script out the object you change and commit it to source control).
We have a lot of ideas of how to tackle this problem with some custom development, but I'd rather install an existing tool (paid tools are fine).
So: how do you track your database code changes? Do you have any recommended tools?
Edit:
Thanks for all the suggestions. Due to time constraints, I'd rather not roll my own here. And most of the suggestions have the flaw that they require the dev to follow some procedure.
Instead, an ideal solution would monitor the SQL database for changes and commit any detected changes to SCM. For example, if SQL Server had an add-on that could record any DDL change, along with the user that made it, and then commit the script of that object to SCM, I'd be thrilled.
We talked internally about two systems:
1. In SQL 2005, use object permissions to prevent anyone from altering an object until they do a "checkout". Then, the check-in procedure would script it into the SCM.
2. Run a scheduled job to detect any changes and commit them (anonymously) to SCM.
It'd be nice if I could skip the user-action part and have the system handle all this automatically.
Use Visual Studio Database Edition to script out your database. It works like a charm, and you can use any source control system (of course, it's best if it has VS plugins). This tool also has a number of other useful features. Check them out in this great blog post:
http://www.vitalygorn.com/blog/post/2008/01/Handling-Database-easily-with-Visual-Studio-2008.aspx
or check out MSDN for the official documentation
Tracking database changes directly from SSMS is possible using various third-party tools. ApexSQL Source Control automatically scripts any database object that is included in versioning. Commits cannot be performed automatically by the tool; instead, the user chooses which changes will be committed.
When getting changes from a repository, ApexSQL Source Control is aware of SQL database referential integrity. Thus, it creates synchronization scripts that include all dependent objects, wrapped in a transaction, so that either all changes are applied if no error is encountered, or none of the selected changes is applied. In either case, database integrity remains unaffected.
I have to say I think a Visual Studio database project is also a reasonable solution to the source control dilemma. If it's set up correctly, you can run the scripts against the database from the IDE. If your script is old, get the latest and run it against the DB. Have a script that recreates all the objects as well if you need it; new objects must be added to that script by hand too, but only once.
I like every table, proc, and function to be in its own file.
One poor man's solution would be to add a pre-commit hook script that dumps out the latest db schema into a file and have that file committed to your SVN repository along with your code. Then, you can diff the db schema files from any revision.
I just commit the SQL ALTER statements in addition to the complete SQL create-database statement.
Rolling your own from scratch would not be very doable, but if you use a SQL comparison tool like the Redgate SQL Compare SDK to generate your change files for you, it would not take very long to half-roll what you want and then just check those files into source control. I rolled something similar for myself, to push changes from our development systems to our live systems, in just a few hours.
In our environment, we never change the DB manually: all changes are done by scripts at release time, and the scripts are kept in the version control system. One important part of this procedure is to make sure that all scripts can be run repeatedly against the same DB (the scripts are idempotent) without loss of data. For example, if you add a column, make sure that the script does nothing if the column is already there.
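In T-SQL (SQL 2005+; the table and column names are illustrative), that column-add guard looks like:

    -- Idempotent: add the column only if it is not already there
    IF NOT EXISTS (SELECT 1 FROM sys.columns
                   WHERE object_id = OBJECT_ID('dbo.Customer')
                     AND name = 'Email')
    BEGIN
        ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;
    END;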
Your comment about "suggestions have the flaw that they require the dev to follow some procedure" is really a tell-tale. It's not a flaw, it's a feature. Version control helps developers in following procedures and makes the procedures less painful. If you don't want to follow procedures, you don't need version control.
In SQL 2000, generate each object into its own file, then check them all into your source control. Let your source control handle the change history.
In SQL 2005, you'll need to write a bit of code to generate all the objects into separate files.
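As a starting point for that bit of code in SQL 2005, the definitions of the programmable objects are all queryable (a sketch; you would write one file per row, and tables still need scripting via SMO or similar):

    SELECT o.type_desc, o.name, m.definition
    FROM sys.sql_modules m
    JOIN sys.objects o ON o.object_id = m.object_id
    ORDER BY o.type_desc, o.name;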
In one project I arranged, by careful attention in the design, that all the important data in the database can be automatically recreated from external places. At startup, the application creates the database if it is missing and populates it from external data sources, using a schema in the application source code (and hence versioned with the application). The database store name (an SQLite filename, although most database managers allow multiple databases) includes a schema version, and we increase the schema version whenever we commit a schema change. This means that when we restart the application as a new version with a different schema, a new database store is automatically created and populated. Should we have to revert a deployment to an old schema, the new run of the old version will use the old database store, so we get fast downgrades in the event of trouble.
Essentially, the database acts like a traditional application heap, with the advantages of persistence, transaction safety, static typing (handy since we use Python) and uniqueness constraints. However, we don't worry at all about deleting the database and starting over, and people know that if they try some manual hack in the database then it will get reverted on the next deployment, much like hacks of a process state will get reverted on the next restart.
We don't need any migration scripts, since we just switch the database filename and restart the application, and it rebuilds itself. It helps that the application instances are sharded to use one database per client. It also reduces the need for database backups.
This approach won't work if rebuilding the database from the external sources takes longer than you can allow the application to remain down.
If you are using .Net and like the approach Rails takes with Migrations, then I would recommend Migrator.Net.
I found a nice tutorial that walks through setting it up in Visual Studio. He also provides a sample project to reference.
We developed a custom tool that updates our databases. The database schema is stored in a database-neutral XML file which is then read and processed by the tool. The schema gets stored in SVN, and we add appropriate commentary to show what was changed. It works pretty well for us.
While this kind of solution is definitely overkill for most projects, it certainly makes life easier at times.
Our DBAs periodically check prod against what is in SVN and delete any objects not under source control. It only takes once before the developers never forget to put something in source control again.
We also do not allow anyone to move objects to prod without a script; since our devs do not have prod rights, this is easy to enforce.
Tracking every change (insert, update, and delete) would add a lot of overhead in SVN.
It is better to track only the DDL changes (ALTER, DROP, CREATE), which modify the schema.
You can do this schema tracking easily by creating a table and a trigger that inserts a row into that table; see the sketch below.
Any time you want, you can get the change history by querying that table.
There are lots of examples here and here.
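A minimal sketch of that table and trigger in T-SQL for SQL Server 2005 (the names are illustrative):

    -- Audit table for schema changes
    CREATE TABLE dbo.SchemaChangeLog (
        LogId      int IDENTITY(1,1) PRIMARY KEY,
        EventType  nvarchar(100),
        ObjectName nvarchar(256),
        SqlText    nvarchar(max),   -- the DDL statement that ran
        LoginName  nvarchar(256),
        LoggedAt   datetime DEFAULT GETDATE()
    );
    GO

    -- Database-level DDL trigger: fires on CREATE/ALTER/DROP and friends
    CREATE TRIGGER trg_LogSchemaChanges
    ON DATABASE
    FOR DDL_DATABASE_LEVEL_EVENTS
    AS
    BEGIN
        DECLARE @e xml;
        SET @e = EVENTDATA();
        INSERT INTO dbo.SchemaChangeLog (EventType, ObjectName, SqlText, LoginName)
        VALUES (@e.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
                @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(256)'),
                @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)'),
                ORIGINAL_LOGIN());
    END;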
