MVFS error in a snapshot view - ClearCase

I created a snapshot view using Rational ClearCase explorer.
After creating it, I tried compiling my code and got an MVFS error:
Unable to determine if the current working directory is in MVFS - no such device or address
When I searched the IBM website for a way to eliminate this error, I found out that a snapshot view does not use the MVFS.
Why am I getting this error when a snapshot view does not use the MVFS?

Recently there was a development in the resolution of this issue! We escalated it to IBM with the help of our client. They suggested that we use dynamic views, and we did. To our surprise everything worked fine and we were able to generate the executables. But the fact remains that we are still not able to use snapshot views!
NOTE: This comment is just to share my knowledge and experience regarding this issue. :)

The path is xmalviv_view/NBA_axess_aup2/refsys/aup/aup61
A snapshot view is only accessible through a full local host path:
cd C:\path\to\snapshotview
clearmake is generally linked to and used in dynamic views (see "Derived objects in ClearCase").
Seeing that error message is not a surprise if you use it in a snapshot view and expect certain clearmake features:
The distinctive features of clearmake, such as build auditing, derived object sharing, and build avoidance, are supported in dynamic views only.
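To confirm what kind of view you are actually building in, a quick cleartool check helps (the view tag below is a placeholder):
cleartool pwv
cleartool lsview -properties -full my_view_tag
cleartool pwv prints the view the current working directory belongs to, and the lsview output reports "snapshot" among the properties for a snapshot view. If it does, the clearmake features quoted above will not be available, and moving the build to a dynamic view is the usual workaround.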

SSDT / VS2015 Database deployment -- publishing is ignoring nested views

I am attempting to gain some knowledge and use cases on SSDT database development and deployments, and I am struggling with some deployment issues.
Specifically when using nested views. For some unknown reason, when attempting to deploy / publish the files in the project to a local / live DB, it seems to mess up the references in the views.
In this project I have the following views (example):
View1
View2
View3
View1 references View2, and View3 references View1.
Building the project works fine. However, when I try to publish the database, either by generating a dacpac from a snapshot and publishing it, or by letting Visual Studio generate an update script after (or without) comparing schemas, I end up with an update script that tries to create the views in what seems to be the order in which they are stored in the project.
In this case View1 -> View2 -> View3. This means the publish fails because of reference issues: it can't create a view if the referenced view does not exist yet.
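For illustration, hypothetical definitions matching the example; published in project order, the CREATE VIEW for View1 fails because View2 does not exist yet:
CREATE VIEW [dbo].[View1] AS SELECT * FROM [dbo].[View2];
CREATE VIEW [dbo].[View2] AS SELECT 1 AS Col1;
CREATE VIEW [dbo].[View3] AS SELECT * FROM [dbo].[View1];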
I have tried several options, such as adding the dacpac as a reference in the project in various ways (Same Database; Same Server, Different Database, with and without the database parameter), but in many cases I end up with a SQL71561 / SQL71508 error, which was another PITA to solve.
Online I can't find any good sources that explain how to work around this issue or how this is supposed to work.
Hopefully I can get some help here. If you need extra input from my side or want me to try something, let me know.
The issue has been resolved by new insights. When trying to build the demo code to share with the SO community, I accidentally found the solution because I needed to clean up sensitive data (model) parts. Let me elaborate on what the issue was.
The solution has two parts:
Configuration of Database Project / Solution
The way references work
I'll share some insights on both matters.
Configuration of Database Project / Solution
The Visual Studio solution contained a single project in which all views were placed. The actual tables and other database items were kept in a different solution / project:
Solution1
  Project1
    View1
    View2
    View3
Solution2
  Project1
    Tables
    Security
    Schemas
    Etc...
The views themselves contained three-part identifiers [Database].[Schema].[Table/View]. This was the case both for items inside the project (views) and for items outside the project (tables etc.).
Using that one separate project with just the views led to missing references: it was unable to find the other views or the tables (more on this under references below).
One solution was making sure both the views and the referenced tables are in the same solution / project. Even when you use three-part identifiers, Visual Studio ignores them because all items exist in the same project / solution, and it will detect the dependencies this way.
The way references work
The other way to solve it is using references the right way in Visual Studio, which is the second possible solution.
Consider the earlier example, where the views were in a different solution than the other elements, which led to missing references. Adding the dacpac as a database reference with the setting Same Database led to conflicting references and a SQL71508 error (element already exists in the model). That makes sense, because the view exists in the referenced dacpac, and we try to create a new view with the same name that references itself in the dacpac: the three-part identifier is treated as a variable pointing at the dacpac.
When using the dacpac setting Same Server, Different Database, the mixed-up references are resolved, because the three-part identifiers are seen as external references: Visual Studio thinks you are creating a local copy of a view that looks at the external dacpac. In other words, it will not detect the nested view, because it thinks you are referencing a separate database outside the project.
Building the project this way will not produce errors, and deployment will work. However, since it thinks you are referencing an external data source (in the form of a dacpac), it does not see the references to the other local views.
The solution (at least this is what worked for us) is to use two-part identifiers in our views whenever we need a local reference to another view. This way it looks at other files inside the project instead of at the referenced dacpac.
Since it now detects the references to the other local views, it builds correctly, detects the dependencies between the views inside the local project, and creates a correct build order for all views.
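A minimal before/after sketch of that change (all names are hypothetical):
-- Three-part identifier: resolved against the referenced dacpac, so the local dependency on View1 is missed:
CREATE VIEW [dbo].[View3] AS SELECT v.Col1 FROM [MyDatabase].[dbo].[View1] AS v;
-- Two-part identifier: resolved against files inside the project, so the dependency is detected and ordered correctly:
CREATE VIEW [dbo].[View3] AS SELECT v.Col1 FROM [dbo].[View1] AS v;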
I guess you could also assign a different variable name to the referenced dacpac, keep three-part identifiers everywhere, and change the ones meant for the external dacpac to use the newly assigned variable name. We have not tested this (but I will when I get back home tonight).
All in all this was a good learning experience in how database references work inside database projects when using partial projects, or when you have split the database up into several projects / solutions. Now to understand this Pandora's box and turn it into a future-proof solution :)

How to debug the error "The requested property 'current-activity' is not available"?

My project uses IBM ClearCase as its version control tool. I recently checked in some of the files, modified them and even checked out the modified files.
The next day, when I try to check in some of the files, I am getting the following errors:
the property is not available locally: stream
and
the property is not available locally: current-activity
Is there any way to resolve this error? I am stuck.
I recently checked in some of the files, modified them and even checked out the modified files.
You actually check out files, then modify them, then check them in.
You check out in order to make a file modifiable.
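A minimal sketch of that workflow with cleartool (the file name is an example; -nc just skips the comment prompt):
cleartool checkout -nc myfile.c
(edit myfile.c)
cleartool checkin -nc myfile.c
The checkout is what makes the file writable in the view; checkin then creates the new version.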
the property is not available locally: current-activity
The exact error message is actually:
The requested property 'current-activity' is not available
See this IBM technote:
Cause
The information within the .Rational folder had become stale or corrupted.
ClearQuest database connections require refreshing as they are no longer seen.
The view tag (as stored in the registry server) is no longer visible on the Change Management (CM) server when performing a cleartool lsview.
This issue can also occur if the region used no longer holds the right VOBs: while trying to create new views or change load rules, the VOBs are missing. This can happen due to a changed region map file.
So the fix depends on your OS, your version of ClearCase, and whether or not you integrate with ClearQuest.
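A few hypothetical first checks along those lines (tags and region names are placeholders):
cleartool lsview my_view_tag
cleartool lsvob -region my_region
The first confirms the view tag is still registered; the second confirms the region still lists the expected VOBs. For the stale .Rational folder cause, the usual remedy is to let the client rebuild that cache; the exact folder location depends on OS and ClearCase version.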

IntelliJ IDEA data sources don't see an existing table in the database

IntelliJ IDEA data sources don't see an existing table in my MySQL database, while NetBeans sees it.
I've created a table in the database. When I create a connection in IntelliJ IDEA data sources, it sees my schema, and I select it in "schemas and tables", but then I don't see the table in the list.
It shows every schema but mine. When I try to connect with NetBeans or MySQL Workbench, everything is fine. Same story with several databases, even with root access: every table but mine.
What could be wrong?
I see a question like mine here, related to Visual Studio, with no answer.
Please give me a good clue.
I had the same problem in IntelliJ IDEA 15.
I fixed it by right-clicking the data source -> Properties -> Schemas -> Use legacy introspector.
I found the reason for the problem with HSQLDB: there is an IDEA bug (I have version 11.0).
If you create the DB with a relative path (relative to the MODULE), like this:
jdbc:hsqldb:file:db_file/testDBInMemory;shutdown=true;hsqldb.write_delay=false;
the DB files are expected under the module folder:
f:\TestModule\db_file\
But when you add the same URL to Data Sources, IDEA resolves the path relative to the $IDEA_HOME$/bin folder:
f:\Program Files\JetBrains\IntelliJ IDEA 11.0\bin\db_file\
So you end up with two different databases, and when JPA updates the first one, the Data Source doesn't see the update in the other.
WORKAROUND:
Use an absolute path in the file DB URL.
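For example (the path here is hypothetical):
jdbc:hsqldb:file:F:/TestModule/db_file/testDBInMemory;shutdown=true;hsqldb.write_delay=false;
With an absolute path, JPA and the Data Source both point at the same files regardless of the working directory.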
For the latecomers still experiencing this, IntelliJ IDEA also has the following features, which might help you as they have me:
'Force Refresh' (ctrl + shift + F5): "The Force Refresh action clears the data source information from cache and loads it again from scratch."
'Forget Cached Schema', which clears the schema cache. It does require you to set which schema you want shown again.
Right click your datasource -> Diagnostics -> 'Force Refresh' or 'Forget Cached Schema'
Source: https://www.jetbrains.com/help/idea/cannot-find-a-database-object-in-the-database-tree-view.html
The answer for me was buried in a comment, so I'm posting it here. The symptom for me was seeing an error like this:
Unable to resolve table 'thetablename'
And the table name was highlighted in red.
The following fixes are all ways to trigger IntelliJ to refresh data sources:
CTRL+ALT+Y to refresh the db stuff, or
CTRL+F5, or
Click the database vertical tab on the right, expand, right-click on each DB connection and select 'Refresh'
Kudos to #ice for answering this earlier in a comment. I'm basically just elaborating and making sure this shows up as an answer.

Clearcase UCM - Unable to read change set entry for activity

Events till now
We have a CC 7.1.2.2 multisite setup where we do deliveries between 2 sites. Now, when resuming a delivery at the destination site, we get this error:
Unable to read change set entry for activity "<activity name>". Unable to
convert diffs to elements. Unexpected error in deliver. Unable to perform merge.
Unable to do integration.
Then running checkvob -ucm shows some broken hyperlinks, which the SCM support team fixes for us. An IBM tech note says this is a sync issue.
Now the actual problem:
This has suddenly started happening on a regular basis, and we know it's NOT a sync issue between VOB and PVOB, as the packets are getting synced properly. What I am interested in finding out is whether this could occur due to a specific set of user actions, like deleting checked-out versions. The key point is that it's not a one-off thing and it impacts our deliveries every day. We are not able to find any concrete triggering actions or root cause.
Any ideas ?
This has been linked to a multi-site sync issue for a long time now (here is an example from 2005), and it was also associated with a bug in CC multi-site 7.0.
But if you are really sure multi-site sync is not the issue, then it could be linked to a "lost+found" issue, where elements could have been:
deleted (rmelem by an admin -- I know regular users in your setup don't have rmver or rmelem rights -- in order to clean the lost+found directory automatically, maybe through a ClearCase scheduled job or some kind of trigger?)
not selected, because the config specs of the views involved in your deliver are set up to not select the lost+found directory (see the sketch below)
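A hypothetical config spec rule of that kind, which would hide anything moved to lost+found (the VOB tag is a placeholder):
element /myvob/lost+found/... -none
An element version matched only by a -none rule is invisible in the view, so a deliver that needs it could fail to read the activity's change set.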
Found the issue; it was hidden sync problems indeed. What was really happening was that the multisite sync was timing out for packets larger than 250 MB. This created problems for big intersite deliveries, where the PVOB would sync over and the VOBs would not. It was hidden because otherwise the sync was happening properly.
Thanks VonC for the other inputs; I know you'd have pointed me to sync issues as a first measure had I not confirmed that wasn't the issue.
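For anyone chasing something similar, two hedged starting points for checking replica state (run from within the replicated VOB; available options vary by version):
multitool lspacket
multitool lsepoch
lspacket lists shipping packets still awaiting import, and lsepoch shows the epoch table, which can reveal a replica that stopped advancing even though smaller packets import fine.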

How to merge Drupal database changes

We currently use an SVN repository to ensure everyone's local environments are kept up to date. However, Drupal website development is somewhat trickier, in that any custom code you write (for instance, PHP code written for a node body) is stored in the DB, and those changes aren't recognized by the SVN working copy.
There are a couple of developers presently working on the same area of a Drupal site, but we're uncertain about how best to merge our local Drupal database changes together. Committing patches of database dumps seems clumsy at best, and is most likely inefficient and error-prone for this purpose.
Any suggestions about how to approach this issue are appreciated!
Unfortunately, database deployment/update is one of Drupal's weak spots. See this question & answers, as well as this one, for some suggestions on how to deal with it.
As for CCK, you could find some hints here.
As for PHP code in content, I agree with googletorp that you should avoid doing this. However, if for some reason you absolutely have to, you could try to reduce the code to a simple function call, so the function itself lives in a module (and is therefore tracked via SVN); see the sketch below. But then you are only a small step away from removing the need for the inline code anyway ...
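A minimal sketch of that idea; the module and function names are made up:
<?php
// In mymodule.module, tracked in SVN -- all real logic lives here.
function mymodule_price_table($node) {
  return t('Price table for @title', array('@title' => $node->title));
}
The node body (PHP input format) then shrinks to a single call:
<?php print mymodule_price_table($node); ?>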
If you are putting PHP code into your database, then you are doing it wrong. Some things do live in the database, like Views and CCK fields, plus some settings. But if you put PHP code inside the node body, you are creating a big code maintenance problem. You should really use the API and hooks instead: create modules instead of ugly hacks with eval etc.
All that has been said above is true and good advice. To answer your practical question, there are a number of recent modules that you could use to transport the changes made by the various developers.
The "Features" module is a cure for the described issue: Drupal provides nice features, but stores lots of config and structure in the DB. This module enables you to capture a feature and output it as a pseudo-module (it qualifies as a module, with .info and code files and all). Here is how it works:
1. Select the functionality/feature to export
2. The module analyses the modules, files, and DB content required to rebuild that feature elsewhere
3. The module creates a pseudo-module that contains the instructions from step 2 and outputs everything (even SQL to rebuild the stuff in the DB) into a module package (as well as setting dependencies on other required modules)
4. Install the pseudo-module on your new site and enable it
5. The pseudo-module replicates the feature you exported, rebuilding the DB data and all
And you can tell your boss you did it all manually with razor focus to avoid even 1 error ;)
I hope this helps - http://drupal.org/project/features
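If your Drupal version has drush and the Features module's drush commands, the workflow can even be scripted; a hedged sketch (the feature and component names are examples):
drush features-export my_feature views_view:frontpage
(commit the generated my_feature module to SVN, then on another environment:)
drush pm-enable my_feature
drush features-revert my_feature
features-export writes the captured config out as module code, and features-revert re-applies the module's stored config to the site's DB.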
By committing patches of database dumps, do you mean taking an entire extract of the db and committing it after each change?
How about a master copy of the database? Extract all tables, views, SPs, etc. into individual files, put them into SVN, and do your merge edits on the individual objects?
