I am not able to deliver one of the files in ClearCase from the dev stream to the int stream. The delivery failed, I did an undo checkout on the int stream, and it created a zero version. Now I cannot check out the file; it says:
Error: checked out the file but could not copy data. Unknown VOB error.
I checked the file out, overwrote it with another file, and tried to check it in; it says:
checkin failed: not a BDTM container
I tried to delete the zero version and the branch, and it says:
cannot delete - not a BDTM container
I also cannot open the file; it says "no such file or directory". I can see versions on other branches, but not this zero version.
I found this IBM support page, but it did not help:
http://www-01.ibm.com/support/docview.wss?uid=swg21294695
Please advise.
A Binary Delta Type Manager container has a couple of details you need to be aware of:
It is stored as a gzipped file containing delta data, which is partially text and partially binary.
There is one container per branch with an existing version > 0.
The ...\branch\0 version (with the exception of \main\0) actually uses the container of the parent branch.
So, what does this mean in reality? You have at least one corrupt source container.
If you are checking out ...\branch\0, it is the container for the PARENT branch that is damaged. Do a cleartool dump on the parent version to get the container path, then skip down below.
If you are checking out ...\branch\1 or later, dump the version you are trying to check out to get the source container path. Then...
Examine the file data and metadata:
If it is smaller than 47 bytes (the minimum size of a .gz file), it is corrupt. If it is 0 bytes, something interfered with the checkin process.
If it is larger, attempt to decompress it using gzip -d < {file path} > temp.txt, then open the result in a text editor. It should have a header containing a lot of version OIDs and the like.
If gzip errors out without decompressing the file at all, open the file and examine its contents; it likely does not contain compressed data.
If gzip errors out with data-integrity or premature-EOF errors, you most likely have a filesystem issue that damaged the file.
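The checks above can be sketched as a small shell helper. The container path argument is whatever cleartool dump reported, the check_container name is made up, and the 47-byte threshold is the minimum .gz size noted above:

```shell
# Sketch: classify a source container by the checks described above.
check_container() {
    src="$1"
    size=$(wc -c < "$src")
    if [ "$size" -eq 0 ]; then
        echo empty          # something interfered with the checkin
    elif [ "$size" -lt 47 ]; then
        echo truncated      # smaller than any valid .gz file
    elif ! gzip -t "$src" 2>/dev/null; then
        echo corrupt        # does not contain valid compressed data
    else
        gzip -d < "$src" > temp.txt   # inspect the header / version OIDs
        echo ok
    fi
}
```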
The resolution is going to be either to replace the container from a backup or a remote replica, or to "ndata" the versions. This is not something best discussed on Stack Overflow, but rather through a PMR.
The statement
select * from MWlog
works correctly
while the statement
select * from MWlog where Probe= '230541'
produces the following error:
[LNA][Zen][SQL Engine][Data Record Manager]The application encountered an I/O error(Btrieve Error 2)
Hint: we use this Zen version:
Zen Control Center
Zen Install Version 14.10.035
Java Version 1.8.0_222
Copyright © 2020 Actian Corporation
The same error occurs with an older version.
If it only occurs when using the Probe field, it is possible the index is corrupt. I would suggest rebuilding the file. You can use the BUTIL -CLONE and -COPY commands or the Rebuild utility; I prefer BUTIL -CLONE/-COPY.
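A rebuild with BUTIL might look like the following. The file name MWLOG.MKD is an assumption; check your DDFs for the actual data file backing the MWlog table:

```shell
REM Hypothetical file names; run in the directory holding the data file
REM while no other application has it open.
butil -clone MWLOG.NEW.MKD MWLOG.MKD   REM new, empty file with the same definition
butil -copy MWLOG.MKD MWLOG.NEW.MKD    REM reload every record, rebuilding all indexes
REM After verifying MWLOG.NEW.MKD, swap it in place of the original.
```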
The status 2 is defined (on https://docs.actian.com/zen/v14/index.html#page/codes%2F1statcod.htm%23ww178919) as:
2: The application encountered an I/O error. This status code typically indicates a corrupt file or an error while reading from or writing to the disk. One of the following has occurred:
• The file is damaged, and you must recover it. See Advanced Operations Guide for more information on recovering files.
• For pre-v6.0 data files, there is a large pre-image file inside a transaction, and there is not enough disk space for a write to the pre-image file.
• For pre-v6.0 data files, there is one pre-image file for multiple data files. For example, if you name the data files customer.one and customer.two, both files have pre-image files named customer.pre.
• For pre-v6.0 data files that are larger than 768 MB, there is a conflict among locking mechanisms. The file has not been corrupted. Your application can retry the operation until the conflict is resolved (when the competing application releases the lock your application requires).
• A pre-v6.0 Btrieve engine attempted to open a v6.x or later MicroKernel file.
• With Btrieve for Windows NT Server Edition v6.15.445, a 32-bit Windows application may return Status 2 or “MKDE Terminated with Service Specific Error 0” after running an application for an extended period of time.
I am trying to push a .txt file into an existing .tar file but am not able to.
Is it possible with Camel?
I don't think this is possible out of the box.
If you use TarDataFormat you would have to untar the file first and then tar the individual files with the additional file again.
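For comparison, outside Camel plain tar can do this directly: -r appends entries to an existing, uncompressed archive in place (the file names below are made up):

```shell
cd "$(mktemp -d)"                  # scratch directory for the demo
echo alpha > existing.txt
echo beta  > new.txt
tar -cf archive.tar existing.txt   # an existing archive
tar -rf archive.tar new.txt        # append ("push") new.txt into it
tar -tf archive.tar                # now lists both existing.txt and new.txt
```

Note that -r only works on uncompressed archives; a .tar.gz would still have to be decompressed first.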
However, you can try to extend TarAggregationStrategy and adapt it to your need. The Strategy seems to add the content of new messages to an initially created tar file (method addEntryToTar).
In the given implementation this tar file is created on the arrival of the first message. Perhaps you only have to change this initial behaviour.
Perhaps it would even be possible to allow both behaviours (create new tar or use existing) and make this configurable.
See http://camel.apache.org/tar-dataformat.html (paragraph "Aggregate") for how to use the TarAggregationStrategy, and have a look at the source of TarAggregationStrategy on GitHub.
I am using KDevelop 4.3.1 with Debian Wheezy.
My problem is that KDevelop seems to create, for every file in my project directory, a backup file with the same name ending with a tilde. This makes the project directories look really cluttered.
My question is whether there is an option to hide these backup files (i.e. all files ending with a ~) in KDevelop, meaning in the sidebar list of project files.
The backup files are created on save by the text editor component "Kate Part". To get rid of the *~ files, you have two options.
First, open the editor settings dialog through Settings > Configure Editor and then choose the Open/Save item, and then the Advanced tab.
Disable backups
To disable backups entirely, remove the checkbox for [ ] Local files.
Hide the backup files
To hide backups, just add the prefix "." so that every backup file is a hidden file. A backup file is then named e.g. .MyFile.cpp~.
The idea behind the backup files is to have the old version around in case the saved file is corrupted for whatever reason (system crash, file system error, ...). In practice, you most probably do not need backups at all, for the following reason:
When saving files, Kate uses the class KSaveFile (in Qt5 available as QSaveFile). In short, to avoid data loss, KSaveFile saves the file to a temporary file in the same directory as the target file, and on successful write finally moves the temporary file to the target filename.
In other words, saving files is pretty safe and in theory should always work, thanks to the atomic rename performed by KSaveFile.
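The same save-to-temp-then-rename pattern can be sketched in plain shell (save_atomically is a made-up helper name): the target file always holds either the complete old content or the complete new content, never a half-written file.

```shell
# Write stdin to "$1" atomically: stage in a temp file on the same
# filesystem, then rename over the target in one step.
save_atomically() {
    target="$1"
    dir=$(dirname "$target")
    tmp=$(mktemp "$dir/.tmp.XXXXXX") || return 1
    cat > "$tmp" || { rm -f "$tmp"; return 1; }
    mv "$tmp" "$target"            # rename(2) is atomic within a filesystem
}
```

Usage: echo "new content" | save_atomically myfile.txt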
So the only use case for backup files is that you changed and saved a file by accident. In this case, the backup file still contains the old data, provided you did not save twice.
Even more so: If you use a version control system (git, svn, ...), the usefulness of having backups is close to zero. That's also the reason why backups are disabled entirely in newer versions of the editor component.
If you use this filter (in the top of the file list):
[A-Z]*[A-Z]
you may only see files starting and ending with a letter, so no hidden files (starting with a '.') and no backup files (ending with a '~') will be shown.
Be careful, as any other file not starting or ending with a letter will also be hidden.
Why do I get these .MKELEM files? How do I get rid of them?
I found some docs that said they are temp files created by ClearCase GUI when adding files to source control. But sometimes, they don't go away.
ADDITIONAL INFORMATION: I get "access denied" when trying to delete or rename the .MKELEM files. They seem to get created when I add new files to ClearCase.
As mentioned in the mkelem tip page:
During the element-creation process, the view-private file is renamed to prevent a name collision that would affect other Rational® ClearCase® tools (for example, triggers on the mkelem operation). If this renaming fails, you see a warning message.
If a new element is checked out, mkelem temporarily renames the view-private file, using a .mkelem (or possibly, .mkelem.n) suffix. After the new element is created and checked out, mkelem restores the original name. This action produces the intended effect: the data formerly in a view-private file is now accessible through an element with the same name.
If mkelem does not complete correctly, your view-private file may be left under the .mkelem file name.
The fact that a .mkelem file stays behind can be, as LeopardSkinPillBoxHat mentions in his answer, because the file is locked by a process.
It can also happen:
in a ClearCase view that is incorrectly protected (where ClearCase can check out the new element, creating a version 0, but cannot check that element in);
(diagram: http://publib.boulder.ibm.com/infocenter/cchelp/v7r0m1/topic/com.ibm.rational.clearcase.dev.doc/topics/cc_dev/images/creating_element.gif)
when a trigger prevents the checkin part of the new element creation
when the view actually excludes CHECKEDOUT versions (no 'element * CHECKEDOUT' rule);
on Solaris 10, due to an incorrect format in one of the ClearCase JVM config files (ClearCase 7.1);
when Add to Source Control is used on Windows in views mapped to a mount point (mount points are persistent directories that point to disk volumes); this occurs only in old ClearCase 2002 or 2003.
See also the Under the hood: What happens when you add to source control article.
The .mkelem files are temporary files generated by ClearCase when adding a file to source control. If the file gets added successfully, they are usually deleted. If something goes wrong during the process (e.g. it cannot create the branch specified in your config spec), the .mkelem file may be left behind.
I'm guessing that a process or service somewhere has a lock on the file. Rebooting should fix the problem. Or try using something like Process Explorer to see what may have locked the file.
Also, from this page:
.mkelem
Files being added to source control from the GUI will use this extension during an "Add to Source Control" operation.
If you see this file in your view during the mkelem process, that is OK. If you still see the file after the mkelem operation is complete, that is not OK. You will likely need to rename the file (remove the .mkelem extension) and add it to source control again. This can be seen when your antivirus software is scanning the MVFS. Refer to technote 1149511, Support Policy for Anti-Virus and ClearCase, for further information.
You may try the following from a command prompt:
ct ls -l (unknown).mkelem
This will show the links.
Then try the following to link the actual file:
ct ln -c "scm:relink" {link} {actual filename}
Is there a tool that creates a diff of a file structure, perhaps based on an MD5 manifest? My goal is to send a package across the wire that contains new/updated files and a list of files to remove. It needs to copy over new/updated files and remove files that have been deleted from the source file structure.
You might try rsync. Depending on your needs, the command might be as simple as this:
rsync -az --del /path/to/master dup-site:/path/to/duplicate
Quoting from rsync's web site:
rsync is an open source utility that provides fast incremental file transfer. rsync is freely available under the GNU General Public License and is currently being maintained by Wayne Davison.
Or, if you prefer Wikipedia:
rsync is a software application for Unix systems which synchronizes files and directories from one location to another while minimizing data transfer using delta encoding when appropriate. An important feature of rsync not found in most similar programs/protocols is that the mirroring takes place with only one transmission in each direction. rsync can copy or display directory contents and copy files, optionally using compression and recursion.
@vfilby I'm in the process of implementing something similar.
I've been using rsync for a while, but it gets funky when deploying to a remote server with permission changes that are out of my control. With rsync you can choose not to include permissions, but they still end up being considered for some reason.
I'm now using git diff. This works very well for text files. Diff generates patches, rather than a MANIFEST that you have to include with your files. The nice thing about patches is that there is already an established framework for using and testing them before they are applied.
For example, with the patch utility that comes standard on any *nix box, you can run the patch in dry-run mode. This will tell you whether the patch you're going to apply will actually apply before you run it, and helps you make sure that the files you're updating have not changed while you were preparing the patch.
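The workflow can be sketched end to end; the repository, file names, and commit message below are made up for the demo:

```shell
# Sketch: make a patch on one tree, dry-run it against the target
# before applying for real.
cd "$(mktemp -d)"
git init -q work && cd work
echo "line one" > notes.txt
git add notes.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "base"

echo "line two" >> notes.txt
git diff > ../changes.patch        # the patch to ship instead of a manifest
git checkout -q -- notes.txt       # pretend we are now on the target tree

patch -p1 --dry-run < ../changes.patch   # verify it applies; touches nothing
patch -p1 < ../changes.patch             # apply for real once the dry run is clean
```

If the target files had drifted since the patch was made, the dry run reports the failing hunks before anything is modified.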
If this is similar to what you're looking for, I can elaborate on my process.