Problem with SQL statements in Pervasive database - pervasive

The statement
select * from MWlog
works correctly, while the statement
select * from MWlog where Probe= '230541'
produces the following error:
select * from MWlog where Probe= '230541'
[LNA][Zen][SQL Engine][Data Record Manager]The application encountered an I/O error(Btrieve Error 2)
Hint: we are using this Zen version:
Zen Control Center
Zen Install Version 14.10.035.
Java Version 1.8.0_222.
Copyright © 2020 Actian Corporation
The same error occurs with an older version.

If it only occurs when using the 'Probe' field, it is possible the index is corrupt. I would suggest rebuilding the file. You can use the BUTIL -CLONE and -COPY commands or the Rebuild Utility. I prefer BUTIL -CLONE/COPY.
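For example, from a command prompt on the server (a sketch only; the file names here are hypothetical, so substitute your actual data file, and check the BUTIL syntax against your Zen version's documentation):
butil -clone MWLOG_NEW.MKD MWLOG.MKD
butil -copy MWLOG.MKD MWLOG_NEW.MKD
-CLONE creates an empty file with the same definition as the original, and -COPY then reads every record from the old file and inserts it into the new one, rebuilding the indexes in the process. Once that succeeds, back up the original file and rename the new one into its place.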
Status 2 is defined (at https://docs.actian.com/zen/v14/index.html#page/codes%2F1statcod.htm%23ww178919) as:
2: The application encountered an I/O error. This status code typically indicates a corrupt file or an error while reading from or writing to the disk. One of the following has occurred:
• The file is damaged, and you must recover it. See Advanced Operations Guide for more information on recovering files.
• For pre-v6.0 data files, there is a large pre-image file inside a transaction, and there is not enough disk space for a write to the pre-image file.
• For pre-v6.0 data files, there is one pre-image file for multiple data files. For example, if you name the data files customer.one and customer.two, both files have pre-image files named customer.pre.
• For pre-v6.0 data files that are larger than 768 MB, there is a conflict among locking mechanisms. The file has not been corrupted. Your application can retry the operation until the conflict is resolved (when the competing application releases the lock your application requires).
• A pre-v6.0 Btrieve engine attempted to open a v6.x or later MicroKernel file.
• With Btrieve for Windows NT Server Edition v6.15.445, a 32-bit Windows application may return Status 2 or "MKDE Terminated with Service Specific Error 0" after running an application for an extended period of time.

Related

Not able to checkout or checkin a file in ClearCase

I am not able to deliver one of the files in ClearCase from the dev stream to the int stream. It failed, and I did an undo checkout on the int stream, which created a zero version. Now I cannot check out the file; it says:
Error: checked out the file but could not copy data. unknown vob error.
I checked out and overwrote it with another file and tried to check it in, and it says:
checkin failed: not a BDTM container
I tried to delete the zero version and the branch, and it says:
cannot delete - not a BDTM container
I also cannot open the file; it says "no such file or directory". I can see versions on other branches, but not this zero version.
I found this IBM support page, but it did not help me:
http://www-01.ibm.com/support/docview.wss?uid=swg21294695
Please advise.
A Binary Delta Type Manager (BDTM) container has a couple of details you need to be aware of:
It is stored as a gzipped file containing delta data, which is partially text and partially binary.
There is one container per branch with an existing version > 0.
The ...\branch\0 version (with the exception of \main\0) actually uses the container of the parent branch.
So, what does this mean in reality? You have at least one corrupt source container.
If you are checking out ...\branch\0, it is the container for the PARENT branch that is damaged. Do a cleartool dump on the parent version to get the container path, then skip down to the examination steps below.
If you are checking out ...\branch\1 or later, dump the version you are trying to check out to get the source container path. Then...
Examine the file data and metadata (a command sketch follows this list):
If it's < 47 bytes (the minimum size of a .gz file), it is corrupt. If it's 0 bytes, then something interfered with the checkin process.
If it is larger, attempt to decompress it using gzip -d < {file path} > temp.txt, then open the file in a text editor. It should have a header containing a lot of version OIDs and the like...
If gzip errors out without decompressing the file at all, open the file and examine its contents. It likely does not contain compressed data.
If gzip errors out with data integrity or premature-EOF errors, you most likely have a filesystem issue that damaged the file.
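Putting those steps together, a rough shell sketch (the version and container paths here are hypothetical; take the real container path from the cleartool dump output):
# Get the source container path for the affected version
cleartool dump /vobs/myvob/src/foo.c@@/main/int/1
# Check the on-disk size of that container (< 47 bytes means corrupt)
ls -l /vobstore/myvob.vbs/s/sdft/2a/1b/0-3f2c...-aa
# Try to decompress it and inspect the header
gzip -d < /vobstore/myvob.vbs/s/sdft/2a/1b/0-3f2c...-aa > temp.txt
head temp.txt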
The resolution will be either to replace the container from a backup or a remote replica, or to "ndata" the versions. This is not something that is best discussed on StackOverflow, but rather through a PMR.

OmniMark file processing fails

We have an OmniMark script that takes a 2 GB SGML file as input and outputs a file that is around 2.2 GB. The script is called from a Unix shell script, and we are facing an issue where sometimes the script runs successfully and sometimes it just aborts with no error... Any ideas or suggestions on how to debug this?
I have seen this type of issue before when running OmniMark v5.3, where the script bombs due to a lack of server resources/memory.
If you've specified writing to a log file, e.g. using -log logfilename.txt, then you would see something like an error code #3000 "insufficient memory error".
http://developers.omnimark.com/docs/html/error/3000.htm
If there is no log file, then the initial step would be to run the script in a console session so that any such abort message is visible.
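For instance, something along these lines (the file names are placeholders, and apart from -log, which is mentioned above, the exact option names vary between OmniMark versions, so check your documentation):
omnimark -s convert.xom input.sgml -of output.sgml -log run.log
Running it by hand like this, rather than through the wrapper shell script, also rules out the wrapper swallowing the abort message. While you're in that console, it is also worth comparing ulimit -a on a server where it works and one where it doesn't, since a 2 GB input can easily hit a per-process memory or file-size limit.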
Stilo have a page listing fixes in various versions of OmniMark:
http://developers.omnimark.com/docs/html/concept/806.htm
This mentions a variety of memory-related issues in various versions of the software (e.g. use of certain translate rules) which may help some investigation.
Alternatively, you could add debug logging to the script, with a global switch to turn debugging on or off so that you don't waste further I/O resources when you don't need it. The debug log file should be unbuffered. At certain breakpoints in the script, add a message. The more verbose the better for narrowing down where/when the error occurs, but given the size of the file, I suspect it's an I/O or memory error.
It also depends on which version of OmniMark you're using.

HSQL data file sparse?

I have a situation where my HSQL database has stopped working, citing that it has exceeded the size limit:
org.hsqldb.HsqlException: error in script file line: 41 file input/output error org.hsqldb.HsqlException: wrong database file version: requires large database support opening file - file hsql/mydb.data;
When I checked the size of the data file using "du -sh", it was only around 730 MB, but when I performed an "ls -alh", it gave me a shocking size of 16 GB, which explains why HSQL probably reports it as a 'large database'. So the data file seems to be a "sparse file".
But nobody changed the data file to a sparse file. Does HSQL maintain the data file as a sparse file, or has the file system marked it as sparse?
How do I work around this to get my HSQL database back without corrupting the data in it? I was thinking of using the hsqldb.cache_file_scale property, but that would still mean I would hit the problem again when the file grows to 64 GB.
Just in case it matters, I am running it on a Debian 3.2 box and it runs on Java 7u25.
You need to perform CHECKPOINT DEFRAG to compact the file from time to time.
When a lot of data is deleted, the space in the .data file is lost. The above command rewrites the current .data file to a much smaller new file.
If the file has already grown very large, or if you need to have a huge database, you can connect with the hsqldb.large_data=true property that enables very large databases.
http://hsqldb.org/doc/2.0/guide/dbproperties-chapt.html
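As a sketch, the defrag is a plain SQL statement you can issue from whatever client you normally use:
CHECKPOINT DEFRAG
and the large-data property can go straight into the JDBC connection URL (the database path here is taken from your error message):
jdbc:hsqldb:file:hsql/mydb;hsqldb.large_data=true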

Database errors in Quantum Grid demos in Delphi XE Professional

Whenever I open one of the Quantum Grid demos in Delphi XE Pro (on Windows 7 32-bit), the following error is displayed for every table (I think) in the project:
(error message screenshot: http://www.tranglos.com/img/qgerror.png)
The message is:
Network initialization failed.
File or directory does not exist.
File: C:\PDOXUSRS.NET
Permission denied.
Directory: C:\.
I understand permission issues writing to C:\, but the result is that while I can build and run the demo projects, no data is displayed, which makes the demos rather useless. And what kind of database writes its configuration to the C:\ directory in the 21st century anyway? :) (Yes, I know very little about Paradox databases, but I won't ever be using one either. I just want to learn how to use the grid.)
Using BDE Administrator I've tried changing the Paradox "NET DIR" value to a folder with write permissions on the C drive. Result: now the database tables cannot find their data:
Path not found.
File: C:..\..\Data\GENRES.DB.
...and the unhelpfully truncated path gives no indication of where the files are expected to be.
Is there a way to work around the problem so that the demos can load their sample data correctly?
Did you install the BDE correctly? It should use the DBDEMOS files. Do you see such an alias in the BDE administration utility? Can you open that database in one of the Delphi demos?
The BDE is not a 21st-century database; it was developed twenty years ago and has not been upgraded in a long time. It's an obsolete technology, but because it still comes with every release of Delphi, along with a known sample database, it is often used in demos, since nothing new has to be installed.
Anyway, that file is not its configuration file. It's a sharing lock file that allows more than one user to use the database concurrently. Because it is a file-based database without a central server, it has to use this kind of shared file. Usually its location is changed to a network share, but it defaults to C:\ for historical reasons.
Anyway, it's not only the BDE that still attempts to write to the wrong directories. I still see a whole bunch of applications attempting to write to C:\ (especially logs) or other read-only locations.
Using BDE Admin to change the location for PDOXUSRS.NET helped, but it wasn't sufficient. DevExpress did the right thing in specifying a relative folder for the data location, and the relative folder seems perfectly all right, but for some reason the DB can't find it.
Solution: under the \Demos\ folder find all the *.dfm files that contain the string
..\..\Data
and replace that string with the absolute path to the demos folder. That done, all the demos open correctly.
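For example, from a command prompt in the demos directory (a sketch; findstr's /s option searches subdirectories, /m prints only the names of matching files, and the replacement path is just an illustration of wherever you installed the demos):
findstr /s /m /c:"..\..\Data" *.dfm
Then edit each listed .dfm file and change ..\..\Data to something like C:\DevExpress\Demos\Data.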
I know this message from our own applications. It has to do with security measures introduced with Windows Vista. The operating system, trying to protect critical files, denies access to them. There is a way to bypass this mechanism without compromising security: try running your application in compatibility mode. When an application is running in compatibility mode, read/write operations from/to system folders are redirected to "safe" directories located in C:\Users\[Current User]\AppData\Local\VirtualStore.
More info at http://www.windowsecurity.com/articles/Protecting-System-Files-UAC-Virtualization-Part1.html.

shell script for password-protected zip file creation

It's a web application, a statement generation & reporting system dealing in huge volumes of data, up to terabytes. In this application we are using shell scripting to create password-protected zip files. When we tested this application on our development server, it worked fine. The zip file creation command works properly on some servers but not on another server with similar hardware and OS. If we use files with huge sizes, or folders containing more than 400 files, the command fails. Any pointers, please?
Some file systems have limits on the size of files (usually 2 GB or 4 GB) and sometimes on the number of files in a directory. It might be worth looking at what the underlying file systems are on the working and non-working servers, to see if there's a pattern.
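One concrete suspect: classic zip archives are capped at 4 GB (and 65,535 entries) unless the tool supports the Zip64 extension, which Info-ZIP added in zip 3.0. A quick check outside the application (a sketch; the archive and folder names are placeholders) would be:
# zip -v with no other arguments prints the version and compile options
zip -v
# -r recurses into the folder, -e prompts for a password and encrypts the entries
zip -r -e statements.zip /path/to/statements/
If the working and failing servers report different zip versions, or the failing one was built without Zip64 support, that would explain the behavior with huge files.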
