I have a situation where my HSQL database has stopped working, reporting that it has exceeded its size limit:
org.hsqldb.HsqlException: error in script file line: 41 file input/output errorerror org.hsqldb.HsqlException: wrong database file version: requires large database support opening file - file hsql/mydb.data;
When I checked the size of the data file using "du -sh" it was only around 730 MB, but "ls -alh" reported a shocking 16 GB, which explains why HSQL probably treats it as a 'large database'. So the data file seems to be a sparse file.
But nobody changed the data file into a sparse file. Does HSQL maintain the data file as a sparse file, or has the file system marked it as sparse?
How do I work around this to get my HSQL database back without corrupting the data in it? I was thinking of using the hsqldb.cache_file_scale property, but that would still mean hitting the problem again when the file grows to 64 GB.
Just in case it matters, I am running it on a Debian 3.2 box and it runs on Java 7u25.
You need to perform CHECKPOINT DEFRAG to compact the file from time to time.
When a lot of data is deleted, the space in the .data file is lost. The above command rewrites the current .data file to a much smaller new file.
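For example, from any SQL client connected to the database (SqlTool, or a JDBC connection of your own):

-- Rewrites the .data file into a new, compacted file, reclaiming the space
-- left behind by deleted rows. Run it periodically or after large deletes.
CHECKPOINT DEFRAG;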
If the file has already grown very large, or if you need to have a huge database, you can connect with the hsqldb.large_data=true property that enables very large databases.
http://hsqldb.org/doc/2.0/guide/dbproperties-chapt.html
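For example, a file-mode JDBC URL along these lines (the database path is just taken from the error message above, adjust it to your setup):

jdbc:hsqldb:file:hsql/mydb;hsqldb.large_data=true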
Related
When I load files into a Snowflake stage, I see a difference in the number of bytes loaded compared to the files on my local system. Does anyone know the reason for this, and how it can be resolved?
The file size on my local system is 16622146 bytes, but after loading into the stage it shows as 16622160 bytes. I have checked with both .csv and .txt file types. (I know the .txt file type is not supported in Snowflake.)
I compressed the file and loaded it into the Snowflake stage with the PUT command in SnowSQL.
When loading small files from the local file system, Snowflake automatically compresses the file. Please try that option.
Refer to this section of the documentation
https://docs.snowflake.com/en/user-guide/data-load-prepare.html#data-file-compression
This will help you avoid any data-corruption issues during compression.
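For example, a PUT along these lines in SnowSQL (the stage name and local path here are placeholders):

PUT file:///tmp/mydata.csv @my_stage AUTO_COMPRESS=TRUE;

With AUTO_COMPRESS=TRUE (the default) a plain text file is stored in the stage gzip-compressed, so the byte count shown for the staged file will not match the size of the original file on disk.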
Thanks
Balaji
In my embedded system I am using a CH376 IC (PDF link) for file handling. I am able to detect a flash disk, but not able to read an Excel file created by Microsoft Excel. The Excel file is created on the PC and copied to the flash disk.
I want to create a database in an Excel file on the PC and, after creating it, upload it to my embedded system; to do this I need to read the file that was created.
Please help me to read the file.
The .xls and .xlsx file formats are both extremely complex. Parsing them is unlikely to be feasible in an embedded environment. (In particular, .xlsx is a PKZIP archive containing XML data -- you will need a minimum of 32 KB of SRAM just to decompress the file containing the cell data, and even more to parse it.)
Use a different file format. Consider using .csv, for instance -- it's just a text file, with one row of data on each line, so it's pretty straightforward to work with.
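A minimal sketch of splitting one CSV record in C, assuming you already have a routine that reads a line of the file from the CH376 into a RAM buffer (quoted fields and embedded commas are deliberately not handled, which keeps the parser tiny):

#include <string.h>

#define MAX_FIELDS 16

/* Split one CSV line in place.
   Returns the number of fields found; fields[] points into `line`. */
static int csv_split(char *line, char *fields[], int max_fields)
{
    int count = 0;
    char *p = line;

    while (count < max_fields) {
        fields[count++] = p;
        p = strchr(p, ',');
        if (p == NULL)
            break;
        *p++ = '\0';            /* terminate the previous field */
    }
    /* strip a trailing CR/LF from the last field */
    fields[count - 1][strcspn(fields[count - 1], "\r\n")] = '\0';
    return count;
}

Each row of the spreadsheet, saved as CSV from Excel, then becomes one call to this function.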
I have a database file with a .DB file extension. I have been googling and it looks like SQLite. I tried to connect to it using the SQLite and SQLite3 drivers, and I am getting the error "File is encrypted or not a database".
So I don't know whether the file is encrypted or simply not an SQLite database. Are there any other options for what the .DB extension could be? How do I find out whether the file is encrypted?
I tried to open it in a text editor and it is mostly a mess of characters, with a few readable words here and there. I have uploaded the file here: http://cl.ly/3k0E01373r3v182a3p1o for a closer look.
Thank you for any hints and ideas on what to do and how to work with this file.
Marco Pontello's TrID is a great way to determine the type of any file.
TrID is simple to use. Just run TrID and point it to the file to be analyzed. The file will be read and compared with the definitions in the database. Results are presented in order of highest probability.
Just download the executable and the latest definitions file into the same directory and then run TrID:
trid.exe "path/to/file.xyz"
It will output a list of possible file types for the file, each with a confidence rating.
There's also a GUI version called TrIDNet.
If you're on a Unix-like platform (Mac OS X, Linux, etc), you could try running file myfile.db to see if that can figure out what type of file it is. The file utility will inspect the beginning of the file, looking for any clues like magic numbers, headers, and so on to determine the type of the file.
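For example (the exact output depends on the version of file installed):

file mydb.db
mydb.db: SQLite 3.x database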
Look at the first 30 bytes of the file (open it in Notepad, Notepad++ or another simple text viewer). There's usually some kind of tag or extension name in there.
Both SQLite 2 and SQLite 3 have a very clear message: "SQLite format 3" for SQLite 3 (obviously) and "This file contains an SQLite 2.1 database" for SQLite 2.
Note that encrypted SQLite databases don't have a header like that, since the entire file is encrypted.
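If you have access to a Unix-like shell, you can also dump the start of the file from the command line instead of opening it in an editor, for example:

head -c 16 mydb.db

An SQLite 3 database begins with the 16-byte header string "SQLite format 3" followed by a NUL byte, so those characters should appear if the file is an unencrypted SQLite 3 database.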
On a Unix-like system (or Cygwin under Windows), the strings utility will search a file for strings, and print them to stdout. Might help you narrow the field.
There are a lot of programs besides database programs that use a "db" extension, including
ArcView Object Database File (ESRI)
MultiEdit
Netscape
Palm
and so on. Google "file extensions" for some sites that catalog file extensions and the programs that use them.
There's no conclusive way to know, because SQLite encrypts the entire database file, including the header.
Further, there's not a lot of difference to you, except for possible error text to a user if you're prompting them for a password.
It's a web application, a statement generation and reporting system dealing with huge volumes of data, up to terabytes. In this application we use shell scripting to create a password-protected zip file. When we tested the application on our development server it worked fine. The script and the zip creation command work properly on some servers but fail on another server with similar hardware and OS. If we use files of huge sizes, or folders containing more than 400 files, the command fails. Any pointers, please?
Some file systems have limits on the size of files (usually 2 GB or 4 GB) and even on the number of files in a directory. It might be worth looking at what the underlying file systems are on the working and non-working servers, to see if there's a pattern.
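For example, to see which file system the zip output directory lives on (the path here is a placeholder), run this on both the working and the failing server and compare:

df -T /path/to/zip/output/dir

A vfat/FAT32 file system, for instance, cannot hold a file of 4 GB or larger, which would match the failures you see with very large archives.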
Is there a good way to analyze VMware delta VMDK files between snapshots to list changed blocks, so one can use a tool to tell which NTFS files are changed?
I do not know of a tool that does this out of the box, but it should not be that difficult.
The VMDK file format specification is available and the format is not that complex. As far as I remember, a VMDK file consists of a lot of 64 KB blocks. At the beginning of the VMDK file there is a directory that records where each logical block is stored in the physical file.
It should be pretty easy to detect whether a logical block is stored in both files and then compare the data in the two versions of the VMDK file.
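A rough, untested sketch in C of walking the metadata of a hosted sparse extent to list which grains are present in a delta file (the struct fields follow the public VMDK specification; it assumes a little-endian host and that the delta really is a hosted sparse extent, not an ESX-style sparse disk; error handling is mostly omitted):

/* list_grains.c - print which grains are allocated in a delta VMDK.
   Compile: cc -o list_grains list_grains.c */
#define _POSIX_C_SOURCE 200112L
#define _FILE_OFFSET_BITS 64
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#pragma pack(push, 1)
typedef struct {
    uint32_t magicNumber;       /* 'KDMV' (0x564d444b) */
    uint32_t version;
    uint32_t flags;
    uint64_t capacity;          /* disk size, in 512-byte sectors */
    uint64_t grainSize;         /* grain size in sectors (usually 128 = 64 KB) */
    uint64_t descriptorOffset;
    uint64_t descriptorSize;
    uint32_t numGTEsPerGT;      /* entries per grain table (usually 512) */
    uint64_t rgdOffset;         /* redundant grain directory */
    uint64_t gdOffset;          /* grain directory offset, in sectors */
    uint64_t overHead;
    /* remaining fields of the 512-byte header are not needed here */
} SparseExtentHeader;
#pragma pack(pop)

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s delta.vmdk\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    SparseExtentHeader h;
    if (fread(&h, sizeof h, 1, f) != 1 || h.magicNumber != 0x564d444b) {
        fprintf(stderr, "not a hosted sparse extent\n");
        return 1;
    }

    uint64_t grainsPerGT = h.numGTEsPerGT;
    uint64_t totalGrains = (h.capacity + h.grainSize - 1) / h.grainSize;
    uint64_t numGTs      = (totalGrains + grainsPerGT - 1) / grainsPerGT;

    /* The grain directory is an array of 32-bit sector offsets, one per grain table. */
    uint32_t *gd = malloc(numGTs * sizeof *gd);
    uint32_t *gt = malloc(grainsPerGT * sizeof *gt);
    fseeko(f, (off_t)h.gdOffset * 512, SEEK_SET);
    fread(gd, sizeof *gd, numGTs, f);

    for (uint64_t t = 0; t < numGTs; t++) {
        if (gd[t] == 0) continue;                 /* grain table not present */
        fseeko(f, (off_t)gd[t] * 512, SEEK_SET);
        fread(gt, sizeof *gt, grainsPerGT, f);
        for (uint64_t g = 0; g < grainsPerGT; g++) {
            uint64_t grain = t * grainsPerGT + g;
            if (grain >= totalGrains) break;
            if (gt[g] != 0)                       /* non-zero entry: grain exists in the delta */
                printf("grain %llu changed (guest offset %llu bytes)\n",
                       (unsigned long long)grain,
                       (unsigned long long)(grain * h.grainSize * 512));
        }
    }
    free(gt);
    free(gd);
    fclose(f);
    return 0;
}

Running something like this on the delta VMDK of a snapshot would list the guest byte ranges written after the snapshot was taken; mapping those ranges back to NTFS files then still requires parsing the NTFS structures (e.g. the $MFT) inside the guest file system.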