I have two hard disks: C (40GB free) and D (1TB free).
My sqlite folder (the SQLite3 Windows download files from the tutorial) is on disk D.
I created a database called myDatabase.db in the sqlite folder, created a table in it, and populated the table from a CSV file. This worked: I ran a few queries and they returned results.
The database is quite large (50GB) and I want to create an index for my table. I run the CREATE INDEX command and it starts: it creates a myDatabase.db-journal file next to the .db file.
However, from the "This PC" view of the hard drives I can see that disk C is being drained (from 40GB down to 39, 38, and so on), while myDatabase.db on drive D is not getting any bigger.
I don't want SQLite to use C when it doesn't make sense to, since both SQLite and the .db file are on disk D.
Any suggestions why this is happening?
Thanks in advance for your time.
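For what it's worth: if the cause is SQLite writing its temporary sort files for CREATE INDEX to the system temp directory (which normally lives on C:), a sketch of redirecting them might look like the following. The paths, table name, and column name are hypothetical, and PRAGMA temp_store_directory is deprecated, so treat this as an assumption to verify rather than a definitive fix:
import os
import sqlite3

# Assumption: pointing the process temp directory at D: before connecting
# makes SQLite place its index-build spill files there on Windows.
os.environ["TMP"] = r"D:\sqlite\tmp"

conn = sqlite3.connect(r"D:\sqlite\myDatabase.db")

# Deprecated but still honored by many builds: set the temp directory
# explicitly for this connection.
conn.execute("PRAGMA temp_store_directory = 'D:\\sqlite\\tmp'")

# Hypothetical table/column names; the temporary files should now land on D:.
conn.execute("CREATE INDEX IF NOT EXISTS idx_mytable_col ON mytable(col)")
conn.commit()
conn.close()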
I have a .d1 file and an old version of the .db file for the same database.
When I open the .d1 file in a text editor, its content is readable, so it seems the database can be regenerated.
First, I used only the .d1 file and ran this command under proenv:
prostrct builddb c:\db\myDb
the generated .db file contains only the string:
0020
and nothing else, whereas my standard .db files contain 640 KB of data each.
I tried to unlock my database using:
proutil unlock c:\db\myDb -extents
and pressed y at the recovery question. The output is:
** Database has the wrong version number. (db: 0, pro: 150). (44)
When I put the old .db file I have for the same database in place and ran the same command:
proutil unlock c:\db\myDb -extents
and pressed y at the recovery question, the output was:
Database c:\db\myDb uses 32-bit dbkeys. It cannot be unlocked by this codebase. (13888)
Use the 10.1A prostrct utility to unlock this database. (13889)
I haven't seen the 10.1A version in more than 10 years now, and I'm pretty sure the version of the .d1 file is 10.2A.
If you can direct me to a way to recover the database, it would be much appreciated.
If the .d1 file is truly 10.2A, you are using 10.2A to try to open it, and the only thing missing is the .db file, then you can properly recreate the .db file with:
prostrct builddb dbname
But if you are missing other critical files (such as the .b1 file) this will not work.
If you really only have the .d1 file then you almost certainly do not have enough pieces to work with.
One of my databases has multiple log files:
log.ldf - 40 GB (on D: drive)
log2.ldf - 70 GB (on S: drive)
log3.ldf - 100 GB (on L: drive)
Which log file will SQL Server pick first? Does SQL Server follow any order when picking a log file? Can we control this?
I believe you can't control which file the LOG info will be written into.
You should not concentrate on the BIGGEST file, but on the FASTEST.
General advice would be to have ONLY two files:
- First file: as big as possible, on the FASTEST drive (an SSD). Set MAXSIZE to the file size so it won't grow anymore.
- Second file: as small as possible, on a big drive where it can grow in case the first file fills up.
Your task would then be to monitor the size of the second file; if it starts to grow, make log backups more often and shrink that file back.
If you want to see how your log files are used, you can use the following DBCC command:
DBCC LOGINFO ();
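If you want to automate that monitoring, here is a minimal sketch that polls the log file sizes from Python via pyodbc. The driver name, server, and database are assumptions to replace with your own:
import pyodbc

# Assumed connection details; substitute your own server and database.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=MyDb;Trusted_Connection=yes"
)
cur = conn.cursor()

# sys.database_files reports size in 8 KB pages, so size * 8 / 1024 is MB.
cur.execute(
    "SELECT name, size * 8 / 1024 AS size_mb "
    "FROM sys.database_files WHERE type_desc = 'LOG'"
)
for name, size_mb in cur:
    print(name, size_mb, "MB")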
I'm trying to rename a log file named appname.log into the form appname_DDMMYY.log for archiving purposes, and recreate an empty appname.log for further writing. When doing this on Windows 7 using C++ and either WinAPI or Qt calls (which may be the same internally), the newly created .log file strangely inherits the timestamps (last modified, created) from the renamed file.
This behaviour can also be observed when renaming a file in Windows Explorer and creating a file with the same name shortly afterwards in the same directory. But it has to be done fast: right after clicking "New Text File" the timestamps are normal, but after the rename they change to the timestamps the renamed file had (or still has).
Is this some sort of bug? How can I rename a file and recreate it shortly afterwards without getting the timestamps messed up?
This looks like it is by design, perhaps to preserve the creation time for "atomic saving". If an application does something like (save to temp, delete original, rename temp to original) to eliminate the risk of a mangled file, then every time you saved a file its creation time would advance; a file you had been editing for years would appear to have been created today. This kind of save pattern is very common.
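For illustration, a minimal sketch of that save pattern in Python (the helper name and error handling are just illustrative):
import os
import tempfile

def safe_save(path, data):
    # Write the new contents to a temp file in the same directory,
    # then atomically replace the original (same-volume rename).
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp, path)
    except OSError:
        os.unlink(tmp)
        raise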
https://msdn.microsoft.com/en-us/library/windows/desktop/ms724320(v=vs.85).aspx
If you rename or delete a file, then restore it shortly thereafter, Windows searches the cache for file information to restore. Cached information includes its short/long name pair and creation time.
Notice that modification time is not restored. So after saving, the file appears to have been modified, while the creation time is the same as before.
If you create "a-new" and rename it back to "a" you get the old creation time of "a". If you delete "a" and recreate "a" you get the old creation time of "a".
This behaviour is called "file tunneling". File tunneling exists "...to enable compatibility with programs that rely on file systems being able to hold onto file meta-info for a short period of time". It is basically backward compatibility for older Windows programs that use a "safe save" function: saving the new contents to a temp file, deleting the original, and then renaming the temp file to the original name.
Please see the following KB article: https://support.microsoft.com/en-us/kb/172190 (archive)
As a test example: create FileA, rename FileA to FileB, then create FileA again (within 15 seconds), and its creation date will be the same as FileB's.
This behaviour can be disabled in the registry as per the KB article above. This behaviour is also quite annoying when "forensicating" Windows machines.
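As a hedged sketch (assuming the MaximumTunnelEntries value described in that KB article), disabling it from Python could look like this; it needs administrator rights and a reboot to take effect:
import winreg

# Per KB 172190: setting MaximumTunnelEntries to 0 disables tunneling.
key = winreg.CreateKeyEx(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Control\FileSystem",
    0,
    winreg.KEY_SET_VALUE,
)
winreg.SetValueEx(key, "MaximumTunnelEntries", 0, winreg.REG_DWORD, 0)
winreg.CloseKey(key)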
Here's a simple Python script that reproduces the issue on my Windows 7 64-bit system:
import time
import os

def touch(path):
    # Create the file if it doesn't exist and update its timestamps.
    with open(path, 'ab'):
        os.utime(path, None)

touch('a')
print(" 'a' timestamp: ", os.stat('a').st_ctime)
os.rename('a', 'a-old')
time.sleep(15)
touch('a')
print("new 'a' timestamp: ", os.stat('a').st_ctime)
os.unlink('a')
os.unlink('a-old')
With a sleep time of ~15 seconds I get the following output:
'a' timestamp: 1436901394.9
new 'a' timestamp: 1436901409.9
But with a sleep time of <= ~10 seconds one gets this:
'a' timestamp: 1436901247.32
new 'a' timestamp: 1436901247.32
Both files... created 10 seconds apart, have the same creation timestamp!
I have been exploring File Allocation Table recovery for the last couple of weeks. My goal is to locate a possibly deleted file by its signature (for example, a ZIP file by the bytes "50 4B 03 04") and recover the whole thing in order to search inside it.
I've discovered there's a problem with FAT: the file system uses the allocation table entries both for storing cluster chains and for marking deleted files, which makes file recovery, at first sight, impossible.
But there is a lot of recovery software advertising that promises recovery of files deleted from FAT file systems. So, I assume, there might be a workaround.
I've found that we can successfully recover files that are stored contiguously on disk: the first cluster gives us an index, and the index address value gives us a strong possibility of finding the directory entry where the file size is stored. But is that the end of it? I'd like to recover fragmented files as well, but I can't find a way.
Does anyone know a workaround and can help me a bit here, please?
The FAT file system uses a directory entry for each file and folder, which records the starting cluster, filename, date, and size. To access a file, the system looks it up in its directory and notes the starting cluster. Then it goes to the FAT (file allocation table) entry that corresponds to the starting cluster. The starting cluster's entry contains the cluster number of the next cluster; the next cluster's entry points to the one after that, and so on until you reach an end-of-file marker, which means that is the last cluster used by the file.
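As a sketch of that chain walk, assuming the FAT has already been parsed into a Python list where fat[n] holds the 32-bit entry for cluster n:
def cluster_chain(fat, start_cluster):
    # Follow a FAT32 chain from a directory entry's starting cluster.
    # Stop at free/wiped entries (0 and 1), the bad-cluster marker
    # (0x0FFFFFF7), and end-of-chain markers (>= 0x0FFFFFF8).
    cluster = start_cluster
    while 2 <= cluster < 0x0FFFFFF7:
        yield cluster
        cluster = fat[cluster]

# Example: list(cluster_chain(fat, 42)) might give [42, 43, 97, ...]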
When you delete a file or folder, the system locates the directory it resides in, changes the first byte of the file or folder name entry to E5 hex, and frees the file's FAT chain.
That is why, once a file is deleted, you can only recover contiguous files on a FAT system. All data recovery utilities use this method; there is no other, unless you can find traces of the FAT with the correct cluster chains still in place.
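That contiguous-file recovery can be sketched as a signature carve over a raw volume image. The image name, cluster size, and carve size below are assumptions; a real tool would take the cluster size and data-area offset from the boot sector:
SIGNATURE = b"\x50\x4B\x03\x04"  # ZIP local file header, as in the question
CLUSTER_SIZE = 4096              # assumed; read it from the boot sector in practice
CARVE_BYTES = 10 * 1024 * 1024   # upper bound when the true size is unknown

with open("fat_volume.img", "rb") as img:
    data = img.read()  # fine for a sketch; use mmap for large images

count = 0
offset = data.find(SIGNATURE)
while offset != -1:
    # Only hits aligned to a cluster boundary (relative to the data area
    # in a real volume) can be the start of a deleted file.
    if offset % CLUSTER_SIZE == 0:
        with open("carved_%d.zip" % count, "wb") as out:
            out.write(data[offset:offset + CARVE_BYTES])
        count += 1
    offset = data.find(SIGNATURE, offset + 1)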
I have a set of .DAT files alongside a set of .IDX files with the same names.
The goal is to open these files and read their contents, parsing them into a new format. The problem: I have no idea which database the data is stored in! The files contain no headers or clues, they are binary, and the source I received them from has no idea about the storage mechanism either.
So the question is: what are some common databases that store their data in .DAT files and their indexes in .IDX files with the same name? Is there an application for Linux or Windows that can detect the database?
EDIT:
File names:
price.dat
price.idx
Here is a hex dump of the beginning of the .DAT file:
030D04806420500FFE3E0500002078581001C000738054E0C0099804138100402550080442090082403C101F7406010080C0A010201002010C006FC0246C0403FE00B041C051F0091BFE042F812FE054F8177E066F81BFE078F8207E08AF824FE09CF8297E0AEF82DFE0C0F8327E0D2F836FE0E4F83B7E0F6F83FE5FEFF47C06608480FA91F003C0213101F1BFDFE804220100F500D2A00388430801E04028D4390D128B46804024010A067269FCA546003C0844060E11F084B9E1377850
Here is a hex dump of the beginning of the .IDX file:
030D04805820100FFD7E0000397FEB60050410007300246A3060068220009BE0401030088B3903F740E010C80402410281402030094004C708004DC058880FFC052F015EBFE042F812FE054F8177E066F81BFE078F8207E08AF824FE09CF8297E0AEF82DFE0C0F8327E0D2F836FE0E4F83B7E0F6F83FFE108F8447E11AF848FE12CF84D7E13EF851FE150F8567E162F85AFE174F85F7E186F863FE198F8687E1AAF86CFE1BCF8717E1CEF875FE1E0F87A7E1F2F87EF5FEFF005E30901714
Both files start with the bytes 030D0480 - I wonder if this is a good lead?
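For reference, a small sketch of how those leading bytes can be dumped and compared (file names as above):
import binascii

def head(path, n=16):
    # Read the first n bytes of a file.
    with open(path, "rb") as f:
        return f.read(n)

dat = head("price.dat")
idx = head("price.idx")
print("price.dat:", binascii.hexlify(dat).decode().upper())
print("price.idx:", binascii.hexlify(idx).decode().upper())

# Longest common prefix of the two headers.
common = 0
while common < min(len(dat), len(idx)) and dat[common] == idx[common]:
    common += 1
print("shared prefix:", binascii.hexlify(dat[:common]).decode().upper())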
I did a quick search on Google but it didn't return anything...
END EDIT
Any other ideas?
Thanks much in advance!
There is a FairCom ODBC driver called 'ctreeODBC_RO.exe' which should be able to read them.