SQLite3: Delete record without overwriting it with nullbytes - database

According to this explanation[1] of how it is possible to recover deleted records from an SQLite database, two things happen when you delete a record: the area containing the deleted record is added to the free list (as a free block) and the record's header in the content area is overwritten.
The content itself is not overwritten (or only a few bytes are), so, for example, by opening the database file in a hex editor you can still see parts of the deleted record.
So to check whether this is true or not, here’s a minimal example:
sqlite3 example.db
sqlite> CREATE TABLE 'test'(aColumn VARCHAR(25));
sqlite> INSERT INTO test VALUES('AAAAAAAAAA');
sqlite> INSERT INTO test VALUES('BBBBBBBBBB');
sqlite> INSERT INTO test VALUES('CCCCCCCCCC');
sqlite> DELETE FROM test WHERE aColumn = 'BBBBBBBBBB';
What I’m expecting now is that – opening the file with a hex editor – between the CCCC… and the AAAA… there are still some Bs (though some bytes before the Bs, or the first few Bs, should have been overwritten).
What really happens is that the whole area of Bs has been overwritten with null bytes.
This happens with larger databases, too. The content area is always cleared with null bytes, and because of this it is impossible to recover anything.
Is there any option or flag which I have to use to get the results shown in the blogpost?
I need to create my SQLite databases in a way that makes it possible to recover deleted records (or parts of them) with forensic tools.
[1]http://sandersonforensics.com/forum/content.php?222-Recovering-deleted-records-from-an-SQLite-database

Your version of SQLite happens to be compiled with SQLITE_SECURE_DELETE enabled.
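That compile-time option only sets the default, though. If rebuilding SQLite is not an option, the behaviour can also be toggled per connection with the secure_delete PRAGMA; a minimal sketch, run in the same session before the DELETE:
sqlite> PRAGMA secure_delete;        -- returns 1 if secure delete is currently on
sqlite> PRAGMA secure_delete = OFF;  -- deleted content is no longer zeroed out
sqlite> DELETE FROM test WHERE aColumn = 'BBBBBBBBBB';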

Related

How to download more than 100MB of data into CSV from a Snowflake database table

Is there a way to download more than 100MB of data from Snowflake into Excel or CSV?
I'm able to download up to 100MB through the UI by clicking the 'download or view results' button.
You'll need to consider using what we call "unload", a.k.a. COPY INTO LOCATION
which is documented here:
https://docs.snowflake.net/manuals/sql-reference/sql/copy-into-location.html
Other options might be to use a different type of client (python script or similar).
I hope this helps...Rich
.....EDITS AS FOLLOWS....
Using the unload (COPY INTO LOCATION) isn't quite as overwhelming as it may appear to be, and if you can use the SnowSQL client (instead of the web UI) you can "grab" the files from what we call an "internal stage" fairly easily. Example as follows.
CREATE TEMPORARY STAGE my_temp_stage;
COPY INTO @my_temp_stage/output_filex
FROM (select * FROM databaseNameHere.SchemaNameHere.tableNameHere)
FILE_FORMAT = (
TYPE='CSV'
COMPRESSION=GZIP
FIELD_DELIMITER=','
ESCAPE=NONE
ESCAPE_UNENCLOSED_FIELD=NONE
date_format='AUTO'
time_format='AUTO'
timestamp_format='AUTO'
binary_format='UTF-8'
field_optionally_enclosed_by='"'
null_if=''
EMPTY_FIELD_AS_NULL = FALSE
)
overwrite=TRUE
single=FALSE
max_file_size=5368709120
header=TRUE;
ls @my_temp_stage;
GET @my_temp_stage file:///tmp/;
This example:
Creates a temporary stage object in Snowflake, which will be discarded when you close your session.
Takes the results of your query and loads them into one (or more) CSV files in that internal temporary stage, depending on the size of your output. Notice how I didn't create another database object called a "FILE FORMAT"; it's considered a best practice to do so, but you can do these one-off extracts without creating that separate object if you don't mind the command being this long (see the sketch after this list).
Lists the files in the stage, so you can see what was created.
Pulls the files down using GET. In this case it was run on my Mac and the file(s) were placed in /tmp; if you are using Windows you will need to modify that part a little bit.
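For reference, the named FILE FORMAT variant might look roughly like the sketch below; my_csv_format and the trimmed-down option list are just assumed examples, so adjust them to match the inline options shown above:
CREATE OR REPLACE FILE FORMAT my_csv_format
TYPE='CSV'
COMPRESSION=GZIP
FIELD_DELIMITER=','
FIELD_OPTIONALLY_ENCLOSED_BY='"'
NULL_IF=('')
EMPTY_FIELD_AS_NULL=FALSE;
COPY INTO @my_temp_stage/output_filex
FROM (select * FROM databaseNameHere.SchemaNameHere.tableNameHere)
FILE_FORMAT = (FORMAT_NAME='my_csv_format')
overwrite=TRUE
max_file_size=5368709120
header=TRUE;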

Bulk import only works with 1 column csv file

Whenever I try to import a CSV file with more than one column into SQL Server, I get an error (well, nothing is imported). I know the file is terminated fine because it works OK with 1 column if I modify the file and table. I am limiting the rows so it never gets to the end, and the line terminator is the correct and valid one (also shown by it working when there is only 1 column).
All I get is this and no errors
0 rows affected
I've also checked all the other various questions like this, and they all point to a bad end-of-file or line terminator, but all is well here...
I have tried quotes and no quotes. For example, I have a table with 2 columns of varchar(max).
I run:
bulk insert mytable from 'file.csv' WITH (FIRSTROW=2,lastrow=4,rowterminator='\n')
My sample file is:
name,status
TEST00040697,OK
TEST00042142,OK
TEST00042782,OK
TEST00043431,BT
If I drop a column and then delete the second column in the CSV, ensuring it has the same line terminator \n, it works just fine.
I have also tried specifying the 'errorfile' parameter but it never seems to write anything or even create the file.
Well, that was embarrassing.
SQL Server, in its wisdom, uses \t as the default field terminator even for a CSV file; I guess when the documentation says FORMAT = 'CSV' it's an example and not the default.
If only it produced actual proper and useful error messages...
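In other words, the multi-column import works once the field terminator is specified explicitly (or, on SQL Server 2017 and later, once FORMAT = 'CSV' is used). A minimal sketch based on the statement and sample file from the question:
BULK INSERT mytable
FROM 'file.csv'
WITH (
FIRSTROW = 2,
LASTROW = 4,
FIELDTERMINATOR = ',',  -- override the \t default so both columns are split
ROWTERMINATOR = '\n'
);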

ORA-01401: inserted value too large for column CHAR

I'm pretty new to Oracle SQL and my class is going over bulk loading at the moment. I pretty much get the idea; however, I am having a little trouble getting it to read all of my records.
This is my SQL file:
PROMPT Creating Table 'CUSTOMER'
CREATE TABLE CUSTOMER
(CustomerPhoneKey CHAR(10) PRIMARY KEY
,CustomerLastName VARCHAR(15)
,CustomerFirstName VARCHAR(15)
,CustomerAddress1 VARCHAR(15)
,CutomerAddress2 VARCHAR(30)
,CustomerCity VARCHAR(15)
,CustomerState VARCHAR(5)
,CustomerZip VARCHAR(5)
);
Quick and easy. Now this is my control file to load in the data:
LOAD DATA
INFILE Customer.dat
INTO TABLE Customer
FIELDS TERMINATED BY"|"
(CustomerPhoneKey, CustomerLastName, CustomerFirstName, CustomerAddress1 , CutomerAddress2, CustomerCity, CustomerState, CustomerZip)
Then the data file:
2065552123|Lamont|Jason|NULL|161 South Western Ave|NULL|NULL|98001
2065553252|Johnston|Mark|Apt. 304|1215 Terrace Avenue|Seattle|WA|98001
2065552963|Lewis|Clark|NULL|520 East Lake Way|NULL|NULL|98002
2065553213|Anderson|Karl|Apt 10|222 Southern Street|NULL|NULL|98001
2065552217|Wong|Frank|NULL|2832 Washington Ave|Seattle|WA|98002
2065556623|Jimenez|Maria|Apt 13 B|1200 Norton Way|NULL|NULL|98003
The problem is that only the last record
2065556623|Jimenez|Maria|Apt 13 B|1200 Norton Way|NULL|NULL|98003
is being loaded. The rest end up in my bad file.
So I took a look at my log file, and the errors I'm getting are:
Record 1: Rejected - Error on table CUSTOMER, column CUSTOMERZIP.
ORA-01401: inserted value too large for column
Record 2: Rejected - Error on table CUSTOMER, column CUSTOMERZIP.
ORA-01401: inserted value too large for column
Record 3: Rejected - Error on table CUSTOMER, column CUSTOMERZIP.
ORA-01401: inserted value too large for column
Record 4: Rejected - Error on table CUSTOMER, column CUSTOMERZIP.
ORA-01401: inserted value too large for column
Record 5: Rejected - Error on table CUSTOMER, column CUSTOMERZIP.
ORA-01401: inserted value too large for column
Table CUSTOMER: 1 Row successfully loaded. 5 Rows not loaded due
to data errors. 0 Rows not loaded because all WHEN clauses were
failed. 0 Rows not loaded because all fields were null.
On to the question. I see that CustomerZip is the problem, and initially I had it as CHAR(5). I did this because my understanding of the data type is that for numeric values like a zip code, where I would not be doing arithmetic operations on it, it would be better to store it as CHAR. Also, I did not use VARCHAR2(5) initially because, seeing as it is a zip code, I don't want the value to vary; it should always be 5. Now maybe I'm just misunderstanding this, so if there is anyone who can clear that up, that would be awesome.
My second question is: how do I fix this problem? Given the above understanding of these data types, it doesn't make sense why neither CHAR(5) nor VARCHAR2(5) works, as I am getting the same errors for both.
It makes even less sense that one record (the last one) actually works.
Thank you for the help in advance
Your data file has extra, invisible characters. We can't see the original but presumably it was created in Windows and has CRLF new line separators; and you're running SQL*Loader in a UNIX/Linux environment that is only expecting line feed (LF). The carriage return (CR) characters are still in the file, and Oracle is seeing them as part of the ZIP field in the file.
The last line doesn't have a CRLF (or any new-line marker), so on that line, and only that line, the ZIP field is being seen as 5 characters. For all the others it's being seen as six, e.g. 98001^M.
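A quick way to convince yourself of this (a hypothetical check, not part of the loader run itself): a trailing carriage return makes the value one character longer than the column allows.
SELECT LENGTH('98001' || CHR(13)) AS zip_len FROM dual;  -- returns 6, too large for CHAR(5) or VARCHAR2(5)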
You can read more about the default behaviour in the documentation:
On UNIX-based platforms, if no terminator_string is specified, then SQL*Loader defaults to the line feed character, \n.
On Windows NT, if no terminator_string is specified, then SQL*Loader uses either \n or \r\n as the record terminator, depending on which one it finds first in the data file. This means that if you know that one or more records in your data file has \n embedded in a field, but you want \r\n to be used as the record terminator, then you must specify it.
If you open the data file in an editor like vi or vim, you'll see those extra ^M control characters.
There are several ways to fix this. You can modify the file; the simplest way to do that is to copy and paste the data into a new file created in the environment you'll run SQL*Loader in. There are utilities to convert line endings if you prefer, e.g. dos2unix. Or your Windows editor may be able to save the file without the CRs. You could also add an extra field delimiter to the data file, as Ditto suggested.
Or you could tell SQL*Loader to expect CRLF by changing the INFILE line:
LOAD DATA
INFILE Customer.dat "str '\r\n'"
INTO TABLE Customer
...
... though that will then cause problems if you do supply a file created in Linux, without the CR characters.
There is a utility, dos2unix, that is present on almost all UNIX machines. If you run it, you can output the data file with the DOS/Windows CRLF combination stripped.
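For example (assuming the data file keeps the Customer.dat name used in the control file above), converting it in place before re-running SQL*Loader is a one-liner:
dos2unix Customer.dat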

Cannot update, READ-ONLY - Visual Foxpro

This is my code
CURSORSETPROP("Buffering",4)
SELECT course
APPEND BLANK
replace course.name with thisform.txt_course.value
This works fine during development of the project; however, when I compile the project to an .exe file, I can't write to the table and it raises the error
Cannot update the cursor course, since it is read-only
I've managed to find what I thought was the solution, which is the code below:
CURSORSETPROP("Buffering",4)
SELE * FROM course INTO CURSOR cCourse READWRITE
APPEND BLANK
replace cCourse.name with thisform.txt_course.value
But the problem is that this code does not write to the table. I mean, there was no error about the table being read-only, but no additional record is added/appended to the table "course".
Can anyone lend me a hand here? I have no idea where to find a solution.
VFP error 111 itself, "Cannot update the cursor ...", can be caused by a DBF table being included in the VFP Project Manager. If so, you can right-click it in the PM and choose "Exclude".
There is, however, a mistake in the code you posted: the CURSORSETPROP line affects the alias in the current "work area", which could be any alias or even none, unless you use the cAlias parameter of the function.
(Or, as a second-best way, do the SELECT before the CURSORSETPROP().)
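A minimal sketch of the first suggestion, assuming the table is already open under the alias course as in the question (note that with table buffering the change typically also has to be committed with TABLEUPDATE()):
SET MULTILOCKS ON                          && required for table buffering (modes 4 and 5)
SELECT course
CURSORSETPROP("Buffering", 4, "course")    && pass the alias explicitly
APPEND BLANK
REPLACE course.name WITH thisform.txt_course.value
IF !TABLEUPDATE(.T., .T., "course")        && commit the buffered changes to the DBF
   TABLEREVERT(.T., "course")
ENDIF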

How to dynamically render table name and file name in Pentaho DI

I have a requirement in which one source is a table and one source is a file, and I need to join the two on a column. The problem is that I can do this for one table with one transformation, but I need to do it for multiple sets of files and tables, loading into another set of specific target files, using the same transformation.
Breaking down my requirement more specifically :
Source Table Source File Target File
VOICE_INCR_REVENUE_PROFILE_0 VoiceRevenue0 ProfileVoice0
VOICE_INCR_REVENUE_PROFILE_1 VoiceRevenue1 ProfileVoice1
VOICE_INCR_REVENUE_PROFILE_2 VoiceRevenue2 ProfileVoice2
VOICE_INCR_REVENUE_PROFILE_3 VoiceRevenue3 ProfileVoice3
VOICE_INCR_REVENUE_PROFILE_4 VoiceRevenue4 ProfileVoice4
VOICE_INCR_REVENUE_PROFILE_5 VoiceRevenue5 ProfileVoice5
VOICE_INCR_REVENUE_PROFILE_6 VoiceRevenue6 ProfileVoice6
VOICE_INCR_REVENUE_PROFILE_7 VoiceRevenue7 ProfileVoice7
VOICE_INCR_REVENUE_PROFILE_8 VoiceRevenue8 ProfileVoice8
VOICE_INCR_REVENUE_PROFILE_9 VoiceRevenue9 ProfileVoice9
The table and file names always correspond, i.e. VOICE_INCR_REVENUE_PROFILE_0 should always join with VoiceRevenue0 and the result should be stored in ProfileVoice0. There should be no mismatches in this case. I tried setting variables with the table names and file names, but it only takes one value at a time.
All table names and file names are constant. Is there any other way to get around this? Any help would be appreciated.
Try using "Copy rows to result" step. It will store all the incoming rows (in your case the table and file names) into a memory. And for every row, it will try to execute your transformation. In this way, you can read multiple filenames at one go.
Try reading this link. Its not the exact answer, but similar.
I have created a sample here. Please check if this is what is required.
In the first transformation, I read the table names and file names and loaded them into memory. After that I used the Get Variables step to read all the file and table names to generate the output. [Note: I have not used a table input as source anywhere; instead I used TablesNames. You can replace the same with the table input data.]
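For instance, inside the looped transformation the Table input step's SQL can be parameterised with a variable (TABLE_NAME here is just an assumed example name), provided variable substitution is enabled in the step (the "Replace variables in script?" option, if I recall the label correctly); the file steps can use the same ${...} syntax for their filenames:
SELECT *
FROM ${TABLE_NAME}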
Hope it helps :)
