Dumping sqlite3 database for use in Titanium

So, for some reason, I am having a bit of trouble with my sqlite database. I'm trying to dump my database into a file that could be used in Titanium. I know about the .dump command, but when I try to use the instructions on the sqlite website:
A good way to make an archival copy of a database is this:
$ echo '.dump' | sqlite3 ex1 | gzip -c >ex1.dump.gz
and rename ex1.dump.gz to ex1.sqlite.gz, I get a really messed-up file that is useless. How can I dump my database so that I can then use it in my Titanium Studio mobile app?

Why do you dump the database file when you can simply copy it, i.e. use it as it is?
As explained here, sqlite databases are cross-platform:
A database in SQLite is a single disk file. Furthermore, the file format is cross-platform. A database that is created on one machine can be copied and used on a different machine with a different architecture. SQLite databases are portable across 32-bit and 64-bit machines and between big-endian and little-endian architectures.
That said, you should be able to dump and compress your database like this:
echo '.dump' | sqlite3 foo.db | gzip -c > foo.dump.gz
and restore it in a new SQLite database:
gunzip -c foo.dump.gz | sqlite3 foo.new.db
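As a quick sanity check on the restored copy (a sketch, assuming the file names above and a bash shell):
sqlite3 foo.new.db "PRAGMA integrity_check;"
diff <(sqlite3 foo.db .dump) <(sqlite3 foo.new.db .dump) && echo "dumps match"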

.dump exports the contents of your database as a series of SQL statements (the schema plus INSERTs). If you run it without piping to gzip you'll see plain text SQL:
$ echo '.dump' | sqlite3 ex1
But you don't need to do that to use a SQLite database in Titanium. It supports SQLite natively. Just copy the database file to your project directory and then use code like this to open it:
var db = Ti.Database.install('../products.sqlite','products');
var rows = db.execute('SELECT DISTINCT category FROM products');
More details here:
http://mobile.tutsplus.com/tutorials/appcelerator/titanium-mobile-database-driven-tables-with-sqlite/

How to backup sqlite database?

What's the proper way to do it?
Do I just copy the .sq3 file?
What if there are users on the site and the file is being written to while it's being copied?
The sqlite3 command line tool features the .backup dot command.
You can connect to your database with:
sqlite3 my_database.sq3
and run the backup dot command with:
.backup backup_file.sq3
Instead of the interactive connection to the database, you can also do the backup and close the connection afterwards with
sqlite3 my_database.sq3 ".backup 'backup_file.sq3'"
Either way, the result is a copy of the database my_database.sq3 named backup_file.sq3.
It's different from regular file copying because it takes care of any users currently working on the database: proper locks are set on the database, so the backup is done safely.
.backup is the best way:
sqlite3 my_database ".backup 'my_database.back'"
You can also try the .dump command. It lets you dump the entire database, or only the tables matching a LIKE pattern, into a text file:
sqlite3 my_database .dump > my_database.back
A good way to make an archival copy is to dump and compress it, then reconstruct the database at a later time:
sqlite3 my_database .dump | gzip -c > my_database.dump.gz
zcat my_database.dump.gz | sqlite3 my_database
Also check this question: "Do the SQLite3 .backup and .dump commands lock the database?"
For streaming replication of SQLite, check out Litestream.
Compared to using the sqlite3 .backup command, this is automatic and incremental.
If you need to restore from a backup, the data will be far more up to date than with, say, a regular hourly backup.
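A minimal sketch of what that looks like with an S3 bucket (the bucket path and file names here are made up; check the Litestream docs for the exact invocation):
litestream replicate my_database.sq3 s3://my-bucket/my_database
litestream restore -o my_database_restored.sq3 s3://my-bucket/my_database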
The short and simple answer would be:
sqlite3 m_database.sq3 ".backup m_database.sq3.bak"
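For unattended backups, the same .backup command can be wrapped in a small script and run from cron. A rough sketch with made-up paths and a date-stamped file name:
db=/var/data/my_database.sq3
backup=/var/backups/my_database-$(date +%Y%m%d-%H%M%S).sq3
sqlite3 "$db" ".backup '$backup'"
gzip "$backup"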

Easy way to view postgresql dump files?

I have a ton of postgresql dump files I need to peruse for data. Do I have to install Postgresql and "recover" each one of them into new databases one by one? Or I'm hoping there's a postgresql client that can simply open them up and let me peek at the data, maybe even run a simple SQL query?
The dump files are all from a Postgresql v9.1.9 server.
Or maybe there's a tool that can easily make a database "connection" to the dump files?
UPDATE: These are not text files. They are binary. They come from Heroku's backup mechanism; this is what Heroku says about how they create their backups:
PG Backups uses the native pg_dump PostgreSQL tool to create its backup files, making it trivial to export to other PostgreSQL installations.
This was what I was looking for:
pg_restore db.bin > db.sql
Thanks @andrewtweber
Try opening the files with a text editor - the default dump format is plain text.
If the dump is not plain text, try the pg_restore -l your_db_dump.file command. It will list all objects in the database dump (tables, indexes, ...).
Another possible way (may not work, I haven't tried it) is to grep through the output of the pg_restore your_db_dump.file command. If I understood the manual correctly, the output of pg_restore is just the sequence of SQL statements that will rebuild the db.
In newer versions of pg_restore you need to specify the -f flag with a filename, or '-' for stdout:
pg_restore -f - dump_file.bin
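To look at just one table instead of the whole archive, pg_restore can also filter its output; a small sketch (the table name "orders" is made up):
pg_restore -l your_db_dump.file
pg_restore -f - --data-only -t orders your_db_dump.file | less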
I had this same problem and I ended up doing this:
Install Postgresql and PGAdmin3.
Open PGAdmin3 and create a database.
Right click the db and click restore.
Ignore file type.
Select the database dump file from Heroku.
Click Restore.
pg_restore -f - db.bin > db.sql
Dump files are usually text files, if not compressed, and you can open them with a text editor. Inside you will find all the queries that allow the reconstruction of the database.
If you use pgAdmin on Windows, you can just back up the database as plain text; there is an option for that in the backup dialog, instead of running pg_dump at the command line.
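For context, dumps are only binary when pg_dump's custom format was used; the plain format is readable SQL, and pg_restore can convert between the two. A quick sketch with a made-up database name:
pg_dump -Fp mydb > mydb.sql
pg_dump -Fc mydb > mydb.dump
pg_restore -f mydb.sql mydb.dump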

extract from sqlite

I have a SQLite database created from a honeypot. The database contains malware files. How can I extract these files from the SQLite database? Please, if someone can help.
You can dump the whole database with:
echo .dump | sqlite3 database.sqlite > database.dump
Or just view the structure with:
echo .schema | sqlite3 database.sqlite
To get the files, you'll probably need a small script to extract the BLOBs into files. Post the schema of the database if you need help.
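If the schema turns out to be simple, you may not even need a script: recent builds of the sqlite3 shell include a writefile() function that writes a BLOB column straight to a file. A sketch with made-up table and column names (captured_files, id, payload):
sqlite3 database.sqlite "SELECT writefile('sample_' || id || '.bin', payload) FROM captured_files;"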
The sqlite3 command can easily interrogate an sqlite3 database: the .dump command will dump a given table, and the .output command lets you select a filename for the output before dumping.
If the data came from a honeypot, be very careful about the tools you use to inspect the contents: flaws have been found in terminals that allow malicious content to gain privileges on the system. Simply using 'cat' to inspect a file on such a terminal could grant the malicious program your complete set of privileges.
So, at a minimum, please use an unprivileged user account with no access to other data on the system. Using a tool such as AppArmor, SMACK, TOMOYO, SELinux, or LIDS to confine your tools to a small subset of system resources would be a good idea too. Virtualization could also work, but there have been plenty of 'breakouts' from those tools as well.

I have an 18MB MySQL table backup. How can I restore such a large SQL file?

I use a Wordpress plugin called 'Shopp'. It stores product images in the database rather than on the filesystem as standard; I didn't think anything of this until now.
I have to move server, and so I made a backup, but restoring the backup is proving a horrible task. I need to restore one table called wp_shopp_assets which is 18MB.
Any advice is hugely appreciated.
Thanks,
Henry.
For large operations like this it is better to go to command line. phpMyAdmin gets tricky when lots of data is involved because there are all sorts of timeouts in PHP that can trip it up.
If you can SSH into both servers, then you can do a sequence like the following:
Log in to server1 (your current server) and dump the table to a file using "mysqldump" --- mysqldump --add-drop-table -uSQLUSER -pPASSWORD -hSQLSERVERDOMAIN DBNAME TABLENAME > BACKUPFILE
Do a secure copy of that file from server1 to server2 using "scp" ---
scp BACKUPFILE USER@SERVER2DOMAIN:FOLDERNAME
Log out of server 1
Log into server 2 (your new server) and import that file into the new DB using "mysql" --- mysql -uSQLUSER -pPASSWORD DBNAME < BACKUPFILE
You will need to replace the UPPERCASE text with your own info. Just ask in the comments if you don't know where to find any of these.
It is worthwhile getting to know some of these command line tricks if you will be doing this sort of admin from time to time.
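If both machines can reach each other over SSH, the dump, copy, and import steps can also be collapsed into a single pipeline so the dump file never has to be copied around; a sketch using the same placeholder names:
mysqldump --add-drop-table -uSQLUSER -pPASSWORD -hSQLSERVERDOMAIN DBNAME TABLENAME | ssh USER@SERVER2DOMAIN "mysql -uSQLUSER -pPASSWORD DBNAME"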
Try HeidiSQL: http://www.heidisql.com/
Connect to your server and choose the database.
Go to the menu "Import > Load SQL file", or simply paste the SQL into the SQL tab.
Execute the SQL (F9).
HeidiSQL is an easy-to-use interface and a "working-horse" for web-developers using the popular MySQL-Database. It allows you to manage and browse your databases and tables from an intuitive Windows® interface.
EDIT: Just to clarify: this is a desktop application, and you connect to your database server remotely. You won't be limited by the PHP script max runtime or the upload size limit.
Use BigDump.
Create a folder on your server with a name that is not easy to guess, like "BigDump_D09ssS" or whatever.
Download the importer file from http://www.ozerov.de/bigdump.php and add it to that directory after reading the instructions and filling out your config information.
FTP the .sql file to that folder alongside the bigdump script, then go to your browser and navigate to that folder.
Selecting the file you uploaded will start importing the SQL in split chunks, which is a much faster method!
Or, if that is an issue, I recommend the other answer about SSH and the mysql -u -p -n -f method!
Even though this is an old post, I would like to add that it is recommended not to use database storage for images when you have more than about 10 product images.
Instead of exporting and importing such a huge file, it would be better to switch the Shopp installation to file storage for images before transferring.
You can use this free plug-in to help you. Always back up your files and database before performing this action.
What I do is open the file in a code editor, then copy and paste it into a SQL window within phpMyAdmin. Sounds silly, but I swear by it for large files.

How to convert a JET database to SQLite?

I'm a Linux user so an open-source, Linux-friendly solution would be preferable.
MDB Tools is a set of open source libraries and utilities to facilitate exporting data from MS Access databases (mdb files) without using the Microsoft DLLs. Thus non-Windows OSs can read the data. Or, to put it another way, they are reverse engineering the layout of the MDB file.
Jackcess is a pure Java library for reading from and writing to MS Access databases. It is part of the OpenHMS project from Health Market Science, Inc. It is not an application. There is no GUI. It's a library, intended for other developers to use to build Java applications.
ACCESSdb is a JavaScript library used to dynamically connect to and query locally available Microsoft Access database files within Internet Explorer.
Both Jackcess and ACCESSdb are much newer than MDB Tools, are more actively maintained, and have write support.
This is probably not the answer you want, but the safest way to do this would be to get Visual Studio Express, read in the database using the ODBC connector, and then write out the data using the ADO.NET SQLite connector. I have generally found third-party tools for talking to JET databases... JET was awful and never easily reverse engineered.
To complement Tony's answer with examples:
This is how I just did a conversion with MDB Tools to sqlite, in Ubuntu 16.04:
sudo apt install mdbtools
# define variables for easier copy/paste of the rest
in="my-jet4-file"
schema="$in-schema.sql"
out="$in.sqlite"
mdb-schema "$in" sqlite > "$schema"
sqlite3 "$out" < "$schema"
mdb-tables -1 "$in" \
| while read table; do \
mdb-export -I sqlite "$in" "$table" | sqlite3 "$out"; \
done
This uses INSERT statements and is quite slow.
A faster alternative would be to export/import .csv files. I had used that successfully with Postgres:
#...
out="my_pg_db"
createdb "$out"
mdb-schema "$in" postgres > "$schema"
psql -U postgres -d "$out" -f "$schema"
mdb-tables -1 "$in" \
| while read table; do \
mdb-export -d'|' "$in" "$table" > "$table.csv"; \
psql -d "$out" -c "COPY \"$table\" FROM '$table.csv' DELIMITER '|' CSV HEADER"
done
Finally, there is also mdb-sqlite, which uses Jackcess and Java. After installing Java and ant:
cd mdb-sqlite-1.0.2
ant dist
java -jar dist/mdb-sqlite.jar "$in" "$out"
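Whichever route you take, a rough sanity check is to compare per-table row counts between the .mdb file and the SQLite copy. A sketch reusing $in and $out from above; note that CSV fields containing embedded newlines will skew the mdb-export line count:
mdb-tables -1 "$in" | while read table; do
  mdb_rows=$(mdb-export "$in" "$table" | tail -n +2 | wc -l)
  sqlite_rows=$(sqlite3 "$out" "SELECT COUNT(*) FROM \"$table\";")
  echo "$table: mdb=$mdb_rows sqlite=$sqlite_rows"
done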
