I have a SQLite database mydatabase.sqlite. I want to create a new database copy.sqlite with the same tables but without copying the data. Some time ago I found a way to do that using the .schema command and output redirection, and it worked fine, but now I can't remember the command. I went crazy searching here for the solution but didn't find it. Can you help me, please? Thanks.
Something like:
sqlite3 mydatabase.sqlite ".schema --nosys" | sqlite3 copy.sqlite
should do it in most cases. If you have virtual tables with backing shadow tables (like FTS ones), those might cause errors.
After several attempts and some research I came up with this solution. It is not as simple as the one I found some time ago, but it seems to work fine. This is a sequence of commands to type at the sqlite3 prompt. (The procedure I can't remember was just one line long, typed directly at the Windows prompt.)
sqlite> .open mydatabase.sqlite #open mydatabase.sqlite database
sqlite> .output mydatabase.sql #redirect output to file mydatabase.sql
sqlite> .schema # write schema of mydatabase.sqlite to file mydatabase.sql
sqlite> .open copy.sqlite #create new empty database
sqlite> .read mydatabase.sql #copy the structure of mydatabase.sqlite to copy.sqlite
The last command gives the following error:
Error: near line 2: object name reserved for internal use: sqlite_sequence
but the new database is created anyway and seems to work fine.
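The error occurs because .schema also emits the internal sqlite_sequence table (maintained by SQLite for AUTOINCREMENT columns), which you are not allowed to create yourself. A small Python sketch of the same schema-only copy that filters the internal objects out (file paths are placeholders):

```python
import sqlite3

def copy_schema(src_path, dst_path):
    """Recreate every user-defined object from src in an empty dst database."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    # sqlite_master stores the original CREATE statements; skip internal
    # sqlite_* objects (like sqlite_sequence) and entries without SQL
    rows = src.execute(
        "SELECT sql FROM sqlite_master "
        "WHERE sql IS NOT NULL AND name NOT LIKE 'sqlite_%'"
    )
    for (ddl,) in rows:
        dst.execute(ddl)
    dst.commit()
    src.close()
    dst.close()
```

Because sqlite_master lists objects in creation order, dependent objects (indexes, triggers) are recreated after the tables they reference.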
I want to drop all of the databases except a few.
Let's say there are 20 databases and I want to delete 18 of them but keep 2, as they are the latest ones and are in use.
Please suggest a way to do this.
First, execute the following query in the psql terminal.
select 'drop database "'||datname||'";'
from pg_database
where datistemplate=false;
This will generate a drop database command for every database. Copy the result into a text editor, exclude (delete) the lines for the databases you want to keep, and save it as a dd.sql file. Then execute it like this:
psql -d postgres -f dd.sql
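The copy-to-an-editor step can also be scripted. A small illustrative sketch that builds the dd.sql content from a list of database names and a keep-list (the names here are made up; in practice the list would come from the pg_database query above):

```python
def make_drop_script(all_dbs, keep):
    """Build dd.sql content: one DROP DATABASE per database not in `keep`."""
    keep = set(keep)
    return "\n".join(
        f'drop database "{name}";' for name in all_dbs if name not in keep
    )
```

The output can then be fed to psql exactly as shown above.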
From pgAdmin you can now drop several databases at once: select the DBs to drop and click delete/drop. Quick and easy!
As the accepted answer kind of demonstrates, dropping multiple databases was particularly tedious for me, so I wrote a helper script to alleviate this operation: https://github.com/Kraymer/ezdropdb
In short, you enter a pattern that the databases you want to drop must match; all matching db names are then listed, and a final prompt lets you pick which of those to drop (cf. screenshot on the project page).
I'm looking for a short example of how to generate .dbf files. I saw the following link:
Data File Header Structure for the dBASE Version 7 Table File
I am writing my program in C#.
For example, I want to produce the following table (in binary form):
Field Name Type MaxLength
-------------------------------------------
DSK_ID Character 100
DSK_ADRS Numeric 2
Are you trying to create the table within FoxPro (Visual FoxPro) itself, dBase, or with a .NET/Java language? Your tags are unclear as to what you are really getting into, and just creating the table via low-level writes is not the way to go.
I can modify this answer more, but suggest you edit your question to provide more detail.
The basic syntax, if using Visual FoxPro would be something like.
create table SomeTableName ( DSK_ID C(100), DSK_ADRS N(2,0) )
But again, would need more on the environment you plan on working with.
Since you want to do this via C#, I would start by downloading Microsoft's VFP OLE DB provider.
Then, you can look at the many other links for connecting, querying (always parameterize) and execute what you need. Here is a short example to get a connection and create the table you want. Then it is up to you for querying, inserting, updating as needed.
using System.Data.OleDb;  // namespace needed for the OleDb classes below

OleDbConnection oConn = new OleDbConnection("Provider=VFPOLEDB.1;Data Source=C:\\SomePath");
OleDbCommand oCmd = new OleDbCommand();
oCmd.Connection = oConn;
oCmd.Connection.Open();
oCmd.CommandText = "create table SomeTableName ( DSK_ID C(100), DSK_ADRS N(2,0) )";
oCmd.ExecuteNonQuery();
oConn.Close();
Now, note that the connection string has a Data Source. This should point to a PATH location where you WANT TO CREATE and/or QUERY the tables. You can have one connection that points to a folder containing 100+ tables, and you can eventually query any of them. But again, those are going to be other questions that you can find LOTS of answers to... for example, just search on
VFP OleDB C# and you will get plenty of hits
How are you going to handle memo files? Compound index files?
Just use the ODBC or Ole DB providers via COM InterOp and issue a CREATE TABLE.
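If you do need to produce the file at the byte level that the linked header document describes, the dBASE III layout (version byte 0x03, simpler than the version-7 format) can be written with a few struct.pack calls. This is an illustrative sketch under that assumption, not a complete implementation of the version-7 spec, and it ignores memo and index files entirely:

```python
import datetime
import struct

def write_dbf(path, fields, records):
    """Write a minimal dBASE III .dbf file.

    fields:  list of (name, type_char, length), e.g. ("DSK_ID", "C", 100)
    records: list of tuples of string values, one value per field
    """
    header_len = 32 + 32 * len(fields) + 1        # header + descriptors + 0x0D
    record_len = 1 + sum(f[2] for f in fields)    # 1 byte deletion flag
    today = datetime.date.today()
    with open(path, "wb") as out:
        # 32-byte file header: version 0x03, last-update date, record count,
        # header length, record length, then reserved bytes
        out.write(struct.pack("<BBBBLHH20x", 0x03,
                              today.year % 100, today.month, today.day,
                              len(records), header_len, record_len))
        # one 32-byte field descriptor each: 11-byte name, type, length, decimals
        for name, type_char, length in fields:
            out.write(struct.pack("<11sc4xBB14x",
                                  name.encode("ascii"),
                                  type_char.encode("ascii"), length, 0))
        out.write(b"\x0d")                        # end-of-header marker
        for record in records:
            out.write(b" ")                       # ' ' means "not deleted"
            for (name, type_char, length), value in zip(fields, record):
                raw = value.encode("ascii")
                # dBASE right-justifies numeric fields, left-justifies the rest
                pad = raw.rjust if type_char == "N" else raw.ljust
                out.write(pad(length)[:length])
        out.write(b"\x1a")                        # end-of-file marker
```

For real applications the OLE DB route above is much safer, since the provider also maintains the associated memo and index files for you.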
I have one database with an image table that contains just over 37,000 records. Each record contains an image in the form of binary data. I need to get all of those 37,000 records into another database containing the same table and schema that has about 12,500 records. I need to insert these images into the database with an IF NOT EXISTS approach to make sure that there are no duplicates when I am done.
I tried exporting the data into Excel and formatting it into a script. (I have done this before with other tables.) The thing is, Excel does not support binary data.
I also tried the "generate scripts" wizard in SSMS which did not work because the .sql file was well over 18GB and my PC could not handle it.
Is there some other SQL tool to be able to do this? I have Googled for hours but to no avail. Thanks for your help!
I have used SQL Workbench/J for this.
You can either use WbExport and WbImport through text files (the binary data will be written as separate files and the text file contains the filename).
Or you can use WbCopy to copy the data directly without intermediate files.
To achieve your "if not exists" approach you could use the update/insert mode, although that would change existing rows.
I don't think there is an "insert only if it does not exist" mode, but you should be able to achieve this by defining a unique index and ignoring insert errors (that wouldn't be really fast, but should be OK for this small number of rows).
If the "exists" check is more complicated, you could copy the data into a staging table in the target database, and then use SQL to merge that into the real table.
Why don't you try the 'Export data' feature? This should work.
Right click on the source database, select 'Tasks' and then 'Export data'. Then follow the instructions. You can also save the settings and execute the task on a regular basis.
Also, the bcp.exe utility could work to read data from one database and insert into another.
However, I would recommend using the first method.
Update: In order to avoid duplicates you have to be able to compare images. Unfortunately, you cannot compare images directly. But you could cast them to varbinary(max) for comparison.
So here's my advice:
1. Copy the table to the new database under the name tmp_images
2. Use a conditional INSERT so that only new images are added, for example:
INSERT INTO DB1.dbo.table_name
SELECT * FROM DB2.dbo.table_name
WHERE column_name NOT IN
(
SELECT column_name FROM DB1.dbo.table_name
)
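The same NOT IN pattern, runnable end-to-end with SQLite's ATTACH purely for illustration (keying on an id column here for simplicity; the answer above compares the image column itself, cast to varbinary(max) in SQL Server):

```python
import os
import sqlite3
import tempfile

d = tempfile.mkdtemp()
src_path = os.path.join(d, "source.db")

# source database: three images
src = sqlite3.connect(src_path)
src.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, img BLOB)")
src.executemany("INSERT INTO images VALUES (?, ?)",
                [(1, b"a"), (2, b"b"), (3, b"c")])
src.commit()
src.close()

# target database already holds image 1
dst = sqlite3.connect(os.path.join(d, "target.db"))
dst.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, img BLOB)")
dst.execute("INSERT INTO images VALUES (1, ?)", (b"a",))

# attach the source so both databases are visible in one connection
dst.execute("ATTACH DATABASE ? AS src", (src_path,))
# copy only the rows whose id is not already present in the target
dst.execute("""
    INSERT INTO main.images
    SELECT * FROM src.images
    WHERE id NOT IN (SELECT id FROM main.images)
""")
rows = dst.execute("SELECT id FROM main.images ORDER BY id").fetchall()
```

Only ids 2 and 3 are inserted; the pre-existing row 1 is left untouched.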
How do I dump single table data from a huge dump file into the database.
If I understand your question correctly, you already have a dump file of many tables and you only need to restore one table (right?).
I think the only way to do that is to actually restore the whole file into a new DB, then copy the data from the new DB to the existing one, OR dump only the table you just restored from the new DB using:
mysqldump -u username -p db_name table_name > dump.sql
And restore it again wherever you need it.
To make things a little quicker and save some disk, you can kill the first restore operation after the desired table was completely restored, so I hope the table name begins with one of the first letters of the alphabet :)
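If the dump is plain SQL, you can also extract just the wanted table's section with a text tool, since mysqldump delimits each table with a "-- Table structure for table ..." comment. A sketch using a toy dump (table names hypothetical):

```shell
dir=$(mktemp -d)

# Build a toy two-table dump to demonstrate on
cat > "$dir/full_dump.sql" <<'EOF'
-- Table structure for table `users`
CREATE TABLE `users` (id INT);
INSERT INTO `users` VALUES (1);
-- Table structure for table `orders`
CREATE TABLE `orders` (id INT);
INSERT INTO `orders` VALUES (2);
EOF

# The flag p flips at every table header and stays on only while we are
# inside the `orders` section, so only that table's lines are printed
awk '/^-- Table structure for table/{p=/`orders`/} p' \
    "$dir/full_dump.sql" > "$dir/orders_only.sql"
```

The resulting orders_only.sql can then be restored on its own with mysql, without replaying the whole dump.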
There are some suggestions on how you might do this in the following articles:
http://blog.tsheets.com/2008/tips-tricks/mysql-restoring-a-single-table-from-a-huge-mysqldump-file.html
http://blog.tsheets.com/2008/tips-tricks/extract-a-single-table-from-a-mysqldump-file.html
I found these by searching for "load single table from mysql database dump" on Google: http://www.google.com/search?q=load+single+table+from+mysql+database+dump
I know that I can import .csv file into a pre-existing table in a sqlite database through:
.import filename.csv tablename
However, is there a method/library that can automatically create the table (and its schema), so that I don't have to manually define column1 = string, column2 = int, etc.?
Or maybe we can import everything as a string. To my limited understanding, sqlite3 seems to treat all fields as strings anyway?
Edit:
The names of each column is not so important here (assume we can get that data from the first row in the CSV file, or they could be arbitrary names) The key is to identify the value types of each column.
This seems to work just fine for me (in sqlite3 version 3.8.4):
$ echo '.mode csv
> .import data_with_header.csv some_table' | sqlite3 db
It creates the table some_table with field names taken from the first row of the data_with_header.csv file. All fields are of type TEXT.
You said yourself in the comments that it's a non-trivial problem to determine the types of columns. (Imagine a million rows that all look like numbers, but one of those rows has a Z in it. Now that column has to be typed "string".)
Though non-trivial, it's also pretty easy to get the 90% scenario working. I would just write a little Python script to do this. Python has a very nice library for parsing CSV files, and its interface to SQLite is simple enough.
Just load the CSV, guess and check the column types, build a CREATE TABLE statement that encapsulates this information, then emit your INSERT INTO statements. I can't imagine this taking more than 20 lines of Python.
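A sketch of that script, under the assumption that a column is INTEGER if every value parses as an int, REAL if every value parses as a float, and TEXT otherwise (the Z-row example above falls into TEXT):

```python
import csv
import sqlite3

def guess_type(values):
    """Pick the narrowest SQLite type that fits every value in the column."""
    for candidate, py_type in (("INTEGER", int), ("REAL", float)):
        try:
            for v in values:
                py_type(v)        # raises ValueError on the first misfit
            return candidate
        except ValueError:
            continue
    return "TEXT"

def csv_to_sqlite(csv_path, db_path, table):
    """Create `table` in db_path from a headered CSV, guessing column types."""
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)     # column names from the first row
        rows = list(reader)
    types = [guess_type(col) for col in zip(*rows)]
    cols = ", ".join(f'"{name}" {typ}' for name, typ in zip(header, types))
    conn = sqlite3.connect(db_path)
    conn.execute(f'CREATE TABLE "{table}" ({cols})')
    placeholders = ", ".join("?" for _ in header)
    conn.executemany(f'INSERT INTO "{table}" VALUES ({placeholders})', rows)
    conn.commit()
    return conn
```

This loads the whole file into memory to inspect every value before choosing a type; for very large files you would stream two passes instead.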
This is a little off-topic, but it might help to use a tool that gives you all the SQL functionality on an individual CSV file without actually using SQLite directly.
Take a look at TextQL, a utility that allows querying CSV files directly using an in-memory SQLite engine:
https://github.com/dinedal/textql
textql -header -sql "select * from tbl" -source some_file.csv