I have two SQLite databases with common data but different purposes, and I wanted to avoid re-inserting data, so I was wondering whether it is possible to copy a whole table from one database to another?
You'll have to attach one database to the other using the ATTACH command, then run the appropriate INSERT INTO statements for the tables you want to transfer.
INSERT INTO X.TABLE SELECT * FROM Y.TABLE;
// "INSERT or IGNORE" if you want to ignore duplicates with same unique constraint
Or, if the columns are not matched up in order:
INSERT INTO X.TABLE(fieldname1, fieldname2) SELECT fieldname1, fieldname2 FROM Y.TABLE;
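For example, a complete session might look like this (a sketch; the file names, the table name, and the direction of the attach are assumptions):
$ sqlite3 X.sqlite
sqlite> ATTACH 'Y.sqlite' AS Y;
sqlite> INSERT INTO mytable SELECT * FROM Y.mytable;
sqlite> DETACH Y;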
The easiest and most reliable way to do it, on a single line:
sqlite3 old.db ".dump mytable" | sqlite3 new.db
The primary key and the column types will be kept.
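This works because .dump emits the full CREATE TABLE statement followed by the INSERTs; for a hypothetical two-column table the output looks roughly like:
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE mytable (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO mytable VALUES(1,'first');
INSERT INTO mytable VALUES(2,'second');
COMMIT;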
Consider an example where I have two databases, namely allmsa.db and atlanta.db. Say allmsa.db has tables for all MSAs in the US and atlanta.db is empty.
Our target is to copy the table atlanta from allmsa.db to atlanta.db.
Steps:
1. Run sqlite3 atlanta.db to enter the atlanta database.
2. Attach allmsa.db. This can be done using the command ATTACH '/mnt/fastaccessDS/core/csv/allmsa.db' AS AM; (note that we give the entire path of the database to be attached).
3. Check the database list using sqlite> .databases
You should see output like
seq name file
--- --------------- ----------------------------------------------------------
0 main /mnt/fastaccessDS/core/csv/atlanta.db
2 AM /mnt/fastaccessDS/core/csv/allmsa.db
4. Now you come to your actual target. Use the command
INSERT INTO atlanta SELECT * FROM AM.atlanta;
This should serve your purpose.
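To sanity-check the copy before quitting, a quick sketch:
sqlite> SELECT count(*) FROM atlanta;
sqlite> DETACH AM;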
For a one-time action, you can use .dump and .read.
Dump the table my_table from old_db.sqlite
c:\sqlite>sqlite3.exe old_db.sqlite
sqlite> .output mytable_dump.sql
sqlite> .dump my_table
sqlite> .quit
Read the dump into new_db.sqlite, assuming the table does not exist there
c:\sqlite>sqlite3.exe new_db.sqlite
sqlite> .read mytable_dump.sql
Now you have cloned your table.
To do this for the whole database, simply leave out the table name in the .dump command, as sketched below.
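A sketch of the whole-database variant:
c:\sqlite>sqlite3.exe old_db.sqlite
sqlite> .output whole_db_dump.sql
sqlite> .dump
sqlite> .quit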
Bonus: The databases can have different encodings.
Objective-C code for copying a table from one database to another:
-(void) createCopyDatabase{
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDir = [paths objectAtIndex:0];
    NSString *maindbPath = [documentsDir stringByAppendingPathComponent:@"User.sqlite"];
    NSString *newdbPath = [documentsDir stringByAppendingPathComponent:@"User_copy.sqlite"];

    NSFileManager *fileManager = [NSFileManager defaultManager];
    char *error = NULL; // initialized so the error checks below are valid
    if ([fileManager fileExistsAtPath:newdbPath]) {
        [fileManager removeItemAtPath:newdbPath error:nil];
    }

    sqlite3 *database;
    // open (and implicitly create) the new database
    if (sqlite3_open([newdbPath UTF8String], &database) != SQLITE_OK) {
        NSLog(@"Error opening database");
    }

    // attach the source database
    NSString *attachQuery = [NSString stringWithFormat:@"ATTACH DATABASE \"%@\" AS aDB", maindbPath];
    sqlite3_exec(database, [attachQuery UTF8String], NULL, NULL, &error);
    if (error) {
        NSLog(@"Error attaching database = %s", error);
    }

    // query to copy a table
    NSString *sqlString = @"CREATE TABLE Info AS SELECT * FROM aDB.Info";
    sqlite3_exec(database, [sqlString UTF8String], NULL, NULL, &error);
    if (error) {
        NSLog(@"Error copying table = %s", error);
    }

    // query to copy a table with a WHERE clause
    sqlString = @"CREATE TABLE comments AS SELECT * FROM aDB.comments WHERE user_name = 'XYZ'";
    sqlite3_exec(database, [sqlString UTF8String], NULL, NULL, &error);
    if (error) {
        NSLog(@"Error copying table = %s", error);
    }

    sqlite3_close(database);
}
The easiest way to do it is through SQLiteStudio.
If you don't have it, download it from https://download.cnet.com/SQLiteStudio/3000-10254_4-75836135.html
Steps:
1. Add both the databases.
2. Click the View tab and then Databases.
3. Right-click the table you want to copy and copy it.
4. Right-click the database where you want to paste, and paste the table.
Now you're done.
First scenario: DB1.sqlite and DB2.sqlite have the same table (t1), but DB1 is more "up to date" than DB2. If it's small, drop the table from DB2 and recreate it with the data:
> DROP TABLE IF EXISTS db2.t1; CREATE TABLE db2.t1 AS SELECT * FROM db1.t1;
Second scenario: If it's a large table, you may be better off with an INSERT-if-not-exists type solution. If you have a unique key column it's more straightforward; otherwise you'd need to use a combination of fields (maybe every field), and at some point it's still faster to just drop and re-create the table, which is always more straightforward (less thinking required). See the sketch after the setup below.
THE SETUP: open SQLite without a database, which creates a temporary in-memory main database, then attach DB1.sqlite and DB2.sqlite:
> sqlite3
sqlite> ATTACH "DB1.sqlite" AS db1;
sqlite> ATTACH "DB2.sqlite" AS db2;
and use .databases to see the attached databases and their files.
sqlite> .databases
main:
db1: /db/DB1.sqlite
db2: /db/DB2.sqlite
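With both databases attached, the second scenario might look like this (a sketch assuming t1 has a unique key, so duplicate rows are skipped rather than re-inserted):
sqlite> INSERT OR IGNORE INTO db2.t1 SELECT * FROM db1.t1;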
I needed to move data from a SQL Server Compact database to SQLite, so using SQL Server 2008 you can right-click on the table and select 'Script Table To' and then 'Data to Inserts'. Copy the insert statements, remove the 'GO' statements, and they executed successfully when applied to the SQLite database using the 'DB Browser for SQLite' app.
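For illustration, the scripted output looks roughly like this once the GO batch separators are removed (a simplified sketch; the table and column names are hypothetical):
INSERT INTO Customers (Id, Name) VALUES (1, 'Alice');
INSERT INTO Customers (Id, Name) VALUES (2, 'Bob');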
If you use DB Browser for SQLite, you can copy the table from one db to another in the following steps:
1. Open two instances of the app and load the source db and target db side by side.
2. If the target db does not have the table, "Copy Create Statement" from the source db, then paste the SQL statement in the "Execute SQL" tab of the target db and run it to create the table.
3. In the source db, export the table as a CSV file.
4. In the target db, import the CSV file to the table with the same table name. The app will ask whether you want to import the data into the existing table; click Yes. Done.
Related
I started to go through the first tutorial for how to load data into Snowflake from a local file.
This is what I have set up so far:
CREATE WAREHOUSE mywh;
CREATE DATABASE Mydb;
Use Database mydb;
CREATE ROLE ANALYST;
grant usage on database mydb to role sysadmin;
grant usage on database mydb to role analyst;
grant usage, create file format, create stage, create table on schema mydb.public to role analyst;
grant operate, usage on warehouse mywh to role analyst;
//tutorial 1 loading data
CREATE FILE FORMAT mycsvformat
TYPE = "CSV"
FIELD_DELIMITER= ','
SKIP_HEADER = 1;
CREATE FILE FORMAT myjsonformat
TYPE="JSON"
STRIP_OUTER_ARRAY = true;
//create stage
CREATE OR REPLACE STAGE my_stage
FILE_FORMAT = mycsvformat;
//Use snowsql for this and make sure that the role, db, and warehouse are selected: put file:///data/data.csv @my_stage;
// put file on stage
PUT file://contacts.csv @my
List @~;
list @%mytable;
Then in my active Snowsql when I run:
Put file:///Users/<user>/Documents/data/data.csv @my_table;
I have confirmed I am in the correct role Accountadmin:
002003 (02000): SQL compilation error:
Stage 'MYDB.PUBLIC.MY_TABLE' does not exist or not authorized.
So then I try to create the table in Snowsql and am successful:
create or replace table my_table(id varchar, link varchar, stuff string);
I still run into this error after I run:
Put file:///Users/<>/Documents/data/data.csv @my_table;
002003 (02000): SQL compilation error:
Stage 'MYDB.PUBLIC.MY_TABLE' does not exist or not authorized.
What is the difference between putting a file to a my_table and a my_stage in this scenario? Thanks for your help!
EDIT:
CREATE OR REPLACE TABLE myjsontable(json variant);
COPY INTO myjsontable
FROM @my_stage/random.json.gz
FILE_FORMAT = (TYPE= 'JSON')
ON_ERROR = 'skip_file';
CREATE OR REPLACE TABLE save_copy_errors AS SELECT * FROM TABLE(VALIDATE(myjsontable, JOB_ID=>'enterid'));
SELECT * FROM SAVE_COPY_ERRORS;
//error for random: Error parsing JSON: invalid character outside of a string: '\\'
//no error for generated
SELECT * FROM Myjsontable;
REMOVE @my_stage pattern = '.*.csv.gz';
REMOVE @my_stage pattern = '.*.json.gz';
//yay, you are done!
The put command copies the file from your local drive to the stage. You should do the put to the stage, not the table.
put file:///Users/<>/Documents/data/data.csv @my_stage;
The copy command loads it from the stage.
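For example, continuing with the stage and file format created above, a sketch (the staged file name is an assumption; PUT compresses files to .gz by default):
COPY INTO my_table
FROM @my_stage/data.csv.gz
FILE_FORMAT = (FORMAT_NAME = 'mycsvformat')
ON_ERROR = 'skip_file';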
But in the documentation it is mentioned that one gets created by default for every table:
Each table has a Snowflake stage allocated to it by default for storing files. This stage is a convenient option if your files need to be accessible to multiple users and only need to be copied into a single table.
Table stages have the following characteristics and limitations:
Table stages have the same name as the table; e.g. a table named mytable has a stage referenced as @%mytable
so in this case, without creating a stage, it should load into the default Snowflake stage allocated to the table. Note the % prefix, though: the table stage is referenced as @%my_table, whereas @my_table is parsed as a named stage called my_table, which is why the PUT above fails with "Stage ... does not exist".
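A sketch of loading through the table stage instead (reusing my_table and mycsvformat from above):
PUT file:///Users/<>/Documents/data/data.csv @%my_table;
COPY INTO my_table FILE_FORMAT = (FORMAT_NAME = 'mycsvformat');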
I am new to databases. I am trying to create a database and a table in it,
but I am unable to save and open them again after exiting from sqlite.
I am using sqlite3 3.6.20 on CentOS. When I enter the following commands
.save ex1.db or .open ex1.db
it prints the following error messages:
Error: unknown command or invalid arguments: "save". Enter ".help" for help
Error: unknown command or invalid arguments: "open". Enter ".help" for help
and when I print .help
it doesn't show any commands related to saving and opening an existing database.
Thanks in advance.
I am trying to create a database and table in it, but am unable to save and open it again after exiting from sqlite.
You don't need to save. Each transaction writes to disk. (More or less.)
To create the database "test.sl3", you can do this. (From the command line. Programs work about the same way.)
$ sqlite3 test.sl3
SQLite version 3.8.7.2 2014-11-18 20:57:56
Enter ".help" for usage hints.
sqlite> create table test (test_id integer primary key);
sqlite> insert into test values (1);
sqlite> select * from test;
1
sqlite> .quit
No .save. Now load the database again.
$ sqlite3 test.sl3
SQLite version 3.8.7.2 2014-11-18 20:57:56
Enter ".help" for usage hints.
sqlite> select * from test;
1
The data is still there.
You're supposed to provide a filename as an argument for the .save command, e.g.:
sqlite> .save ex1.db
docs: http://www.sqlite.org/cli.html
As Mike pointed out in his answer, you should provide a file name to put the database in.
If you did a lot of work without providing a file name up front, and you are working in a version in which the .save command is not yet available (you mention that sqlite3 3.6.20 does not know it, and I also do not see it in sqlite3 version 3.8.2), you can use the .backup command:
sqlite> .help
[...]
.backup ?DB? FILE Backup DB (default "main") to FILE
$ sqlite3
[...]
sqlite> create table mytable ( column1 text, column2 integer );
sqlite> insert into mytable values ( 'ENTRY1', 1 );
sqlite> insert into mytable values ( 'ENTRY2', 2 );
sqlite> .backup main temp.db
sqlite> .quit
$ sqlite3 temp.db
[...]
sqlite> .schema
CREATE TABLE mytable ( column1 text, column2 integer );
sqlite> select * from mytable;
column1 column2
---------- ----------
ENTRY1 1
ENTRY2 2
Use sqlite3 ex1.db to open your database. After that, all queries will take effect in your DB.
Maybe try using an absolute path instead of a relative path.
I am in VS Code using the SQLite extension by alexcvzz.
When I use a relative path, I get an error.
CREATE TABLE test (id INTEGER, name TEXT);
INSERT INTO test (id, name) VALUES (1, 'Hello');
.save ex1.db
When I use an absolute path, it works.
CREATE TABLE test (id INTEGER, name TEXT);
INSERT INTO test (id, name) VALUES (1, 'Hello');
.save /Users/zacharyargentin/databases/ex1.db
Note: In the VS Code extension you have to choose a database before you run the query, so I chose the :memory: database, which is the default in-memory database. This database deletes itself as soon as you close the connection, so if you want to keep it you have to save it, as in the example above.
There is a cursor concept in Python for manipulating databases, such as:
cu = cx.cursor()
cu.execute('create table catalog (id integer primary key,pid integer,name varchar(10) UNIQUE)')
Is there a similar way to manipulate databases in R?
Take a look at the package sqldf.
It has excellent documentation here: https://code.google.com/p/sqldf/
library(sqldf)
# create new empty database called mydb
sqldf("attach 'mydb' as new")
sqldf("create table catalog (id integer not null, pid integer,name text(10) )", dbname = "mydb")
#close
sqldf()
P.S.: Perhaps you should explain a little what you want to do. If you are already "connected" to a database, then specify which engine (because of the different SQL statements for creating tables, etc.).
I am trying to use Dapper support my data access for my server app.
My server app has another application that drops records into my database at a rate of 400 per minute.
My app pulls them out in batches, processes them, and then deletes them from the database.
Since data continues to flow into the database while I am processing, I don't have a good way to say delete from myTable where allProcessed = true.
However, I do know the PK value of the rows to delete. So I want to do a delete from myTable where Id in #listToDelete
The problem is that if my server goes down for even 6 minutes, then I have over 2100 rows to delete.
Since Dapper takes my #listToDelete and turns each one into a parameter, my call to delete fails. (Causing my data purging to get even further behind.)
What is the best way to deal with this in Dapper?
NOTES:
I have looked at Table-Valued Parameters, but from what I can see they are not very performant. This piece of my architecture is the bottleneck of my system and I need to be very, very fast.
One option is to create a temp table on the server and then use the bulk load facility to upload all the IDs into that table at once. Then use a join, EXISTS or IN clause to delete only the records that you uploaded into your temp table.
Bulk loads are a well-optimized path in SQL Server and it should be very fast.
For example:
Execute the statement CREATE TABLE #RowsToDelete(ID INT PRIMARY KEY)
Use a bulk load to insert keys into #RowsToDelete
Execute DELETE FROM myTable where Id IN (SELECT ID FROM #RowsToDelete)
Execute DROP TABLE #RowsToDelete (the table will also be dropped automatically if you close the session)
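The join variant mentioned above would look like this (a sketch):
DELETE t
FROM myTable AS t
INNER JOIN #RowsToDelete AS r ON r.ID = t.Id;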
(Assuming Dapper) code example:
conn.Open();
var columnName = "ID";
conn.Execute(string.Format("CREATE TABLE #{0}s({0} INT PRIMARY KEY)", columnName));
using (var bulkCopy = new SqlBulkCopy(conn))
{
bulkCopy.BatchSize = ids.Count;
bulkCopy.DestinationTableName = string.Format("#{0}s", columnName);
var table = new DataTable();
table.Columns.Add(columnName, typeof (int));
bulkCopy.ColumnMappings.Add(columnName, columnName);
foreach (var id in ids)
{
table.Rows.Add(id);
}
bulkCopy.WriteToServer(table);
}
//or do other things with your table instead of deleting here
conn.Execute(string.Format(@"DELETE FROM myTable where Id IN
                             (SELECT {0} FROM #{0}s)", columnName));
conn.Execute(string.Format("DROP TABLE #{0}s", columnName));
To get this code working, I went to the dark side.
Since Dapper turns my list into parameters, and SQL Server can't handle more than 2,100 parameters per call (I have never needed even double-digit parameter counts before), I had to go with dynamic SQL.
So here was my solution:
string listOfIdsJoined = "("+String.Join(",", listOfIds.ToArray())+")";
connection.Execute("delete from myTable where Id in " + listOfIdsJoined);
Before everyone grabs their torches and pitchforks, let me explain.
This code runs on a server whose only input is a data feed from a Mainframe system.
The list I am dynamically creating is a list of longs/bigints.
The longs/bigints are from an Identity column.
I know constructing dynamic SQL is bad juju, but in this case, I just can't see how it leads to a security risk.
Dapper expects a list of objects that have the parameter as a property, so in the above case a list of objects having Id as a property will work:
connection.Execute("delete from myTable where Id in (#Id)", listOfIds.AsEnumerable().Select(i=> new { Id = i }).ToList());
This will work, though note that Dapper executes the statement once per element of the list.
I am new to PowerBuilder.
I want to retrieve the data from MS Access tables and update the corresponding SQL tables. I am not able to create a permanent DSN for MS Access because I have to select different MS Access files with the same table information. I can create a permanent DSN for SQL Server.
Please help me create the DSN dynamically when selecting the MS Access file and push all the table data to SQL using PowerBuilder.
Also, please give the full PowerBuilder code to solve the problem, if possible.
In Access we strongly suggest not using DSNs at all, as it is one less thing for someone to have to configure and one less thing for the users to screw up. See Using DSN-Less Connections; you should check whether PowerBuilder has a similar option.
1. Create the DSN manually in the ODBC administrator
2. Locate the entry in the registry
3. Export the registry syntax into a .reg file
4. Read and edit the .reg file dynamically in PB
5. Write it back to the registry using PB's RegistrySet ( key, valuename, valuetype, value )
Once you've got your DSN set up, there are many options to push data from one database to the other.
You'll need two transaction objects in PB, each pointing to its own database. Then, you could use a Data Pipeline object to manage the actual data transfer.
You want to do the DSN-less connection referenced by Tony. I show an example of doing it at PBDJ and have a code sample over at Sybase's CodeXchange.
I am using this code, try it!
//// Profile access databases accdb format
SQLCA.DBMS = "OLE DB"
SQLCA.AutoCommit = False
SQLCA.DBParm = "PROVIDER='Microsoft.ACE.OLEDB.12.0',DATASOURCE='C:\databasename.accdb',DelimitIdentifier='No',CommitOnDisconnect='No'"
Connect using SQLCA;
If SQLCA.SQLCode = 0 Then
Open ( w_rsre_frame )
else
MessageBox ("Cannot Connect to Database", SQLCA.SQLErrText )
End If
or
//// Profile access databases mdb format
transaction aTrx
long resu
string database
database = "C:\databasename.mdb"
aTrx = create transaction
aTrx.DBMS = "OLE DB"
aTrx.AutoCommit = True
aTrx.DBParm = "PROVIDER='Microsoft.Jet.OLEDB.4.0',DATASOURCE='"+database+"',PBMaxBlobSize=100000,StaticBind='No',PBNoCatalog='YES'"
connect using aTrx ;
if atrx.sqldbcode = 0 then
messagebox("","Connection success to database")
else
messagebox("Error code: "+string(atrx.sqlcode),atrx.sqlerrtext+ " DB Code Error: "+string(atrx.sqldbcode))
end if
// do stuff...
destroy atrx