Most efficient way to move a few SQL Server tables to SQLite? - sql-server

I have a fairly large SQL Server database; I'd like to pull 4 tables out and dump them directly into an SQLite .db file for remote querying (via a nightly batch).
I was about to write a script to step through the tables (most likely on a Unix host, kicked off via cron), but there should be a simpler way to export them directly (SQLite is not an option in the included DTS Import/Export wizard).
What would be the most efficient method of dumping the SQL Server tables to SQLite via a batch job?

You could export your data from MS SQL Server with sqlcmd to a text file, and later import it with a bulk import in SQLite. Read this question and its answers to get an idea of how to do that on the SQLite side.
You could create a batch file and run it with cron, I guess.
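Something along these lines could serve as a starting point for the nightly job (untested sketch; the server, database, table, and file names are all placeholders, and the target table is assumed to already exist in the SQLite file since the export has no header row):
# myserver, MyDb, dbo.MyTable, mytable.txt and nightly.db are placeholders
sqlcmd -S myserver -d MyDb -U myuser -P mypass -Q "SET NOCOUNT ON; SELECT * FROM dbo.MyTable" -s "|" -W -h -1 -o mytable.txt
printf '.separator |\n.import mytable.txt mytable\n' | sqlite3 nightly.db
Repeat the pair of commands for each of the 4 tables, wrap them in a script, and let cron kick it off.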

If you were considering DTS, then you might be able to do it via ODBC: MSSQL -> ODBC -> SQLite.
http://www.ch-werner.de/sqliteodbc/

Related

Perform SQLDump on SQL Server

Is there a way to perform a SQL dump in SQL Server? On MySQL we can easily do that with the mysqldump command, but in SQL Server, when I try to export the data, it seems to only allow exporting per table rather than per database. When I try Copy Database, it copies the source DB to a destination DB. I came from MySQL and assumed it would somehow behave the same way.
Any idea on how to achieve this?
Thanks to those who reply.
In SQL Server you would either use Backup and Restore, or extract all the schema and data into a .bacpac file.
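For the .bacpac route, the command-line version looks roughly like this (sketch only; the server name, database name, and output path are placeholders):
REM MYSERVER, MyDb and the target path are placeholders
SqlPackage.exe /Action:Export /SourceServerName:MYSERVER /SourceDatabaseName:MyDb /TargetFile:C:\dumps\MyDb.bacpac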

Bulk Insert (BCP) into SQL Server vs. Sqoop Export into SQL Server

Which of the following options is better, in terms of speed and performance, for exporting data from Hive/HDFS to SQL Server?
1) Use Sqoop's export facility to connect to the RDBMS (SQL Server) and export the data directly.
2) Dump CSV files from Hive using the INSERT OVERWRITE LOCAL DIRECTORY command and then run BCP (or a BULK INSERT query) on those CSV files to load the data into the SQL Server database.
Or,
Is there any other better option?
In my experience, I use bcp whenever I can. From what I can tell, it's the fastest way to shotgun data into a database, and it's configurable at a (somewhat) fine-grained level.
A couple of things to consider (a rough sketch follows the list):
Use a staging table: no primary key, no indexes, just raw data.
Have a "consolidation" proc to move the data around after loading.
Use a batch size of about 5,000 rows to start, but if performance is of utmost concern, test.
Make sure you increase your timeout.
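A rough sketch of what that can look like in a batch file (untested; the server, database, table, file, and proc names are all made up):
REM load the raw file into the staging table; -b is the batch size, -t the field terminator
bcp StagingDb.dbo.RawLoad in nightly_extract.csv -S myserver -T -c -t, -b 5000
REM run the consolidation proc with a generous query timeout (-t is seconds for sqlcmd)
sqlcmd -S myserver -d StagingDb -E -Q "EXEC dbo.usp_ConsolidateRawLoad" -t 1800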

Sqoop Export into SQL Server vs. Bulk Insert into SQL Server

I have a question regarding Apache Sqoop. I have imported data into my HDFS files using the Sqoop import facility.
Next, I need to put the data back into another database (basically I am transferring data from one database vendor to another) using Hadoop (Sqoop).
To put data into SQL Server, there are 2 options:
1) Use the Sqoop export facility to connect to my RDBMS (SQL Server) and export the data directly.
2) Copy the HDFS data files (which are in CSV format) to my local machine using the copyToLocal command and then run BCP (or a BULK INSERT query) on those CSV files to put the data into the SQL Server database.
I would like to understand which is the correct approach, and which of the two is faster - Bulk Insert or Apache Sqoop export from HDFS into the RDBMS?
Are there any other ways, apart from the two mentioned above, that can transfer data faster from one database vendor to another?
I am using 6-7 mappers (around 20-25 million records to be transferred).
Please suggest, and kindly let me know if my question is unclear.
Thanks in advance.
If all you are doing is ETL from one vendor to another, then going through Sqoop/HDFS is a poor choice. Sqoop makes perfect sense if the data originates in HDFS or is meant to stay in HDFS. I would also consider Sqoop if the data set is so large as to warrant a big cluster for the transformation stage. But a mere 25 million records is not worth it.
With SQL Server imports it is imperative, on large loads, to achieve minimal logging, which requires bulk insert. Although 25 million rows is not so large as to make the bulk option imperative, AFAIK neither Sqoop nor Sqoop2 supports bulk insert for SQL Server yet.
I recommend SSIS instead. It is much more mature than Sqoop, it has a Bulk Insert task, and it has a rich transformation feature set. Your small import is well within the size SSIS can handle.
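For reference, the minimal-logging point boils down to something like this with a plain BULK INSERT (sketch only, not the SSIS route; the server, database, table, and file path are placeholders, and minimal logging also depends on the recovery model and on the target being a heap or empty table):
REM myserver, MyDb, dbo.Staging and the csv path are placeholders
sqlcmd -S myserver -d MyDb -E -Q "BULK INSERT dbo.Staging FROM 'D:\hdfs_export\part-00000.csv' WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK, BATCHSIZE = 100000)"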

How to copy .mdb file to .mdf file

I want to copy an MS Access database (.mdb) to a SQL Server database (.mdf). I did it with the SQL Server Import and Export Data wizard, but I want it to copy the data regularly or at a specific time. Is that possible or not? I have tried to create a batch file
copy /y "E:\Dinesh Work\for-reports.mdb" "C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\for-reports.mdf"
but it gives the following error:
The process cannot access the file because it is being used by another process.
I have the same tables in my SQL Server database as in the Access database. Is there any solution with a batch file or something else?
Any suggestion will be appreciated.
Thanks in advance.
This sounds like you want to use the Access database to edit the data in the SQL Server database.
Is that correct?
If yes, do you really need to copy the data?
You could also link from Access to the SQL Server tables.
This way, you have tables in Access that look like normal local Access tables, but they are really just links to SQL Server tables. You can edit data in these tables in Access, but you are actually writing directly into the SQL Server database.
Here are some examples how to set this up:
Link to SQL Server data
Access to SQL Server: Linking Tables
MDB and MDF files are wildly different types. You can't just copy them.
You might try setting up an SSIS task to do a regular data transfer - something like ETL if you're familiar with that term.
EDIT: The reason you're seeing the file-locked error is that SQL Server holds a lock on the MDF file while the database is online. In order to move or copy it, you need to take that particular database offline.
As @Yuck said, you cannot just copy the file and rename it; you need something like ETL, or just a tool to export the data.
I wrote xportdsl to copy from an H2 database to MSSQL, and from MSSQL to Oracle:
http://code.google.com/p/xportdsl/
I used GORM and a hacky DSL; it worked and is still working.
You can script a .bat file to execute something like this: java -jar xportdsl.jar test001.txt
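If you go the batch-file route on Windows, you can schedule it with the built-in Task Scheduler, roughly like this (sketch; the task name, paths, and time are placeholders):
REM contents of C:\jobs\nightly-export.bat (placeholder path)
java -jar xportdsl.jar test001.txt
REM register it to run every night at 02:00
schtasks /create /tn "NightlyExport" /tr "C:\jobs\nightly-export.bat" /sc daily /st 02:00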

Equivalent of Oracle export for SQL Server and/or db2

Can SQL Server or DB2 do entire-database exports like Oracle (using the exp command)?
I've searched the internet and found bcp for SQL Server, but it seems I would have to iterate over all the tables to get what I want.
For DB2 it looks to be roughly the same. Is there something I'm missing? Does anyone have any suggestions and/or opinions? Thanks ahead of time.
This is for SQL Server.
Backup & Restore
To export an entire database with SQL Server, you can do a BACKUP and RESTORE:
BACKUP: http://msdn.microsoft.com/en-us/library/ms186865.aspx
RESTORE: http://msdn.microsoft.com/en-us/library/ms186858.aspx
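In command-line terms that boils down to something like this (sketch; the server names, paths, and logical file names are placeholders - check yours with RESTORE FILELISTONLY):
REM sourceserver, targetserver, MyDb and all paths/logical names are placeholders
sqlcmd -S sourceserver -E -Q "BACKUP DATABASE MyDb TO DISK = 'D:\backup\MyDb.bak' WITH INIT"
sqlcmd -S targetserver -E -Q "RESTORE DATABASE MyDb FROM DISK = 'D:\backup\MyDb.bak' WITH MOVE 'MyDb' TO 'D:\data\MyDb.mdf', MOVE 'MyDb_log' TO 'D:\data\MyDb_log.ldf'"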
Export and Import
You can right-click a database in SQL Server Management Studio and, under Tasks, click Export Data. Follow the wizard to choose the objects you want to export and put them into the appropriate location.
Custom SSIS for Raw format
Build an SSIS package that reads data from the source table and puts it into a RAW file on disk for later use. Raw files hold the structure of the table as well as the data.
DB2 for Linux, UNIX, and Windows has a utility called db2move, which generates the DDL to rebuild the database from scratch and iterates through all the tables to dump their contents to flat files via the EXPORT command.
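The basic invocation is something like this (sketch; SAMPLEDB is a placeholder database name), with the matching import or load action reloading the files on the target side:
db2move SAMPLEDB export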
