Import CSV into SQL Server database, keeping ID column values - sql-server

I am working to migrate a SQLite database to SQL Server, and I need to use IntelliJ IDEA to import all the data from the SQLite tables into the MSSQL database.
I have exported the data to CSV format, but when I import it into SQL Server, I need to maintain the existing ID column values (as foreign keys refer to them).
Normally, I can do this by executing SET IDENTITY_INSERT xxx ON; prior to my INSERT statements.
However, I do not know how to do this when importing CSV using IntelliJ.
The only other option I see is to export the data as a series of SQL INSERT statements, but that is very time-consuming, as the schemas between the two databases are slightly different (not to mention the SQL syntax).
Is there another way to import this data?

I don't know how to turn identity insert on in an IntelliJ query, but I do know how to work around this problem. Import your data into a temporary staging table first, then execute a query within SQL Server that (see the sketch below):
Sets IDENTITY_INSERT ON
Inserts the data from the staging table into the final destination
Sets IDENTITY_INSERT OFF
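A minimal sketch of that script, assuming a staging table named Staging_MyTable and a final table MyTable with an identity column Id (all three names are placeholders, not from the question):
-- The CSV is imported into dbo.Staging_MyTable first (no identity column there).
SET IDENTITY_INSERT dbo.MyTable ON;

-- An explicit column list is required when inserting into an identity column.
INSERT INTO dbo.MyTable (Id, Name, OtherColumn)
SELECT Id, Name, OtherColumn
FROM dbo.Staging_MyTable;

SET IDENTITY_INSERT dbo.MyTable OFF;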
The real benefit is that you avoid spending (potentially) hours figuring out how to turn identity insert on from IntelliJ for something you may never need to do again, and the script itself is straightforward to write.
However, if you want to find out whether there is a way to do this directly in IntelliJ, go for it; that would be the cleaner approach.

Related

Insert data into SQL tables manually using the related columns in Management Studio

I am trying to insert data into some related tables in SQL Server 2008 R2, and I am trying to figure out whether there is an easier way to insert the data manually (visually) using the related columns rather than the IDs. Looking at the two snapshots of the tables, and at the WFUserGroup table in particular, I want something like a bound query (as in MS Access) where I can see the Name column instead of the ID, and the name of the group instead of the group_id.
I know that with a TRANSACTION block and INSERT INTO statements I can create a new user in the WFUser table and then relate it to a group in the WFUserGroup table, but I keep telling myself there should be an easier way. Does anyone know a workaround?
Tables: (screenshots omitted)
One option is the Edit Top 200 Rows feature: right-click the table in Management Studio and type the rows in directly.
Alternatively, you could put the data in a flat file (.csv or Excel) and use the Import feature in SQL Server.
To get there, right-click the database and choose Tasks --> Import Data, then use the wizard to select the necessary file and tables.
I do see that there are primary and foreign keys, so make sure they are accounted for in the files you are going to import.
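For reference, the TRANSACTION-plus-INSERT approach mentioned in the question can look the group up by name so you never have to type the IDs yourself. A rough sketch, where the WFGroup table and all column names are guesses based on the question, not the real schema:
BEGIN TRANSACTION;

-- Create the user; the ID is generated by the identity column.
INSERT INTO WFUser (Name) VALUES ('Jane Doe');
DECLARE @user_id int = SCOPE_IDENTITY();

-- Link the new user to a group by the group's name instead of its ID.
INSERT INTO WFUserGroup (user_id, group_id)
SELECT @user_id, g.ID
FROM WFGroup g
WHERE g.Name = 'Administrators';

COMMIT TRANSACTION;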

preserve the data while dropping a hive internal table

I have loaded a huge table from SQL Server into Hive. The mistake I made is that I created it as an internal table in Hive. Can anyone suggest a hack so that I can alter the table structure without dropping the data?
The data is huge and I can't afford to export it out of the source again.
The problem right now is that, since the column order doesn't match the SQL Server table, a lot of columns display NULL.
Any help will be highly appreciated.
I do not see any problem with using ALTER TABLE on an internal table. (https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-AlterTable/Partition/Column)
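For example, if a column ended up in the wrong position, Hive's ALTER TABLE ... CHANGE COLUMN can move it within the schema. The table and column names below are placeholders, and note that this only rewrites the table metadata, not the underlying files:
-- Move col_b so that it sits right after col_a; keep its existing type.
ALTER TABLE my_table CHANGE COLUMN col_b col_b STRING AFTER col_a;
-- Or move a column to the front of the schema.
ALTER TABLE my_table CHANGE COLUMN col_c col_c INT FIRST;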
Another - but not recommended - option would be to open your Hive metastore (HCatalog) and apply the changes there. Hive reads its schema information from a relational database (configured during the Hadoop setup, often MySQL), and in that database you can try to change the table definition directly. However, this is not recommended, because a single mistake there can corrupt your whole Hive database.
The safest way is to create a new table, using the existing one as the source:
create table new_table as
select
  [...]  -- list the columns explicitly, in the order the new table should have
from existing_table;

Importing data into Oracle via Web Enterprise Manager with unique constraints

I am not at all familiar with Oracle so bear with me!
I am using Oracle 10g with the web front end called Enterprise Manager. I have been given some CSV files to import. With the Load Data from User Files option I think I can set everything up, but when the job runs it complains about unique constraint violations, I guess because duplicate data is being inserted.
How can I get the insert to create a new primary key, similar to an MSSQL auto-incrementing (identity) column?
Oracle does not have an analog to the MSSQL auto-incrementing field; the feature has to be simulated with Oracle sequences and triggers. Some options here are to:
create a trigger that populates the columns you want auto-incremented from a sequence
delete the offending duplicate keys in the table
change the conflicting values in your CSV file.
You might look at this related SO question.
There is no autoinc type in Oracle. You have to use a sequence.
By using a BEFORE INSERT trigger, you can get something similar to what an identity column gives you in SQL Server.
You can see here how to do it.
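A minimal sketch of the sequence-plus-trigger approach that works on Oracle 10g; the table my_table and its id column are placeholders, not from the question:
-- Sequence that supplies the surrogate key values.
CREATE SEQUENCE my_table_seq START WITH 1 INCREMENT BY 1;

-- BEFORE INSERT trigger that fills in id whenever the insert does not supply one.
CREATE OR REPLACE TRIGGER my_table_bi
BEFORE INSERT ON my_table
FOR EACH ROW
BEGIN
  IF :NEW.id IS NULL THEN
    SELECT my_table_seq.NEXTVAL INTO :NEW.id FROM dual;
  END IF;
END;
/
If the CSV load then leaves the id column out (or loads it as NULL), every imported row gets a fresh key, which avoids the duplicate-key errors on the primary key.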

MaxDB Data and Schema Export to SQL Server 2005/8

I am tasked with exporting the data contained inside a MaxDB database to SQL Server 200x. I was wondering if anyone has gone through this before and what your process was.
Here is my idea, but it's not automated.
1) Export data from MaxDB for each table as a CSV.
2) Clean the CSV to remove ? (which it uses for nulls) and fix the date strings.
3) Use SSIS to import the data into tables in SQL Server.
I was wondering if anyone has tried linking MaxDB to SQL Server or what other suggestions or ideas you have for automating this.
I managed to find a solution to this. There is an open-source MaxDB library that will allow you to connect to it through .NET, much like the SQL Server provider. You can use that to get schema information and data, then write a little code to generate scripts to run in SQL Server to create the tables and insert the data.
MaxDb Data Provider for ADO.NET
If this is a one-time thing, you don't have to have it all automated.
I'd pull the CSVs into SQL Server tables and keep them forever; they will help with any questions a year from now. You can prefix them all the same way, "Conversion_" or whatever. Put no constraints or FKs on these tables. You might also consider using varchar for every column (or just the ones that cause problems, or not at all if the data is clean), to be sure there are no data type conversion issues.
Then pull the data from these conversion tables into the proper final tables. I'd use a single conversion stored procedure to do everything (but I like T-SQL). If the data isn't that large (millions and millions of rows or less), just loop through and build out all the tables, printing log info as necessary, or inserting into exception/bad-data tables as necessary.
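A rough sketch of that approach with invented names (Conversion_Customer as the staging table, Customer as the final table), including the '?'-to-NULL and date cleanup the question mentions:
-- Staging table: every column varchar, no constraints, kept after the migration.
CREATE TABLE Conversion_Customer (
    Id        varchar(20),
    Name      varchar(200),
    CreatedOn varchar(30)
);

-- Load the CSV into Conversion_Customer (SSIS, BULK INSERT, ...), then:
INSERT INTO Customer (Id, Name, CreatedOn)
SELECT
    CAST(NULLIF(Id, '?') AS int),
    NULLIF(Name, '?'),
    CONVERT(datetime, NULLIF(CreatedOn, '?'), 120)  -- 120 = yyyy-mm-dd hh:mi:ss
FROM Conversion_Customer;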

How to import to SQL Server 2005 from flat file with data transformations

I have a flat data file that I need to import into my SQL Server 2005 DB.
Many of the fields need to be split off into different, related tables. For example, the flat file has names, addresses and telephone numbers, all in one record. In my DB, the Person table has many Telephones and Addresses.
Is there a one-step process whereby I can import everything into my tables, or do I have to first import it into a new table in my DB (ugh - pollution if I forget to delete it), and import the data from there using SQL statements and temp tables?
I prefer the single import table, followed by splitting the data out into the final tables (a sketch follows the list below).
I'd also persist the import table rather than creating/deleting it every time.
Easier to deal with constraints (check before inserting into the final table, or update the existing row)
Easier to leave the error-generating data in the import table after removing the successful rows
Server-side transaction
Data type safety: can you 100% trust your source?
Easier to use ISNULL or NULLIF in SQL to deal with empty strings and the like
and other things that I can't recall right now...
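A rough sketch of that two-step approach; every table and column name here is invented for illustration and will differ from the real schema:
-- 1) Everything from the flat file lands in one wide import table.
CREATE TABLE ImportContact (
    FullName  varchar(200),
    Address   varchar(500),
    Telephone varchar(50)
);

-- 2) Split it out into the normalized tables.
INSERT INTO Person (FullName)
SELECT DISTINCT FullName
FROM ImportContact;

INSERT INTO Address (PersonId, AddressText)
SELECT p.PersonId, i.Address
FROM ImportContact i
JOIN Person p ON p.FullName = i.FullName
WHERE i.Address IS NOT NULL;

INSERT INTO Telephone (PersonId, Number)
SELECT p.PersonId, i.Telephone
FROM ImportContact i
JOIN Person p ON p.FullName = i.FullName
WHERE i.Telephone IS NOT NULL;
Joining back on FullName assumes names are unique in the file; in practice you would join on whatever natural key the source data actually has.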
This is totally a job for SQL Server Integration Services. It has some great functionality that will allow you to grab a flat file, do data manipulation on it, and eventually import it into your new db.
Unfortunately, there isn't an easy "quick fix" solution that I know of outside of that, but it is the technology I would look into first.
