How to create identity column when importing data from Excel into MS SQL Server (with Import and Export Wizard)? - sql-server

I need to import a large amount of data from Excel into MS SQL Server using the Import/Export Wizard, and then I'll continue importing more data into the same table on a weekly basis.
The problem is that my Excel data doesn't have an identity column to use as a primary key. The only option with what's available is to use two string columns as a composite primary key, which is not a good idea.
Is there a way for SQL Server to add an auto-incrementing identity column (integer) when importing the data, and what's the trick? I'd prefer such a column to be added automatically, because I'll need to import a large amount of data into the same table every week.
I tested a couple of times (with no success) and looked for a solution on the internet, but didn't find an answer to this particular question. Thanks in advance!

You can create the table first along with the new identity column.
CREATE TABLE YourTable
(
    id INT IDENTITY(1, 1),
    col1 NVARCHAR(255),   -- replace col1..col3 with your actual Excel columns and data types
    col2 NVARCHAR(255),
    col3 NVARCHAR(255),
    PRIMARY KEY (id)
)
Then run the Import/Export Wizard. When you get to the destination section, pick your newly created table and map all the fields except the identity column. After the import you can check the table and see that the id column has been populated.
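For instance, a quick sanity check after the import could look like this (YourTable and the column names are just the placeholders from the CREATE TABLE above):
-- The id values are generated by SQL Server as the rows are inserted
SELECT TOP (10) id, col1, col2, col3
FROM YourTable
ORDER BY id;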

Column names in the Excel sheet should be the same as those in the SQL table.
Map the Excel columns to the SQL table columns by clicking Edit Mappings.
Just don't map the identity column of the SQL table to anything.
In the Import/Export Wizard, don't check the Enable identity insert checkbox (leave it unselected), and go ahead and import. This worked for me!
Previously, when I checked Enable identity insert, it gave me an error.

I had a similar issue. I have a SQL table with an identity column (auto increment ID value) defined. I needed to import an Excel spreadsheet into this table.
I finally got it to work via the following:
Do NOT add a column to the Excel spreadsheet for your identity column in the SQL table.
When you run the import wizard and get to the Edit Mappings step, do NOT select the Enable identity insert checkbox. This, in particular, was tripping me up.

Related

How can I import Excel data into an existing table in SQL Server and keep primary key incrementing accordingly

I am using the SQL Server Import and Export Wizard to import data from an Excel sheet into an existing table in my database. I want to append the data to the existing data; however, I am stuck on figuring out how to keep the primary key in this table incrementing with the data I import. I don't have the [ID] column in my Excel sheet. The last value in my [ID] column is 105, so I would like it to continue from there once I import.
Is there any way to do this through the wizard? I've tried the "Enable Identity Insert" checkbox in the column mappings window; however, I still receive an error that I cannot insert NULL values.
On which column is the NULL error? If it's the ID column, just leave it unmapped; as long as it's an identity column, it will automatically increment when the rest of the data is imported. If not, you can undo the import and use DBCC CHECKIDENT to change the ID it will start at.
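As a rough sketch (the table name dbo.YourTable is an assumption), checking and reseeding so the next inserted row gets 106 could look like this:
-- Show the current identity value without changing it
DBCC CHECKIDENT ('dbo.YourTable', NORESEED);
-- Set the current identity value to 105, so the next row inserted gets 106
DBCC CHECKIDENT ('dbo.YourTable', RESEED, 105);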
It should be simple: make your ID column an identity column incrementing by 1, and while importing data from your Excel file, don't map anything to this column. Just import all the other columns without enabling the Identity Insert checkbox.

How to do incremental load in SQL Server

I have DB tables with no identity column. We have client data fetched from DB2 into SQL Server, and unfortunately the DB2 design doesn't have identity columns.
Now data gets inserted, updated, and deleted in the source (DB2/SQL Server), and I want to load this data into the destination (SQL Server) using some incremental load concept.
I tried SSIS lookups in a Data Flow task; however, it's taking a huge amount of time simply to insert one new record. Please note that in the Lookup Transformation Editor I'm mapping all "available input columns" to the "available lookup columns", as there is no identity column. I think this is why it's taking so long. I have a few tables with around 20 million records.
Is there any faster method available to do this, especially when the table does not have an identity column? Would EXCEPT or SQL MERGE help?
I'm open to any approach other than SSIS.
Lookups in SSIS take some time, so you can use an Execute SQL Task and call a merge procedure instead. A MERGE statement can update the matched records and insert the new ones in one pass, roughly like this (table and column names are placeholders):
MERGE dbo.DestinationTable AS d
USING (SELECT primarykey, col1, col2 FROM dbo.SourceTable) AS s
    ON s.primarykey = d.primarykey
WHEN MATCHED THEN
    UPDATE SET d.col1 = s.col1,
               d.col2 = s.col2
WHEN NOT MATCHED THEN
    INSERT (primarykey, col1, col2)
    VALUES (s.primarykey, s.col1, s.col2);
With the above query, new records will be inserted and matching records will be updated in your destination table.
You can go to the following links for more on MERGE procedures:
https://www.sqlservercentral.com/Forums/Topic1042053-392-1.aspx
https://msdn.microsoft.com/en-us/library/bb510625.aspx
If your source is a SQL query against DB2, for instance, try adding a new column to it: a checksum value over the columns you expect to change or want to monitor for changes.
SELECT
    BINARY_CHECKSUM(
        Column1
        ,Column2
        ,Column3) AS ChecksumValue
    ,Column1
    ,Column2
    ,Column3
FROM #TEMP
You would have to add this to your existing table in SQL as well to be able to start comparing.
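One way to do that, as a sketch with assumed names (dbo.DestinationTable and the three columns above), is a persisted computed column, so the checksum is maintained automatically:
-- SQL Server recomputes the checksum whenever the monitored columns change
ALTER TABLE dbo.DestinationTable
    ADD ChecksumValue AS BINARY_CHECKSUM(Column1, Column2, Column3) PERSISTED;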
If you have this, then you can do the lookup on the checksum value rather than on the columns, since integer lookups are a lot quicker than varchar comparisons over multiple columns. I am guessing that since there is no key, you would then have to split the data between checksum matches (which should be unchanged existing records) and non-matches; see the sketch below. The non-matches could be new rows or just updates, but your set should be smaller to work with.
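A minimal sketch of that split, assuming the DB2 extract has been landed in a staging table dbo.StagingTable that includes the ChecksumValue column from the query above (all names are assumptions):
-- Rows whose checksum has no match in the destination: new rows or changed rows
SELECT s.Column1, s.Column2, s.Column3
FROM dbo.StagingTable AS s
LEFT JOIN dbo.DestinationTable AS d
    ON d.ChecksumValue = s.ChecksumValue
WHERE d.ChecksumValue IS NULL;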
Good luck. HTH

SQL Server 2016 CSV file import

I have a table dump in CSV format. Using SQL Server Management Studio I can import that CSV file, but the problem is that I cannot set a column as the primary key.
My CSV file/table has the columns below, and I want to set PARENT_NAME as the primary key while importing.
PARENT_NAME | QUANTITY | COMPONENT_NAME
Please guide me on how to do this.
Thanks in advance
During the import there is a step where you pick the table to import into. As part of that, you can manually edit the SQL CREATE TABLE statement. I would assume you could modify that to specify the primary key, but I haven't ever tried to do this.
However, unless you have a specific reason to do this before the import, I would just use an ALTER TABLE statement to make the column NOT NULL and set it as the primary key. To me that just seems easier.
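A rough sketch of that, assuming the imported table is named dbo.Components and PARENT_NAME came in as VARCHAR(100) (both are assumptions):
-- A primary key column must be NOT NULL before the constraint can be added
ALTER TABLE dbo.Components
    ALTER COLUMN PARENT_NAME VARCHAR(100) NOT NULL;
ALTER TABLE dbo.Components
    ADD CONSTRAINT PK_Components PRIMARY KEY (PARENT_NAME);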

How to import to a SQL Server table and start primary key where left off

I can't seem to find an answer to this. I am very new to SQL Server. I have been trying to set up a database to be updated daily for a website.
There is a .CSV file produced daily. I have set up a script to copy the file, edit the text and import the file into a table in SQL Server 2012.
There are 16 fields in the .CSV file. I have a 17th field in the table I import it into.
The 17th field is the Primary Key which I have set to autoincrement.
My problem is this:
I'm implementing this as a new process. This is already set up and in operation on an older server. The older server was using MySql. The Primary Key was left off at 81,720,024.
I have set the Primary Key field to autoincrement with a seed of 81720024.
Every time I update the table, I truncate the table first and then import from a staging table. The primary key always starts at 81720024. I need it to increment from the last entry it had. Please help!
Try deleting from the table instead of truncating.
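A minimal sketch of the difference, with a hypothetical table name:
-- DELETE removes the rows but leaves the identity counter where it was,
-- so the next import keeps incrementing from the last value issued
DELETE FROM dbo.DailyImport;
-- TRUNCATE TABLE, by contrast, resets the identity back to its original seed.
-- If the counter ever does get reset, it can be moved manually:
-- DBCC CHECKIDENT ('dbo.DailyImport', RESEED, 81720024);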

How to import Case-Sensitive data from Oracle to SQL Server using SSIS

I am trying to import data from Oracle into SQL Server using SSIS.
The problem is that I have a PK of datatype VARCHAR2(200) in one of the tables, containing case-sensitive data in the Oracle DB. Hence SSIS, while importing the data, is throwing:
Violation of PK, cannot insert duplicate value in PK
How should I work around this? Is there any solution other than the accepted answer of that question, because it's not feasible for me to drop and re-create the database to enable case-sensitive data?
You don't need to recreate the database. You just need to make the column case-sensitive.
Open the table in Design mode, choose your column, and open the Collation row.
Just check the "Case Sensitive" checkbox, push OK, and save the table. Now it will be OK.
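The same change can also be scripted in T-SQL. A sketch with assumed table, column, and constraint names; note that a primary key constraint on the column has to be dropped before its collation can be changed:
ALTER TABLE dbo.YourTable DROP CONSTRAINT PK_YourTable;
ALTER TABLE dbo.YourTable
    ALTER COLUMN YourPkColumn VARCHAR(200) COLLATE Latin1_General_CS_AS NOT NULL;
ALTER TABLE dbo.YourTable
    ADD CONSTRAINT PK_YourTable PRIMARY KEY (YourPkColumn);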
If you can, add a new column, set its collation to a case-sensitive one, reload the records, and then rename the columns accordingly:
SELECT 1 AS TEST INTO #TT
ALTER TABLE #TT ADD new_pk_case_sensitive VARCHAR(200) COLLATE Latin1_General_CS_AS
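On the real table, the rename step might look like this (dbo.YourTable and the column names are assumptions):
-- Keep the original column under a new name, then promote the case-sensitive copy
EXEC sp_rename 'dbo.YourTable.YourPkColumn', 'YourPkColumn_old', 'COLUMN';
EXEC sp_rename 'dbo.YourTable.new_pk_case_sensitive', 'YourPkColumn', 'COLUMN';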
