I'm importing a bacpac from Azure to a local SQL DB. The process goes for a while and on about the 50th table it fails with this error:
IDENTITY_INSERT is already on for table 'X'.
Cannot perform set operation for table 'Y'
Table 'X' was successfully processed already (it was like table #45 in the list).
Table 'Y' is the one currently processing (like table #50 in the list).
After the failure I see rows in table Y, so it seems IDENTITY_INSERT was ON for that table at some point. Not sure what else to check.
As a workaround, use the Import/Export Wizard instead. After selecting the source and destination tables, click Edit Mappings. In the resulting pop-up, click Edit SQL and add IDENTITY(1,1) to the column in the auto-generated SQL that you want the IDENTITY property on. Also make sure Enable identity insert is checked.
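Since only one table per session can have IDENTITY_INSERT ON at a time, a hedged first step is to turn it off for the table named in the error before retrying (the table name here is a placeholder from the error message):

```sql
-- Only one table per session may have IDENTITY_INSERT ON;
-- turning it off for the table the error names lets the next
-- table in the import set it for itself.
SET IDENTITY_INSERT dbo.X OFF;
```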
I tried to add a column to a table with the TablePlus GUI, but got no response for a long time.
So I turned to the db server directly, but got this error:
Maybe some inconsistent data was generated during the operation through TablePlus.
I am new to PostgreSQL and don't know what to do next.
-----updated------
I did some operations as @Dri372 suggested, and made some progress.
The reason it failed for tables sys_role and s2 is that those tables are not empty; they have some records.
If I run SQL like create table s3 AS SELECT * FROM sys_role; alter table s3 add column project_code varchar(50);, it succeeds.
Now how can I still work on the table sys_role?
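A common cause of an ALTER TABLE hanging like this is a lock held by another session (for example the stalled TablePlus connection), since ALTER TABLE needs an exclusive lock on the table. A sketch for finding and terminating the blocker, using the standard pg_locks and pg_stat_activity catalogs (the pid value is a placeholder):

```sql
-- Find sessions holding or waiting on locks on sys_role.
SELECT a.pid, a.state, a.query, l.mode, l.granted
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation = 'sys_role'::regclass;

-- Terminate the blocking session (use a pid found above).
SELECT pg_terminate_backend(12345);
```

After the blocking session is gone, the ALTER TABLE on sys_role should run normally even though the table has rows.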
I am trying to use the Import and Export Wizard to move a small data set from a CSV file to an existing (empty) table. I did Script Table As > Create To, to get the full DDL for this table. I know the type of the two fields which are causing problems is varchar(50). I'm getting this error message:
Error 0xc020902a: Data Flow Task 1: The "Source - Reconciliation_dbo_agg_boc_consolidated_csv.Outputs[Flat File Source Output].Columns["ReportScope"]" failed because truncation occurred, and the truncation row disposition on "Source - Reconciliation_dbo_agg_boc_consolidated_csv.Outputs[Flat File Source Output].Columns["ReportScope"]" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.
(SQL Server Import and Export Wizard)
The maximum length of any value is 49 characters, so I'm not sure why SQL Server is complaining about truncation. Is there any way to disable this error check and just force it to work? It should work as-is! Thanks everyone.
Is there any way to disable this error check and just force it to work? It should work as-is! Thanks everyone.
Yes. If you're using the wizard, you can view the table schema before running it, and check the option to ignore truncation.
The maximum length of any value is 49 characters, so I'm not sure why SQL Server is complaining about truncation.
The default data type of the source column in the Import Wizard may be text; change it to varchar(50) on the Advanced tab of the source.
To be on the safe side, check the column data type in both source and destination. If they don't match, declare all the columns as varchar with a generous maximum length, for example varchar(500) or varchar(max), and see what the result is.
Change max length of Varchar column:
ALTER TABLE YourTable ALTER COLUMN YourColumn VARCHAR (500);
Then the column will default to allowing NULLs even if it was originally defined as NOT NULL, i.e. omitting the nullability specification in ALTER TABLE ... ALTER COLUMN is always treated as:
ALTER TABLE YourTable ALTER COLUMN YourColumn VARCHAR (500) NULL;
Check whether the column should be nullable or not nullable based on your requirements and change it accordingly.
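To keep the NOT NULL constraint while widening the column, state the nullability explicitly (table and column names are placeholders):

```sql
ALTER TABLE YourTable ALTER COLUMN YourColumn VARCHAR(500) NOT NULL;
```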
Use the steps below to import a CSV file into a database using SQL Server Management Studio.
Even when bulk copy and other bulk import options are not available on the SQL server, you can import a CSV-formatted file into your database using SQL Server Management Studio.
First, create a table in your database into which you will import the CSV file. After the table is created:
Log in to your database using SQL Server Management Studio.
Right click the database and select Tasks -> Import Data...
Click the Next > button.
For Data Source, select Flat File Source. Then use the Browse button to select the CSV file. Spend some time configuring the data import before clicking the Next > button.
For Destination, select the correct database provider (e.g. for SQL Server 2012, you can use SQL Server Native Client 11.0). Enter the Server name; check Use SQL Server Authentication, enter the User name, Password, and Database before clicking the Next > button.
In the Select Source Tables and Views window, you can Edit Mappings before clicking the Next > button.
Check Run immediately and click the Next > button.
Click the Finish button to run the package.
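When you do have bulk import permissions on the server, the same load can be done in T-SQL with BULK INSERT instead of the wizard. A sketch, assuming a comma-delimited file with a header row (the path and table name are placeholders):

```sql
BULK INSERT dbo.YourTable
FROM 'C:\data\yourfile.csv'
WITH (
    FIELDTERMINATOR = ',',   -- column delimiter
    ROWTERMINATOR   = '\n',  -- row delimiter
    FIRSTROW        = 2      -- skip the header row
);
```

Note that the file path is resolved on the SQL Server machine, not the client, which is one reason the wizard route is often preferred.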
I am trying to import many tables from access db to MS SQL server using the import wizard.
Some rows in the source tables have been deleted, so the sequence of IDs looks like this: 2, 3, 5, 8, 9, 12, ...
But when I import the data into my destination, the IDs start from 1 and increment by 1, so they don't match the source data.
I even checked "Enable identity insert" but it does not help.
The only workaround I have found is to change the ID columns in the destination tables from identity to plain integer one by one, import, and then change them back to identity, which is very time-consuming.
Is there any better way to do this?
If you want to insert an id in the identity column, you need to use:
SET IDENTITY_INSERT table_name ON
https://msdn.microsoft.com/es-us/library/ms188059.aspx
Remember to set it OFF at the end of the script.
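A minimal sketch of the full pattern; note that with IDENTITY_INSERT ON you must list the columns explicitly, including the identity column (table and column names are placeholders):

```sql
SET IDENTITY_INSERT dbo.Customers ON;

-- The explicit column list is required while IDENTITY_INSERT is ON.
INSERT INTO dbo.Customers (CustomerID, Name)
VALUES (2, 'Alice'), (3, 'Bob'), (5, 'Carol');

SET IDENTITY_INSERT dbo.Customers OFF;
```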
We have a database a that is replicated to a subscriber database b (used for SSRS reporting) every night at 2:45 AM.
We need to add a column to one of the replicated tables, since its source file in our iSeries is having a column added that we need to use in our SSRS reporting database.
I understand (from Making Schema Changes on Publication Databases, and from the answer here by Damien_The_Unbeliever) that there is a default setting in SQL Server replication whereby if we use a T-SQL ALTER TABLE DDL statement to add the new column to our table BUPF in the PION database, the change will automatically propagate to the subscriber database.
How can I check the replication of schema changes setting to ensure that we will have no issues with the replication following making the change?
Or should I just run ALTER TABLE BUPF ADD BUPCAT CHAR(5) NULL?
To add a new column to a table and include it in an existing publication, you'll need to use ALTER TABLE <Table> ADD <Column> syntax at the publisher. By default the schema change will be propagated to subscribers; for this to happen the publication property @replicate_ddl must be set to true, which it is by default.
You can verify whether @replicate_ddl is set to true by executing sp_helppublication and inspecting the replicate_ddl value. Likewise, you can set @replicate_ddl to true by using sp_changepublication.
See Making Schema Changes on Publication Databases for more information.
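A sketch of checking and, if needed, changing the setting (the publication name is a placeholder; run in the publication database):

```sql
USE PION;

-- Inspect the replicate_ddl column in the result set
-- (1 = schema changes are replicated to subscribers).
EXEC sp_helppublication @publication = N'YourPublication';

-- Enable DDL replication if it was turned off.
EXEC sp_changepublication
    @publication = N'YourPublication',
    @property    = N'replicate_ddl',
    @value       = 1;
```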
I want to update a static table on my local development database with current values from our server (accessed on a different network/domain via VPN). Using the Data Import/Export wizard would be my method of choice, however I typically run into one of two issues:
I get primary key violation errors and the whole thing quits. This is because it's trying to insert rows that I already have.
If I set the "delete from target" option in the wizard, I get foreign key violation errors because there are rows in other tables that are referencing the values.
What I want is the correct set of options that means the Import/Export wizard will update rows that exist and insert rows that do not (based on primary key or by asking me which columns to use as the key).
How can I make this work? This is on SQL Server 2005 and 2008 (I'm sure it used to work okay on the SQL Server 2000 DTS wizard, too).
I'm not sure you can do this in Management Studio. I have had good experiences with Red Gate SQL Data Compare for synchronizing databases, but you do have to pay for it.
The SQL Server Database Publishing Wizard can export a set of SQL INSERT scripts for the table that you are interested in. Just tell it to export data only, not schema. It will also create the necessary DROP statements.
One option is to download the data to a new table, then use commands similar to the following to update the target:
update t set
    col1 = d.col1,
    col2 = d.col2
from downloaded d
inner join target t on d.pk = t.pk;

insert into target (col1, col2, ...)
select d.col1, d.col2, ...
from downloaded d
where d.pk not in (select pk from target);
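On SQL Server 2008 and later, the update-then-insert above can also be expressed as a single MERGE; a sketch under the same assumed table and column names:

```sql
MERGE target AS t
USING downloaded AS d
    ON t.pk = d.pk
WHEN MATCHED THEN
    UPDATE SET col1 = d.col1, col2 = d.col2   -- row exists: update it
WHEN NOT MATCHED THEN
    INSERT (pk, col1, col2)                    -- row missing: insert it
    VALUES (d.pk, d.col1, d.col2);
```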
If you disable the FK constraints during the second option, and re-enable them after you finish, it will work.
But if you are using an identity column to generate PK values that are involved in the FKs, it will cause a problem; this works only if the PK values remain the same.
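Disabling and re-enabling a foreign key around the load would look like this (constraint and table names are placeholders; WITH CHECK revalidates the existing rows when re-enabling, so bad references surface immediately):

```sql
-- Disable the FK so the delete-and-import can run.
ALTER TABLE dbo.ChildTable NOCHECK CONSTRAINT FK_Child_Target;

-- ... run the import here ...

-- Re-enable and revalidate the constraint.
ALTER TABLE dbo.ChildTable WITH CHECK CHECK CONSTRAINT FK_Child_Target;
```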