Does Data Import Replace Existing Data in MySQL Workbench?

I'm working with a database in a development environment in MySQL Workbench. I have everything ready to go and need to move it to a prod database. I've exported it to an SQL file, but I'm unsure whether I'm approaching the import the right way.
If I use the "Data Import/Restore" feature, select my SQL file, and import, will it replace the existing data in the database (what I want to happen) or will it add new records to each table for the new data?
The schema is the same in each database. I just need to replace the old data in the prod database with the new data from dev.
Thanks for your help

That depends on what your export file contains. Just open it in a text editor and read over the statements.
By default it should contain statements like:
CREATE TABLE IF NOT EXISTS `customer` (
  `CUSTOMER_ID` int(11) NOT NULL,
  `CUSTOMER_NM` varchar(100) DEFAULT ''
) ENGINE=InnoDB;
and right after it the data of this table:
INSERT INTO `customer` (`CUSTOMER_ID`, `CUSTOMER_NM`) VALUES
(0, 'Dummy Customer'),
(1, 'Dummy Two');
Since your tables already exist in your PROD environment, it will not delete, create, or replace them (note the CREATE TABLE IF NOT EXISTS statement). The INSERT statements will be executed (there is no condition saying they shouldn't).
So after importing your file you will have your previous PROD data in your database plus the imported DEV data from your DEV environment.
On the other hand it could contain a statement like:
DROP TABLE IF EXISTS `customer`;
And right after it the CREATE statement, followed by some INSERT statements. In this case your whole PROD database will be replaced by the DEV data, as you want.
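For reference, if you generate the dump from the command line rather than through Workbench, mysqldump can be told explicitly to emit those DROP statements (a sketch; the user, database, and file names are placeholders):
mysqldump --add-drop-table -u devuser -p dev_db > dev_dump.sql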

You can use the MySQL-specific REPLACE statement to achieve this goal; check out this link.
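For reference, REPLACE works like INSERT, but if an old row has the same value for a PRIMARY KEY or UNIQUE index as the new row, the old row is deleted first. A minimal sketch, reusing the customer table from above:
REPLACE INTO `customer` (`CUSTOMER_ID`, `CUSTOMER_NM`) VALUES
(0, 'Dummy Customer'),
(1, 'Dummy Two');
Note that this replaces individual rows; it won't remove PROD rows whose keys don't appear in the dump.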

Related

Enable identity insert is not working when importing data

I am trying to import many tables from an Access DB to MS SQL Server using the import wizard.
Some rows in the source tables have been deleted, so the sequence of IDs looks like this: 2, 3, 5, 8, 9, 12, ...
but when I import the data into my destination, the IDs start from 1 and increment by 1, so they don't exactly match the source data.
I even checked "Enable Identity insert" but it does not help.
The only workaround I have found is to change the IDs in the destination tables from identity to integer one by one, then import, and then change them back to identity, which is very time consuming.
Is there any better way to do this?
If you want to insert an id in the identity column, you need to use:
SET IDENTITY_INSERT table_name ON
https://msdn.microsoft.com/es-us/library/ms188059.aspx
Remember to set it OFF at the end of the script.
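A minimal sketch of the pattern (table and column names are placeholders; note that an explicit column list is required while IDENTITY_INSERT is ON):
SET IDENTITY_INSERT dbo.MyTable ON;

INSERT INTO dbo.MyTable (ID, Name)
VALUES (2, 'Alice'), (3, 'Bob'), (5, 'Carol');

SET IDENTITY_INSERT dbo.MyTable OFF;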

Append one or more rows from Excel to existing SQL Server table

I have imported an Excel table into SQL Server 2016 Express and made a table using the import wizard.
Now when I update my Excel sheet, e.g. add one or more rows to it, I also want to update my SQL Server table by appending those rows to it. What's the easiest way to do this, in an "append rows" manner? I don't want to import the whole Excel sheet again.
You asked for the easiest solution. Since you are already comfortable using the wizard it would seem to me that the easiest way is to import the "updated" Excel sheet / file into SQL Server Express using the wizard as well. Yet, import it into a new table (without removing the old one).
Afterwards, insert new rows or update the existing records on the SQL Server with a simple SQL MERGE statement. Afterwards, you can drop / delete the imported table again (because the existing table has been updated).
While I do not know your tables, the following SQL code sample shows a simple merge on a basic customer table, where tblCustomers would be the existing table (to be updated / have new rows inserted) and tblCustomersNEW would be the new import (which will be deleted again once the update / append is complete):
merge dbo.tblCustomers as target
using dbo.tblCustomersNEW as source
    on source.ID = target.ID
when matched then
    update set target.Name = source.Name,
               target.Country = source.Country
when not matched by target then
    insert (Name, Country)
    values (source.Name, source.Country);
Note that the MERGE statement requires a semicolon at the end, similar to a CTE requiring a semicolon before it starts: ;WITH myTable AS ....
For more information on the MERGE statement you might want to read the following article on MSDN: https://msdn.microsoft.com/en-us/library/bb510625.aspx

DACPAC schema compare runs before pre-deployment scripts during publish

When publishing a dacpac with sqlpackage.exe, it runs Schema Compare first, followed by the pre-deployment scripts. This causes a problem when, for instance, you need to drop a table or rename a column: Schema Compare was done before the object was modified, and the deployment fails. The publish must be repeated to take the new schema into account.
Anyone have a work-around for this that does not involve publishing twice?
Gert Drapers called it a pre-pre-deployment script here.
It actually is a challenge. If you need to add a non-nullable, foreign-key column to a table full of data, you can only do it with a separate script.
If you are the only developer that is not a problem, but when you have a large team, that "separate script" has to somehow be executed before every DB publish.
The workaround we used:
Create a separate SQL "Before-publish" script (in the DB project) with the property [Build action = None]
Create a custom MSBuild Task that calls the SQLCMD.EXE utility, passing the "Before-publish" script as a parameter, and then calls the SQLPACKAGE.EXE utility, passing DB.dacpac
Add a call to the custom MSBuild Task in the db.sqlproj file. For example:
<UsingTask
    TaskName="MSBuild.MsSql.DeployTask"
    AssemblyFile="$(MSBuildProjectDirectory)\Deploy\MsBuild.MsSql.DeployTask.dll" />
<Target Name="AfterBuild">
    <DeployTask
        Configuration="$(Configuration)"
        DeployConfigPath="$(MSBuildProjectDirectory)\Deploy\Deploy.config"
        ProjectDirectory="$(MSBuildProjectDirectory)"
        OutputDirectory="$(OutputPath)"
        DacVersion="$(DacVersion)">
    </DeployTask>
</Target>
MsBuild.MsSql.DeployTask.dll above is that custom MSBuild Task.
Thus the "Before-publish" script can be called from Visual Studio.
For CI we used a batch file (*.bat) where the same two utilities (SQLCMD.EXE & SQLPACKAGE.EXE) were called.
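A minimal sketch of such a batch file (the server, database, and file names are assumptions):
sqlcmd -S %SERVER% -d %DATABASE% -i "Before-publish.sql"
sqlpackage /Action:Publish /SourceFile:"DB.dacpac" /TargetServerName:%SERVER% /TargetDatabaseName:%DATABASE%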
The final process we've got is a little bit complicated and deserves to be described in a separate article; here I've only outlined the direction :)
Move from using Visual Studio to using scripts that drive sqlpackage.exe, and you have the flexibility to run scripts before the compare:
https://the.agilesql.club/Blog/Ed-Elliott/Pre-Deploy-Scripts-In-SSDT-When-Are-They-Run
ed
We faced a situation where we needed to transform data from one table into another during deployment of the database project. Of course this is a problem to do with the DB project, because at pre-deployment time the destination table (column) doesn't exist yet, while in the post-deployment script the source table (column) is already gone.
To transform data from TableA to TableB we used the following idea (This approach can be used for any data modifications):
A developer adds the destination table (dbo.TableB) to the DB project and deploys it onto the local DB (without committing to SVN)
He or she creates a pre-deployment transformation script. The trick is that the script puts the result data into a temporary table: #TableB
The developer deletes dbo.TableA in the DB project. It is assumed that the table will be deleted during execution of the main generated script.
The developer writes a post-deployment script that copies data from #TableB to dbo.TableB, which was just created by the main script.
All of the changes are committed to SVN.
This way we don't need the pre-pre-deployment script, because we store the intermediate data in the temporary table.
I'd like to point out that the approach using the pre-pre-deployment script has the same intermediate (temporary) data; however, it is stored in real tables rather than temporary ones. It exists between the pre-pre-deployment and the pre-deployment, and disappears after the pre-deployment script has executed.
What is more, the approach using temporary tables lets us handle the following complicated but real situation. Imagine that we have two transformations in our DB project:
TableA -> TableB
TableB -> TableC
Apart from that we have two databases:
DatabaseA, which has TableA
DatabaseB, where TableA was already transformed into TableB. TableA is absent in DatabaseB.
Nonetheless we can deal with this situation. We just need one new action in the pre-deployment: before the transformation we try to copy data from dbo.TableA into #TableA. The transformation script then works with temporary tables only.
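That extra copy action might look like this (a sketch; the column list is an assumption):
IF OBJECT_ID('dbo.TableA') IS NOT NULL AND
   OBJECT_ID('tempdb..#TableA') IS NULL
BEGIN
    SELECT [Id], [Value1], [Value2]
    INTO #TableA
    FROM dbo.TableA
END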
Let me show you how this idea works in DatabaseA and DatabaseB.
It is assumed that the DB project has two pairs of pre- and post-deployment scripts: "TableA -> TableB" and "TableB -> TableC".
Below is an example of the scripts for the "TableB -> TableC" transformation.
Pre-deployment script
----[The data preparation block]---
--We must prepare for a possible transformation
--The condition should verify the existence of the necessary columns
IF OBJECT_ID('dbo.TableB') IS NOT NULL AND
   OBJECT_ID('tempdb..#TableB') IS NULL
BEGIN
    CREATE TABLE #TableB
    (
        [Id] INT NOT NULL PRIMARY KEY,
        [Value1] VARCHAR(50) NULL,
        [Value2] VARCHAR(50) NULL
    )

    INSERT INTO [#TableB]
    SELECT [Id], [Value1], [Value2]
    FROM dbo.TableB
END
----[The data transformation block]---
--The condition for starting the transformation
--It is very important. It must be as strict as possible to ward off wrong executions.
--The condition should verify the existence of the necessary columns
--Note that the condition and the transformation must use #TableB instead of dbo.TableB
IF OBJECT_ID('tempdb..#TableB') IS NOT NULL
BEGIN
    CREATE TABLE [#TableC]
    (
        [Id] INT NOT NULL PRIMARY KEY,
        [Value] VARCHAR(50) NULL
    )

    --Data transformation. The source and destination tables must be temporary tables.
    INSERT INTO [#TableC]
    SELECT [Id], Value1 + ' ' + Value2 AS Value
    FROM [#TableB]
END
Post-deployment script
--Here there must be a strict condition to ward off a failure
--Checking the existence of the fields is a good idea
IF OBJECT_ID('dbo.TableC') IS NOT NULL AND
   OBJECT_ID('tempdb..#TableC') IS NOT NULL
BEGIN
    INSERT INTO [TableC]
    SELECT [Id], [Value]
    FROM [#TableC]
END
In DatabaseA the pre-deployment script has already created #TableA (filled from dbo.TableA). The data preparation block of "TableB -> TableC" therefore won't be executed, because there is no dbo.TableB in the database.
However, its data transformation will be executed, because #TableB is present in the database, created by the transformation block of "TableA -> TableB".
In DatabaseB the data preparation and transformation blocks of the "TableA -> TableB" script won't be executed. However, we already have the transformed data in dbo.TableB. Hence the data preparation and transformation blocks for "TableB -> TableC" will be executed without any problem.
I use the below workaround in such scenarios.
If you would like to drop a table
Retain the table within the dacpac (Under Tables folder).
Create a post deployment script to drop the table.
If you would like to drop a column
Retain the column in the table definition within dacpac (Under Tables folder).
Create a post deployment script to drop the column.
This way you can drop tables and columns from your database, and whenever you make the next deployment (maybe after a few days or even months), exclude that table / those columns from the dacpac so that the dacpac is updated with the latest schema. A sketch of such a post-deployment script follows.
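This assumes SQL Server object-existence checks; the table and column names are placeholders:
IF OBJECT_ID('dbo.ObsoleteTable') IS NOT NULL
    DROP TABLE dbo.ObsoleteTable;

IF COL_LENGTH('dbo.Customer', 'ObsoleteColumn') IS NOT NULL
    ALTER TABLE dbo.Customer DROP COLUMN ObsoleteColumn;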

How to merge a table from Access to SQL Express?

I have one table named "Staff" in Access and also have this table (same name) in SQL Server 2008.
Both tables have thousands of records. I want to merge records from the Access table into the SQL table without affecting the existing records in SQL. Normally I just export using the ODBC driver, and that works fine if the table doesn't exist in SQL Server. Please advise. Thanks.
A simple append query from the local access table to the linked sql server table should work just fine in this case.
So, just drop in the first (from) table into the query builder. Then change the query type to append, and you are prompted for the append table name.
From that point on, just drop in the columns you want (do not drop in the PK column, as it need not be used or transferred in this case).
You can also type in the sql directly in the query builder. Either way, you will wind up with something like:
INSERT INTO dbo_custsql
( ADMINID, Amount, Notes, Status )
SELECT ADMINID, Amount, Notes, Status
FROM custsql1;
This may help: http://www.red-gate.com/products/sql-development/sql-compare/
Or you could write a simple program to read from each data set and do the comparison, adding, updating, and deleting, etc.

SQL Server: Copying table contents from one database to another

I want to update a static table on my local development database with current values from our server (accessed on a different network/domain via VPN). Using the Data Import/Export wizard would be my method of choice, however I typically run into one of two issues:
I get primary key violation errors and the whole thing quits. This is because it's trying to insert rows that I already have.
If I set the "delete from target" option in the wizard, I get foreign key violation errors because there are rows in other tables that are referencing the values.
What I want is the correct set of options that means the Import/Export wizard will update rows that exist and insert rows that do not (based on primary key or by asking me which columns to use as the key).
How can I make this work? This is on SQL Server 2005 and 2008 (I'm sure it used to work okay on the SQL Server 2000 DTS wizard, too).
I'm not sure you can do this in Management Studio. I have had some good experiences with RedGate SQL Data Compare in synchronising databases, but you do have to pay for it.
The SQL Server Database Publishing Wizard can export a set of sql insert scripts for the table that you are interested in. Just tell it to export just data and not schema. It'll also create the necessary drop statements.
One option is to download the data to a new table, then use commands similar to the following to update the target:
update t set
    col1 = d.col1,
    col2 = d.col2
from downloaded d
inner join target t on d.pk = t.pk;

insert into target (col1, col2, ...)
select d.col1, d.col2, ...
from downloaded d
where d.pk not in (select pk from target);
If you disable the FK constraints during the 2nd option, and re-enable them after it finishes, it will work.
But if you are using an identity to create the PKs that are involved in the FKs, it will cause a problem, so this works only if the PK values remain the same.
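A sketch of wrapping the copy in constraint toggles (assuming SQL Server; 'target' stands in for your table, and tables referencing it may need the same treatment):
ALTER TABLE target NOCHECK CONSTRAINT ALL;   -- disable FK/check constraints on this table

-- ... run the delete + insert/update here ...

ALTER TABLE target WITH CHECK CHECK CONSTRAINT ALL;   -- re-enable and re-validate the constraints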
