VS2010 database project deploy rebuilds every table

I have recently created a database project in VS2010 for an existing SQL Server 2008 R2 DB. I have updated 1 table out of 11 by adding 3 new columns to the end. I then updated 4 views that referred to that table.
I then ran a Build/Deploy configured to only generate a script.
I have inspected the script and for every single table in the DB, it has generated code that will create a temp version of each table, copy the data from the existing table, drop the original and rename the copy.
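For reference, the generated script follows roughly this pattern for each table (condensed, with illustrative names; the real script wraps this in error handling and re-creates constraints):

-- Build a temp copy of the table with the new definition.
CREATE TABLE dbo.tmp_ms_xx_MyTable (Id int NOT NULL /* ...full new column list... */);
-- Copy the existing data across.
INSERT INTO dbo.tmp_ms_xx_MyTable (Id)
    SELECT Id FROM dbo.MyTable;
-- Drop the original and rename the copy into place.
DROP TABLE dbo.MyTable;
EXECUTE sp_rename N'dbo.tmp_ms_xx_MyTable', N'MyTable';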
I saw the post on here where deployment insisted on rebuilding the table for dropped columns, and I tried setting IgnoreColumnOrder, but it didn't make any difference. It didn't seem relevant to my situation anyway, so I wasn't surprised.
I created my DB project by getting the DBA to give me a fully scripted version of Production, built that DB on my PC version of SQL Server and then created my initial project from that. I don't think that would make any difference and I have compared the project definition of the tables to the target Dev DB and they are the same.
I have "Always recreate database" unticked and "Block incremental deployment if data loss might occur" ticked. Don't suppose they have anything to do with my issue?
Any ideas?

I found a backup of the database and, as per Peter's suggestion, ran a Schema Compare. The difference turned out to be that the target DB had PAGE compression on most of the tables, but that was not in the project definition.
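In case it helps anyone hitting the same thing: a quick way to spot the mismatch is to list the compression setting of each table on the target (a minimal sketch; sys.partitions carries this setting from SQL Server 2008 onward):

-- Tables/partitions whose compression differs from the default NONE.
SELECT t.name AS table_name, p.index_id, p.partition_number, p.data_compression_desc
FROM sys.partitions AS p
JOIN sys.tables AS t ON t.object_id = p.object_id
WHERE p.data_compression_desc <> 'NONE'
ORDER BY t.name;

Once the mismatch is known, adding the same compression option to the table definitions in the project (or removing it on the target) stops the spurious rebuilds.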

Related

How to deploy DACPACs to transaction replicated databases

I am deploying a DACPAC via SqlPackage.exe to database servers that have a large volume of transaction replication in SQL Server. The DACPAC is built as the output of a SQL Server Database Project. When I attempt to deploy the DACPAC to a database with replication enabled, the SqlPackage execution returns errors such as: Error SQL72035: [dbo].[SomeObject] is replicated and cannot be modified.
I found the DoNotAlterReplicatedObjects parameter, which skips altering objects with replication turned on and would silence those errors, but that isn't what I want. Instead, I want to alter all objects, regardless of replication, as part of the deployment.
The only option that I can think of to deploy the DACPAC to these replicated databases is to:
1. remove the replication through a script before deploying,
2. deploy the DACPAC via SqlPackage,
3. reconstruct the replication via scripts after deploying.
Unfortunately, the database is so heavily replicated that step 3 above would take over 7 hours to complete, so this is not a practical solution.
Is there a better way to use SQL Server Database Projects and DACPACs to deploy to databases with a lot of replication?
Any assistance would be appreciated. Thank you in advance for your advice.
We solved the issue by doing the following. Hopefully this will work for others as well. The high level idea is that you need to disable "Do not ALTER replicated objects" and enable "Ignore column order".
There are a couple of ways to do this:
If you are using the SqlPackage tool in your deployment pipeline, use the DoNotAlterReplicatedObjects and IgnoreColumnOrder properties (see the SqlPackage documentation): /p:DoNotAlterReplicatedObjects=False /p:IgnoreColumnOrder=True. An example command line follows this list.
If you are using C# or PowerShell Dac classes, then use the DacDeployOptions.DoNotAlterReplicatedObjects and DacDeployOptions.IgnoreColumnOrder properties.
You can directly modify the "Advanced Publish Settings" in the Visual Studio IDE: uncheck the Do not ALTER replicated objects checkbox and check the Ignore column order checkbox. See this StackOverflow answer for an example of the Ignore column order checkbox.
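For reference, a sketch of the SqlPackage invocation for the first option (server, database, and file names are placeholders):

SqlPackage.exe /Action:Publish ^
    /SourceFile:MyDatabase.dacpac ^
    /TargetServerName:MyServer ^
    /TargetDatabaseName:MyDatabase ^
    /p:DoNotAlterReplicatedObjects=False ^
    /p:IgnoreColumnOrder=True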
Our theory on why this works: ALTER TABLE can only append a column to the end of a table, so the only way to add a column at a specific position is to drop and recreate the table. The ignore option tells the publisher to append the column to the end regardless of where the column is positioned in the project definition.
So the place this could be a problem is if you do an INSERT without specifying a column list, because you expect the columns to be in a specific order and they're not.
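A contrived sketch of that pitfall (names are made up): suppose the project defines the columns in the order (Id, NewCol, Amount), but because the publisher appended the new column, the deployed table is (Id, Amount, NewCol):

CREATE TABLE dbo.Demo (Id int IDENTITY(1,1) PRIMARY KEY, Amount money, NewCol int);

-- Written against the project's assumed order (NewCol, Amount):
INSERT INTO dbo.Demo VALUES (5, 100);
-- Actually binds 5 -> Amount and 100 -> NewCol: wrong data, and since the
-- types convert implicitly, no error is raised.

-- Naming the columns makes the statement order-proof:
INSERT INTO dbo.Demo (NewCol, Amount) VALUES (5, 100);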
Another potential side effect you could run into is that a table created from scratch by the DACPAC could have a different column order than a table altered by the DACPAC. We have been using this solution for a few months without issues, but the above are things to be aware of.
I hope that this helps.

How to solve Acumatica SQL Error After Upgrade

I'm trying to update my client's Acumatica ERP to the latest version. I cloned the current instance to test drive the update procedure and make sure everything runs smoothly. They are currently using version 2019 R2 and want to update to 2020 R2.
Using the test instance, I updated it to the latest build of 2020 R2 and everything seems to be working except for one report. When I try to generate the report, I get an error referring to an object named SOAdjust.
I imagine this has to do with a change in the database. However, I can't find a table with that name in either the new database or the current database. I'm not sure if it's a table, stored procedure, view, etc. I'm not very familiar with SQL.
I loaded the report in the Report Designer and tried looking at the schema, but couldn't find any reference to that particular table.
Any help would be greatly appreciated.
Regards.
CES
The SOAdjust table must exist in the database.
Please try again with the following steps:
1. Create a snapshot of the client system.
2. Create a new system on the same version.
3. Download and restore the snapshot created in step 1.
4. Download and install the Acumatica 2020 R2 ERP Configuration.
5. Open the Acumatica ERP Configuration.
6. Select the system.
7. Run the upgrade procedure:
7.1 Click Update Only Database.
7.2 Click Update Only Website.
In Acumatica 2019 R2, the SOAdjust table is in two different namespaces:
PX.Objects.SO.SOOrderEntry.SOAdjust
PX.Objects.SO.SOAdjust
In Acumatica 2020 R2, the SOAdjust table is in only one of them:
PX.Objects.SO.SOAdjust
I think you should update the SOAdjust reference in the report accordingly.
"view the namespaces in SQL Management Studio" - You don't. Namespaces are from .Net, and have to do with the code organization (crude description, but close enough for understanding). At a SQL level, the Acumatica structure is quite flat, just tables in the database (VERY few fancy sql tricks / sql level organization), all the "Real" logic tends to be in the business objects (Graphs, for the most part, though some interesting logic is within the DAC (data object classes))

SQL Refactor Rename Not Publishing Correctly

I'm trying to rename a few tables in one of my database projects. I right click and choose "Refactor" then choose "Rename". The Rename process appears to be working great! All references to the table are updated correctly and the refactorlog file is updated with an appropriate "Rename Refactor" operation.
However, when I generate a script to publish the changes, the script simply creates a new table rather than going through the expected process of creating a new table, copying the old table's data over, and finally dropping the original.
I've also tried just renaming a column on the table, which results in a new column being created and the old one dropped. The data should instead be carried over to the new column (via a new table, identity insert, and rename).
I've run a repair of SSDT just to be sure and had no success. Any advice is welcome!
-- Update --
I've not yet resolved this issue, but it should be noted that the original DB project was created with an earlier version of Visual Studio (regular 2010) than we are currently using (2013 Ultimate). Refactoring in the project was working in our current version of Visual Studio until recently.
After completely recreating my DB project, I had some success, but it was inconsistent. After a few publish tests, some refactors would take, while others would drop the original table and then create a new one with the new definition.
It turns out I had multiple versions of SSDT (SQL Server Data Tools) installed (2012 and 2013). I uninstalled 2012 and then ran a repair of 2013. Voila! Refactoring is now working again.

SSDT implementation: ALTER TABLE instead of CREATE

We are just trying to implement SSDT in our project.
We have lots of clients for one of our products, which is built on a single database (DBDB) with tables and stored procedures only.
We created one SSDT project for the DBDB database (using VS 2012 > SQL Server Object Explorer > right-click > New Project).
Once we build that project, it creates one .sql file.
Problem: if we run that file on a client's DBDB, it creates all the tables again and deletes all the records in them [this fulfills the requirements but deletes the existing records :-( ].
What we need: only the changes that are not yet present on the client's DBDB should be applied.
Note: we have no direct access to the client's DBDB database for comparing with our latest DBDB. We can only send them some magic script file which will update their DBDB to the latest state.
The only way to update the client's DB is to compare the DB schemas and then apply the delta. Any way you do it, you will need some way to get hold of the schema that's running at the client:
IF you ship a versioned product, it is easiest to deploy version N-1 of it to your development server and compare that to the version N you are going to ship. This way, SSDT can generate the migration script you need to ship to the client to pull their DB up to the current schema.
IF you don't have a versioned product, or your client might have altered the schema, you will need to find a way to extract the schema data on site (maybe using SSDT there) and then let SSDT create the delta.
Option: You can skip the compare feature of SSDT altogether, but then you need to write your migration script yourself. For each modification to the schema, you write the DDL statements yourself and wrap them in IF clauses that check for the old state, so the changes will only be made once and only if the old state exists. This way, it doesn't really matter from which state to which state you are going, as the script determines for each step if and what to do. A minimal sketch of such a guarded step follows.
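The sketch below assumes a hypothetical dbo.Customer table gaining an Email column:

-- Add dbo.Customer.Email only if it is not already there,
-- so the script can safely be run against any older state.
IF NOT EXISTS (
    SELECT 1 FROM sys.columns
    WHERE object_id = OBJECT_ID(N'dbo.Customer')
      AND name = N'Email'
)
BEGIN
    ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;
END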
The last is the most flexible approach, but it requires deep testing of its own, and of course it should have been started way before the situation you are in now, where you no longer know what the changes have been. But it can help next time.
This only applies to schema changes on the tables, because you can always fall back to just dropping and recreating ALL stored procedures, since nothing is lost by dropping them.
It sounds like you may not be pushing the changes correctly. You have a couple of options if you've built a SQL Project:
1. Give them the dacpac and have them use SqlPackage to update their own database.
2. Generate an update script against your customer's "current" version and give that to them (a sketch follows this list).
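A hedged sketch of both options with SqlPackage (file, server, and database names are placeholders):

REM Option 1: the client applies the dacpac directly to their database.
SqlPackage.exe /Action:Publish /SourceFile:DBDB.dacpac /TargetServerName:ClientServer /TargetDatabaseName:DBDB

REM Option 2: generate an update script against a restored copy of the
REM customer's current version, then send them the script.
SqlPackage.exe /Action:Script /SourceFile:DBDB.dacpac /TargetServerName:DevServer /TargetDatabaseName:DBDB_CustomerCopy /OutputPath:Upgrade_DBDB.sql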
In any case, it sounds like your publish option might be set to drop and recreate the database each time. I've written quite a few articles on SSDT SQL Projects and getting started that might be helpful here: http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html

How can I copy a SQL Server 2005 database from production to development?

We have a production SQL Server 2005 database server with the production version of our application's database on it. I would like to be able to copy down the data contents of the production database to a development server for testing.
Several sites (and Microsoft's forums) suggest using the Backup/Restore options to copy databases from one server to another, but this solution is unworkable for several reasons (I don't have backup authority on our production database, I don't want to overwrite permissions on the development server, I don't want to overwrite structure changes on the development server, etc...)
I've tried using the SQL Import/Export Wizard in SQL Server 2005, but it always reports primary key violations. How can I copy the contents of a database from the production server to development without using the "Backup/Restore" method?
Well, without the proper rights it really becomes more tedious and less than ideal.
One approach I would recommend, though, is to drop all of your constraints and indexes and then add them again once the data has been imported/exported.
Not an elegant solution, but it'll process really fast.
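A lighter-weight variant of the same idea is to disable, rather than drop, the foreign key and check constraints around the load. A sketch using the undocumented but long-standing sp_MSforeachtable helper:

-- Disable FK and CHECK constraints on every table before loading.
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';

-- ... run the import/export here ...

-- Re-enable and re-validate all constraints afterwards.
EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';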
EDIT:
Another option is to create an SSIS package where you specifically dump the tables in an order that won't violate the constraints.
I often use SQL Data Compare (http://www.red-gate.com/products/sql_data_compare/index.htm) for this task: the synchronization scripts it writes will remove the relationships during the transfer and reapply them, but that is OK in most development cases. It works especially well with smaller databases or subsets of databases.
If your database is large, I would recommend finding someone with the keys to the kingdom. Doing an out of sequence backup could mess with the ability to restore the database from the primary backup (if they are doing partials during the week for example) by marking records backed up when they are only in your backup, so don't try to bypass that security if you are unsure why it is there.
Assuming that you can connect to both DBs from the same machine (which you almost always can; I do it with my production servers via a VPN), run the following for each table (tablename and the column list are placeholders):

DELETE FROM devserv.dbo.tablename;
-- Only needed when the table has an identity column; note that
-- IDENTITY_INSERT requires an explicit column list on the INSERT.
SET IDENTITY_INSERT devserv.dbo.tablename ON;
INSERT INTO devserv.dbo.tablename (col1, col2, ...)
    SELECT col1, col2, ... FROM prodserv.dbo.tablename;
SET IDENTITY_INSERT devserv.dbo.tablename OFF;
It is obviously worth noting that you will need to do this in a certain order if your tables have foreign key constraints.
The import/export wizard is notorious for this sort of thing, and actually has a bug that makes it even less useful in working out the dependencies (sorry, I don't have the details to hand).
SSIS does a much better job, but you'll have to add each table copy task by hand (in fact, a data source, a copy task, and a data destination object per table). It's a little tedious to set up (more than it should be), but a lot simpler than writing your own code.
One tip: avoid generating an SSIS project with the import/export wizard, thinking it will be easier to just tweak it. It generates something that most people would find unrecognisable, even with some SSIS experience!
If you do not have backup permission on the production server, I guess this is because you are using a shared SQL Server from a web hoster. In this case, check whether your web hoster provides the tool called myLittleBackup. It allows installing a DB from one server to another in a few clicks.
I'd contact someone who does have access to back up the database. Permissions are usually there for a reason.
I might consider getting a backup, as there will be one whether you run it or not (at least in theory, a prod DB is being backed up :) ).
Then just restore it to a brand new database on your dev box so you don't conflict with anything or anyone else.
If you restore to a new DB, you could also pull the tables and data across manually if you wanted, and since you created the DB you can give yourself rights and it's all OK. There are a number of other methods, all tedious.
We just use the SQL Server Database Publishing Wizard at work.
You would use this little utility to generate a T-SQL script that describes your production database (including all its data). Then connect to your dev server and run the generated script.
If you have to avoid backup/restore, this is what I would recommend (these steps assume you don't want to keep the old schema name, just the structure):
Download OpenDBDiff. Choose Compare between the source and an (empty) destination, then open the sync script tab and copy only the CREATE TABLE rows (without dbo.sysdiagrams tables etc.). Paste them into a new query in SQL Server Management Studio and delete all the schema names appearing before the table names.
Now you have the full structure, including primary keys, identity, etc. Next, use SQL Server Import and Export Data as you did before (make sure you choose Edit Mappings and set the destination schema to dbo, etc.). Also make sure you tick Drop and re-create destination table.
On your dev machine, set up a linked server to your production machine. Then just:

INSERT INTO dev.db.dbo.table (fieldlist)
SELECT fieldlist FROM prod.db.dbo.table;
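In case the linked server isn't set up yet, a minimal sketch (the linked server name, host name, and logins are placeholders; the security mapping will depend on your environment):

-- Register the production machine as a linked server named 'prod'.
EXEC sp_addlinkedserver
    @server = N'prod',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'ProdServerName';

-- Map your local login to a remote login.
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'prod',
    @useself = N'FALSE',
    @rmtuser = N'remote_login',
    @rmtpassword = N'remote_password';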
