How to solve Acumatica SQL Error After Upgrade

I'm trying to update my client's Acumatica ERP to the latest version. I cloned the current instance to test drive the update procedure and make sure everything runs smoothly. They are currently using version 2019 R2 and want to update to 2020 R2.
Using the test instance, I updated it to the latest build of 2020 R2, and everything seems to be working except for one report. When I try to generate the report, I get an error referring to an object named SOAdjust.
I imagine this has to do with a change in the database. However, I can't find a table with that name in either the new database or the current one. I'm not sure whether it's a table, stored procedure, view, etc. I'm not very familiar with SQL.
I loaded the report in the Report Designer and tried looking at the schema, but couldn't find any reference to that particular table.
Any help would be greatly appreciated.
Regards.
CES

The SOAdjust table must exist in the database.
Please try again with the following steps:
1. Create a snapshot of the client system.
2. Create a new instance on the same version (2019 R2).
3. Download and restore the snapshot created in step 1.
4. Download and install Acumatica ERP 2020 R2.
5. Open the Acumatica ERP Configuration Wizard.
6. Select the instance.
7. Run the upgrade procedure:
7.1 Click Update Only Database.
7.2 Click Update Only Website.
In Acumatica 2019R2, the SOAdjust table is in two different namespaces:
PX.Objects.SO.SOOrderEntry.SOAdjust
PX.Objects.SO.SOAdjust
In Acumatica 2020R2, it is in only one of them:
PX.Objects.SO.SOAdjust
I think you should update the reference to the SOAdjust table in the report.

"view the namespaces in SQL Management Studio" - You don't. Namespaces are from .Net, and have to do with the code organization (crude description, but close enough for understanding). At a SQL level, the Acumatica structure is quite flat, just tables in the database (VERY few fancy sql tricks / sql level organization), all the "Real" logic tends to be in the business objects (Graphs, for the most part, though some interesting logic is within the DAC (data object classes))


How to deploy DACPACs to transaction replicated databases

I am deploying a DACPAC via SqlPackage.exe to database servers that have a large volume of transaction replication in SQL Server. The DACPAC is built as the output of a SQL Server Database Project. When I attempt to deploy the DACPAC to a database with replication enabled, the SqlPackage execution returns errors such as: Error SQL72035: [dbo].[SomeObject] is replicated and cannot be modified.
I found the parameter DoNotAlterReplicatedObjects, which skips objects with replication turned on and would silence those errors, but that isn't what I want. Instead, I want to alter all objects, regardless of replication, as part of the deployment.
The only option that I can think of to deploy the DACPAC to these replicated databases is to:
remove the replication through a script before deploying,
deploy the DACPAC via SqlPackage,
reconstruct the replication via scripts after deploying.
Unfortunately, the database is so heavily replicated that step 3 above would take over 7 hours to complete, so this is not a practical solution.
Is there a better way to use SQL Server Database Projects and DACPACs to deploy to databases with a lot of replication?
Any assistance would be appreciated. Thank you in advance for your advice.
We solved the issue by doing the following; hopefully this will work for others as well. The high-level idea is that you need to disable "Do not ALTER replicated objects" and enable "Ignore column order".
There are a couple of ways to do this.
If you are using the SqlPackage tool in your deployment pipeline, use the DoNotAlterReplicatedObjects and IgnoreColumnOrder properties: /p:DoNotAlterReplicatedObjects=False /p:IgnoreColumnOrder=True (see the example after this list).
If you are using C# or PowerShell Dac classes, then use the DacDeployOptions.DoNotAlterReplicatedObjects and DacDeployOptions.IgnoreColumnOrder properties.
You can directly modify the Advanced Publish Settings in the Visual Studio IDE: uncheck the Do not ALTER replicated objects checkbox and check the Ignore column order checkbox. See this StackOverflow answer for an example of the Ignore checkbox.
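For example, a publish command using those properties might look like the following (the server, database, and dacpac names here are placeholders):

SqlPackage.exe /Action:Publish /SourceFile:MyDatabase.dacpac /TargetServerName:MyServer /TargetDatabaseName:MyDatabase /p:DoNotAlterReplicatedObjects=False /p:IgnoreColumnOrder=True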
Our theory on why this works: ALTER TABLE can only append a column to the end of a table, so the only way to add a column at a specific position is to drop and recreate the table. Ignoring column order tells the publisher to append the column to the end regardless of where the column is positioned in the script.
The place this could be a problem is if you do an INSERT without specifying a column list, because you expect the columns to be in a specific order and they're not.
Another potential side effect is that a table created from scratch by the DACPAC could have a different column order than a table altered by the DACPAC. We have been using this solution for a few months without issues, but the above are things to be aware of.
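To make the caveat concrete, here is a small T-SQL sketch using a hypothetical dbo.SomeObject table:

-- An upgraded database gets the new column appended by ALTER TABLE:
-- (Id, CreatedOn, NewCol)
CREATE TABLE dbo.SomeObject (Id int, CreatedOn datetime2, NewCol int);
-- A fresh deployment of the same DACPAC creates the scripted order instead:
-- (Id, NewCol, CreatedOn)

-- An INSERT without a column list binds values by position, so it can fail
-- or silently put values in the wrong columns depending on the layout:
INSERT INTO dbo.SomeObject VALUES (1, SYSDATETIME(), 42);

-- Naming the columns explicitly is safe against either layout:
INSERT INTO dbo.SomeObject (Id, CreatedOn, NewCol) VALUES (1, SYSDATETIME(), 42);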
I hope that this helps.

SSDT (SQL Server Data Tools): customer-specific requirements

We are using SQL Server Data Tools (SSDT) to manage our customer databases.
In theory all databases are identical, but in practice we have a few stored procedures (and one trigger) that would change from one customer to another.
We created a main SSDT project for everything common, and then one SSDT project per customer containing only the specific stored procedures (no tables).
In the customer-specific projects we get warnings because SSDT can't find the tables referred to in the stored procedures, but we can live with that (obviously SSDT won't be able to validate the tables' fields, since it can't find the tables). For the trigger, we get an error (table can't be found), so the database project doesn't compile.
How should we manage that? I guess we are not alone in this situation.
Is there a way for a database project to reference objects (tables) from another database project?
Thanks,
Yves Forget
Daniel N gave the right direction; I'll just explain. Let's say you have a database project named DatabaseA which contains only the objects that are 100% identical for every customer. You then create another database project, DatabaseB, and include a reference to DatabaseA as "same instance, same database". In DatabaseB you can add the customer-specific objects. You can then create databases for other customers in a similar way.
In SSDT you can add another database project or dacpac as a reference.
In the properties for the referenced project, you can set where the referenced database resides: same server, same database; same server, different database; and so on.
https://msdn.microsoft.com/en-us/library/jj684584%28v=vs.103%29.aspx?f=255&MSPPError=-2147217396
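For illustration, assuming a shared table dbo.Orders defined in the DatabaseA project (the names here are made up), a customer-specific procedure in DatabaseB can then reference it without a database prefix and build cleanly:

-- Lives in the customer-specific DatabaseB project; dbo.Orders is resolved
-- through the "same server, same database" reference to DatabaseA, so the
-- unresolved-reference warnings and the trigger error go away.
CREATE PROCEDURE dbo.usp_CustomerSpecificReport
AS
BEGIN
    SELECT OrderID, OrderTotal
    FROM dbo.Orders;
END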

Rename live production Database

I have a database in SQL Server called 'XYZ'. Now I want to rename it to 'ABC'.
The problem is that my SSRS reports and SSIS packages are connected to XYZ.
Everything I have built (SSRS reports and SSIS packages) is now live, and users are using these reports 24/7.
Is there any way to rename the database with minimal or no server/database downtime?
Thanks
Here's a Rube Goldberg approach:
Create a new, empty database that has the name you're ultimately intending on renaming your current database to (in your example "ABC")
Create a synonym in your new database for every object referenced by your SSIS packages and SSRS reports that uses a three-part name as the target. For example: create synonym [ABC].[dbo].[myTable] for [XYZ].[dbo].[myTable]
Update your packages and reports to point to the new database.
Under cover of darkness, rename ABC to ABC_drop and XYZ to ABC.
Drop ABC_drop.
It doesn't eliminate downtime, but does give you time to update all of the report and ETL package references. The rollback is also simple before step 5.
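A rough T-SQL sketch of steps 2 and 4, assuming the XYZ/ABC names from the example and a maintenance window with exclusive access:

-- Step 2: generate a CREATE SYNONYM statement for every table in XYZ
-- (run the generated output inside the new ABC database; extend the same
-- idea to views and stored procedures as needed):
SELECT 'CREATE SYNONYM [dbo].[' + name + '] FOR [XYZ].[dbo].[' + name + '];'
FROM XYZ.sys.tables;

-- Step 4: the cutover itself, done while no one is connected:
USE master;
ALTER DATABASE ABC SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE ABC MODIFY NAME = ABC_drop;
ALTER DATABASE XYZ SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE XYZ MODIFY NAME = ABC;
ALTER DATABASE ABC SET MULTI_USER;
-- Step 5, once everything checks out:
-- DROP DATABASE ABC_drop;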

SSDT implementation: Alter table instead of Create

We are just trying to implement SSDT in our project.
We have lots of clients for one of our products, which is built on a single database (DBDB) with tables and stored procedures only.
We created one SSDT project for database DBDB (using VS 2012 > SQL Server Object Explorer > right-click on the database > Create New Project).
Once we build that project, it creates one .sql file.
Problem: if we run that file on a client's DBDB, it creates all the tables again and deletes all the records in them [this fulfills the schema requirements but deletes the existing records :-( ].
What we need: only the changes that are not yet present on the client's DBDB should be applied.
Note: we have no direct access to the client's DBDB database for comparing with our latest DBDB. We can only send them some magic script file which will update their DBDB to the latest state.
The only way to update the client's DB is to compare the DB schemas and then apply the delta. Whichever way you do it, you will need some way to get hold of the schema that's running at the client:
If you ship a versioned product, it is easiest to deploy version N-1 to your development server and compare it to the version N you are going to ship. This way, SSDT can generate the migration script you need to send to the client to bring their DB up to the current schema.
If you don't have a versioned product, or your client might have altered the schema, you will need to find a way to extract the schema on site (maybe using SSDT there) and then let SSDT create the delta.
Option: you can skip the compare feature of SSDT altogether, but then you need to write your migration script yourself. For each modification to the schema, write the DDL statements yourself and wrap them in IF clauses that check for the old state, so the changes are only made once and only if the old state exists. This way, it doesn't really matter which state you are migrating from or to, as the script determines at each step whether and what to do (a sketch follows below).
The last option is the most flexible, but it requires thorough testing in its own right and, of course, should have been started well before the situation you are in now, where you no longer know what the changes were. But it can help for next time.
This only applies to schema changes on the tables, because you can always fall back to simply dropping and recreating ALL stored procedures, since nothing is lost by dropping them.
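A minimal sketch of such a hand-written, re-runnable migration step (the table, column, and procedure names are invented for illustration):

-- Add a column only if it is not already there:
IF COL_LENGTH('dbo.Customer', 'LoyaltyCode') IS NULL
BEGIN
    ALTER TABLE dbo.Customer ADD LoyaltyCode nvarchar(20) NULL;
END
GO

-- Stored procedures can always be dropped and recreated:
IF OBJECT_ID('dbo.usp_GetCustomer', 'P') IS NOT NULL
    DROP PROCEDURE dbo.usp_GetCustomer;
GO
CREATE PROCEDURE dbo.usp_GetCustomer @CustomerID int
AS
BEGIN
    SELECT CustomerID, LoyaltyCode
    FROM dbo.Customer
    WHERE CustomerID = @CustomerID;
END
GO

Because every step checks the current state before acting, the script can be run against any client database regardless of which changes it already has.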
It sounds like you may not be pushing the changes correctly. You have a couple of options if you've built a SQL Project.
Give them the dacpac and have them use SQLPackage to update their own database.
Generate an update script against your customer's "current" version and give that to them (see the sketch below).
In any case, it sounds like your publish option might be set to drop and recreate the database each time. I've written quite a few articles on SSDT SQL Projects and getting started that might be helpful here: http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html
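As a sketch of option 2, the update script can be generated with SqlPackage against a restored copy of the customer's current database (the file and server names here are placeholders), then reviewed and sent to the client:

SqlPackage.exe /Action:Script /SourceFile:DBDB.dacpac /TargetServerName:DevServer /TargetDatabaseName:DBDB_CustomerCopy /OutputPath:Upgrade.sql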

VS2010 database project deploy rebuilds every table

I have recently created a database project in VS2010 for an existing SQL Server 2008 R2 DB. I have updated 1 table out of 11 by adding 3 new columns to the end. I then updated 4 views that referred to that table.
I then tried a Build/Deploy with it only generating a script.
I have inspected the script and for every single table in the DB, it has generated code that will create a temp version of each table, copy the data from the existing table, drop the original and rename the copy.
I saw the post on here where deployment insisted on rebuilding the table for dropped columns, and I tried setting IgnoreColumnOrder, but it didn't make any difference. It didn't seem relevant to my situation anyway, so I wasn't surprised.
I created my DB project by getting the DBA to give me a fully scripted version of Production, building that DB on my PC's SQL Server, and then creating my initial project from it. I don't think that should make any difference, and I have compared the project definition of the tables to the target Dev DB and they are the same.
I have "Always recreate database" unticked and "Block incremental deployment if data loss might occur" ticked. I don't suppose they have anything to do with my issue?
Any ideas?
I found a backup of the database and, as per Peter's suggestion, ran a Schema Compare. The difference turned out to be that the target DB had PAGE compression on most of the tables, but that was not in the project definition.
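For anyone chasing a similar phantom difference, a catalog query along these lines (standard system views) shows each table's compression setting:

-- Compression setting per table (heap or clustered index only)
SELECT t.name AS table_name, p.data_compression_desc
FROM sys.partitions AS p
JOIN sys.tables AS t ON p.object_id = t.object_id
WHERE p.index_id IN (0, 1);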
