Where are ActiveObjects entity schema versions stored? - database

I implemented an ActiveObjectsUpgradeTask and the data migrated. Now I want to roll back this migration, but I don't know where ActiveObjects keeps the migration version numbers or its schema version table.
I spent a whole day searching the documentation and didn't find anything.
Perhaps I misunderstand how structure migration works in AO; I just assume that the versions returned by the getModelVersion() method are stored somewhere.
Thanks in advance for your replies.

How to check if update tasks ran successfully.
My search was successful: based on the article above, the model version numbers in AO are stored in a table called propertyentry.
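If you need to inspect or clean up the recorded version (for example before re-running the upgrade task), you can query that table directly. A minimal sketch, assuming the keys start with an 'AO_' prefix; the exact key format and any companion value tables vary between Atlassian products and versions, so verify against your own database first:

-- Sketch only: list AO model version entries; the 'AO_' key prefix is an assumption.
SELECT *
FROM propertyentry
WHERE property_key LIKE 'AO\_%' ESCAPE '\';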

Related

My MSSQL production database suddenly recreated all tables in a new schema with NO data :-(

In our system, we suddenly lost all data. No one could log into the system and the data appeared to be gone.
A closer look at the database showed that there are now twice as many tables as before, just with a different schema name in front of them instead of the standard dbo; the tables are now Timenord.tablename. All the data is still in the dbo tables, but the system is trying to use the new tables.
We haven't made any updates to the system for a couple of days, so why this sudden behavior?
How do I fix it? I'm using .NET Core 2.2, and I have never seen this issue before.
I don't even know how to tell the system to use the other schema.
From the comments above, a temporary fix, at least until you can find the cause of the issue, is to set the default schema:
How to set the default schema of a database in SQL Server 2005?
The default schema makes the server look in a specific schema first when you don't qualify names in your code (as in dbo.TableName): it will look for dbo.<name> and, if it finds the object there, use that one.
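For reference, a minimal sketch of resetting a user's default schema back to dbo; the user name app_user is hypothetical, so substitute the database user your application actually connects with:

-- Hypothetical user name; replace with the user your application uses.
ALTER USER [app_user] WITH DEFAULT_SCHEMA = dbo;

-- Verify the change took effect.
SELECT name, default_schema_name
FROM sys.database_principals
WHERE name = 'app_user';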

Compare SQL Server DB schema & data (at the same time) and generate scripts

I've got a reasonably large / complicated DB which I need to upgrade in the field from version 1 to version 2. There are a lot of changes in schema and, importantly, data between the two.
Yes, I know this should have been version controlled à la:
http://www.codinghorror.com/blog/2008/02/get-your-database-under-version-control.html
but it wasn't - it will be when I am done.
So, the current problem: I'm faced with the choice of either going through all the commits or trying to diff between two versions of the db. So far I've tried:
http://opendbiff.codeplex.com/
http://www.sqldelta.com/
http://www.red-gate.com/
However, none of them seem able to successfully generate schema upgrade scripts, because they don't also handle the data at the same time. This results in foreign key violations when adding new keys: the referenced table is new, and while its schema has been created, the data it contains has not. Well, it could be, but that requires me to use a different part of the tool and then mix the two scripts together.
I know this may look like a duplicate of:
What is best tool to compare two SQL Server databases (schema and data)?
which is where I found most of the existing tools I've tried, but so far I've not managed to get any of them to produce a working schema migration script (I'm really not too fussed about the data, but I do need the data required for foreign keys - which, to be honest, is the only data difference, since I've deployed both the old and the new versions).
Am I expecting too much?
Should I give up and start manually stitching together what I do have?
Or do I go through all the commits and manually create upgrade scripts?
I can't think of more powerful tools available than the ones you seem to have tried. If those fail, my homegrown versioning system probably won't help you much either.
However, you should be able to generate an update script and then manually edit it to add the data transformations to it.
And/or you could disable the foreign key constraints for the time that the update script runs.
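If you take the constraint-disabling route, a common SQL Server pattern is sketched below; sp_MSforeachtable is an undocumented but widely used procedure, so test this on a copy of the database first:

-- Disable all foreign key checks, run the combined schema/data script, then re-enable with validation.
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';
-- ... run the generated upgrade script here ...
EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';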
There is no such thing as doing schema and data "at the same time". Even if you have them in one big script, you would still be applying the schema first and then the data. If the schema script creates a new table and adds a constraint to it, there is no reason you should get a referential integrity violation, as there are no rows in those tables yet.
In any case, you should give our xSQL Schema Compare and Data Compare tools a try; you will be impressed with the performance and the level of control you get.

Is there a good way to verify if a database schema is correct after an upgrade or migration?

We have customers who are upgrading from one database version to another (Oracle 9i to Oracle 10g or 11g to be specific). In one case, a customer exported the old database and imported it into the new one, but for some reason the indexes and constraints didn't get created. They may have done this on purpose to speed up the import process, but we're still looking into the reason why.
The real question is, is there a simple way that we can verify that the structure of the database is complete after the import? Is there some sort of checksum that we can do on the structure? We realize that we could do a bunch of queries to see if all the tables, indexes, aliases, views, sequences, etc. exist, but this would probably be difficult to write and maintain.
Update
Thanks for the answers suggesting commercial and/or GUI tools to use, but we really need something free that we could package with our product. It also has to be command line or script driven so our customers can run it in any environment (unix, linux, windows).
Presuming a single schema, something like this - dump USER_OBJECTS into a table before migration.
CREATE TABLE SAVED_USER_OBJECTS AS SELECT * FROM USER_OBJECTS
Then to validate after your migration
SELECT object_type, object_name FROM SAVED_USER_OBJECTS
MINUS
SELECT object_type, object_name FROM USER_OBJECTS
One issue is that if you have intentionally dropped objects between versions, you will also need to delete them from SAVED_USER_OBJECTS. Also, this will not catch cases where an object exists but is the wrong version of that object.
If you have multiple schemas, then the same thing is required for each schema OR use ALL_OBJECTS and extract/compare for the relevant user schemas.
You could also do a hash/checksum on object_type||object_name for the whole schema (save before, compare after), but the cost of that calculation wouldn't be much different from comparing the two tables, given the indexes.
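As a rough illustration of that checksum idea (my own sketch, not part of the answer above): summing per-row hashes gives an order-independent fingerprint you can save before the migration and compare afterwards. A matching value is a strong hint rather than proof, since hash collisions are possible.

-- Order-independent fingerprint of the schema's object list.
SELECT SUM(ORA_HASH(object_type || '|' || object_name)) AS schema_checksum
FROM user_objects;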
If you are willing to spend some money, DBDiff is an efficient utility that does exactly what you need.
http://www.dkgas.com/oradbdiff.htm
In SQL Developer (the free Oracle utility) there is a Database Schema Differences feature.
It's worth trying.
Hope it helps.
SQL Developer - download
Roni.
I wouldn't write the check script; I'd write a program to generate the check script from a particular version of the database. Just go through the metadata, record what's there, and write it to a file, then compare the values in that file against the values in the customer's database. This won't work so well if you use system-generated names for your constraints, but it is probably enough to just verify that things are there.
Dropping indexes and constraints is pretty common when migrating a database, so you might not even need to check too much; if two or three things are missing, it's not unreasonable to assume they all are.
You might also want to write a script that drops all the constraints and indexes and re-creates them, and just have your customers run that as a post-migration step. Just be sure you drop everything by name, so you don't delete any custom indexes your customer might have created.
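A minimal sketch of the "record what's there" step using the Oracle data dictionary; the views and columns below are standard, but which attributes you dump should be tailored to what your product actually ships:

-- Spool these on the reference installation, then diff against the customer's output.
SELECT index_name, table_name, uniqueness
FROM user_indexes
ORDER BY index_name;

SELECT constraint_name, table_name, constraint_type
FROM user_constraints
ORDER BY constraint_name;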

How to partially migrate a database to a new system over time?

We are in the process of a multi-year project where we're building a new system and a new database to eventually replace the old system and database. The users are using the new and old systems as we're changing them.
The problem we keep running into is when an object in one system is dependent on an object in the other system. We've been using views, but have run into a limitation with one of the technologies (Entity Framework) and are considering other options.
The other option we're looking at right now is replication. My boss isn't excited about the extra maintenance it would cause. So, what other options are there for getting dependent data into the database that needs it?
Update:
The technologies we're using are SQL Server 2008 and Entity Framework. Both databases are within the same SQL Server instance, so linked servers shouldn't be necessary.
The limitation we're facing with Entity Framework is that we can't seem to create relationships between the table-based entities and the view-based entities. No relationship can exist in the database between a view and a table, as far as I know, so the edmx diagram can't infer it, and I cannot seem to create the relationship manually without getting errors: it thinks all columns in the view are keys.
If I leave it that way I get an error like this for each column in the view:
Association End key property [...] is not mapped.
If I try to change the "Entity Key" property to false on the columns that are not the key I get this error:
All the key properties of the EntitySet [...] must be mapped to all the key properties [...] of table viewName.
According to this forum post it sounds like a limitation of the Entity Framework.
Update #2
I should also mention that the main limitation of the Entity Framework is that it only supports one database at a time. So we need the old data to appear to be in the new database for Entity Framework to see it. We only need read access to the old system's data in the new system.
You can use linked server queries to leave the data where it is, but connect to it from the other db.
Depending on how up-to-date the data in each db needs to be & if one data source can remain read-only you can:
- Use the Database Copy Wizard to create an SSIS package that you can run periodically as a SQL Agent Task
- Use snapshot replication
- Create a custom BCP in/out process to get the data to the other db
- Use transactional replication, which can be near-realtime.
If data needs to be read-write in both databases then you can use:
- transactional replication with updatable subscriptions
- merge replication
As you go down the list, the amount of work involved in maintaining the solution increases. Using linked server queries will work best if it's the right fit for what you're trying to achieve.
EDIT: If they're on the same server then, as suggested by another user, you should be able to access the table with servername.databasename.schema.tablename. Looks like it's an Entity Framework issue and not a db issue.
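Concretely, on a single instance the three-part name works directly in views and queries, which is the same pattern the question's existing views rely on. A minimal sketch with hypothetical database, table, and column names:

-- Hypothetical names: NewDb (new system), OldDb (old system), dbo.Customers.
USE NewDb;
GO
CREATE VIEW dbo.Customers_Legacy
AS
SELECT CustomerId, Name, CreatedDate
FROM OldDb.dbo.Customers;
GO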
I don't know about EntityToSql but I know in LinqToSql you can connect to multiple databases/servers in one .dbml if you prefix the tables with:
ServerName.DatabaseName.SchemaName.TableName
MyServer.MyOldDatabase.dbo.Customers
I have been able to click on a table in the .dbml, copy and paste it into the .dbml of the other project, prefix the name, and set up the relationships, and it works... like I said, this was in LinqToSql; I have not tried it with EntityToSql. I would give it a shot before you go through all the work of replication and such.
If Linq-to-Entities cannot cross DBs, then replication or something that emulates it is the only thing that will work.
For performance purposes you probably want either merge replication or transactional replication with queued (not immediate) updating.
Thanks for the responses. We're going to try adding triggers to the old database tables to insert/update/delete records in the new tables of the new database. This way we can continue to use Entity Framework and also do any data transformations we need.
Once the UI functions move over to the new system for a particular feature, we'll remove the table from the old database and add a view to the old database with the same name that points to the new database table for backwards compatibility.
One thing that I realized needs to happen before we can do this is that we have to search all our code and SQL for @@IDENTITY and replace it with SCOPE_IDENTITY() so the triggers don't mess up the IDs in the old system.
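As a rough sketch of that trigger approach (database, table, and column names below are hypothetical), an AFTER INSERT trigger on the old table copies new rows into the new database. The @@IDENTITY point matters because @@IDENTITY returns the last identity value generated anywhere in the session, including inside a trigger, while SCOPE_IDENTITY() only reports values generated in the current scope, so the old code keeps seeing its own IDs.

-- Hypothetical objects: dbo.Customers in the old database, NewDb.dbo.Customers in the new one.
CREATE TRIGGER dbo.trg_Customers_SyncInsert
ON dbo.Customers
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO NewDb.dbo.Customers (LegacyCustomerId, Name, CreatedDate)
    SELECT i.CustomerId, i.Name, i.CreatedDate
    FROM inserted AS i;
END;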

Updating client SQL Server database structure from text file

We have a "master database structure", and need a routine to keep the database structure on client sites up-to-date.
A number of suggestions have been given to a related question, but I am looking for a more specific solution, along these lines:
I would like to generate a text file (XML or other readable format) which describes the entire database structure (this could go into version control). This routine will run in-house, to provide a database schema file to be distributed with the next version of our product.
Then I need a way to update the database structure on the client site so that it corresponds to the master database structure. (In other words, I don't want to have to keep track of numerous change scripts for different versions of the database structure, but a more general routine which can get the client database structure updated to the current master database structure.)
So the main feature I'm looking for could be described as "database structure to text" and "text to database structure".
There are a whole lot of diff tools that can give you the schema, stored procedure, and constraint differences between two databases. You could roll your own, but I think it would be more expensive than one of these tools if you have a complex schema; many offer a free trial so you can test.
The problem is you'd have to have the master database online to do so, and accessible from the client database installation (or install it there), which might or might not be feasible.
If you won't do that, the only other sane option I can think of is the migration idea: keep a list of SQL script + version pairs, plus the current version on each database. This could be consolidated by a different tool that generates a single script from the client's database version number and the list of changes. And if you don't have the list of changes, you can start with a diff tool run and keep track of them from there.
The comparing-text route you seem to prefer (comparing text SQL dumps of both schemas) looks very hard to get right automatically; it doesn't look like the right path to take to me.
Several popular strategies are variants of this:
Add a table to the database:
CREATE TABLE Release
(
    release_number int NOT NULL,
    applied datetime NOT NULL
);
Each release, as part of its release script, inserts a row into this table.
You can now find out with a single query which release each client is running, and run all the releases between that one and the release they want to be running.
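For example (the release number and statements below are illustrative):

-- Last step of the upgrade script for, say, release 42.
INSERT INTO Release (release_number, applied) VALUES (42, GETDATE());

-- On a client database, find the release it is currently on.
SELECT MAX(release_number) AS current_release FROM Release;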
In addition, you could check that their schema is correct for each version (correct table names, columns, etc.) by doing something like this:
SELECT so.name,
       sc.name
FROM sysobjects so,
     syscolumns sc
WHERE so.id = sc.id
  AND so.type = 'U'
ORDER BY 1, 2
then calculate a hash of the result and compare it with a pre-computed hash (generated by running the query on your reference installation) to see if the installation is now correct.
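A minimal sketch of that hashing step in T-SQL (my own illustration; CHECKSUM is a weak hash, so for stronger guarantees you might instead concatenate the rows and hash them with HASHBYTES):

-- Order-independent checksum over table/column names; compare with the value
-- computed on the reference installation.
SELECT CHECKSUM_AGG(CHECKSUM(so.name, sc.name)) AS schema_checksum
FROM sysobjects so,
     syscolumns sc
WHERE so.id = sc.id
  AND so.type = 'U';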
