Database applies ALL previous migrations on update and NOT just the new one

I'm developing a website which currently has both a production and a test database.
The production database is hosted externally while the test database is hosted locally.
Whenever I make changes to my database I apply the changes through a migration.
After adding a new migration, I run the update-database command against both my production and test databases to keep them in sync.
I applied the migration just fine to my production database; however, when I want to apply the migration to my test database, it attempts to apply ALL the previous migrations (and not just the new one).
Here is the output:
Applying explicit migrations: [201603230047093_Initial,
201603232305269_AddedBlobNameToImage,
201603242121190_RemovedSourceFromRealestateDbTable,
201603311617077_AddedSourceUrlId,
201604012033331_AddedIndexProfileAndFacebookNotifications,
201604012233271_RemovedTenantIndexProfile,
201604042359214_AddRealestateFilter]. Applying explicit migration:
201603230047093_Initial. System.Data.SqlClient.SqlException
(0x80131904): There is already an object named 'Cities' in the
database.
Obviously it fails, since the current state of the database is at the second-latest migration. However, I wonder why it attempts to apply ALL the previous migrations.
Unlike the production database (which has had all the migrations applied one at a time), the test database was deleted and recreated at the previous migration, so its migration history table only contains one row:
201604012239054_InitialCreate
(I assume InitialCreate is an auto-generated name for all the previous migrations combined.)
In summary:
Why is the test database trying to apply ALL the previous migrations instead of just the newly added one?
EDIT:
When running COMMAND I get the following output script:
DECLARE @CurrentMigration [nvarchar](max)
IF object_id('[dbo].[__MigrationHistory]') IS NOT NULL
    SELECT @CurrentMigration =
        (SELECT TOP (1)
            [Project1].[MigrationId] AS [MigrationId]
            FROM ( SELECT
                [Extent1].[MigrationId] AS [MigrationId]
                FROM [dbo].[__MigrationHistory] AS [Extent1]
                WHERE [Extent1].[ContextKey] = N'Boligside.Migrations.Configuration'
            ) AS [Project1]
            ORDER BY [Project1].[MigrationId] DESC)
IF @CurrentMigration IS NULL
    SET @CurrentMigration = '0'
IF @CurrentMigration < '201603230047093_Initial'
(it proceeds to generate IF statements for each previous migration)
The current migration history table in my database looks like the following (note that the first row is for a logging framework, so it's not related):
[screenshot of the __MigrationHistory table]

One issue that can cause migrations to rerun is your context key changing, which can happen during refactoring. There are a couple of ways to solve this:
1) Update the old records in __MigrationHistory with the new values:
UPDATE [dbo].[__MigrationHistory]
SET [ContextKey] = 'New_Namespace.Migrations.Configuration'
WHERE [ContextKey] = 'Old_Namespace.Migrations.Configuration'
2) You can hard code the old context key into the constructor of your migration Configuration class:
public Configuration()
{
    AutomaticMigrationsEnabled = false;
    this.ContextKey = "Old_Namespace.Migrations.Configuration";
}
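Either way, you can check what EF considers applied versus pending before touching anything. A small diagnostic sketch, assuming EF6's DbMigrator API (Configuration is your migrations Configuration class):
using System;
using System.Data.Entity.Migrations;

// Compares __MigrationHistory (filtered by ContextKey) against the
// migrations compiled into the assembly.
var migrator = new DbMigrator(new Configuration());
Console.WriteLine("Applied: " + string.Join(", ", migrator.GetDatabaseMigrations()));
Console.WriteLine("Pending: " + string.Join(", ", migrator.GetPendingMigrations()));
If the context key changed, "Applied" comes back empty even though __MigrationHistory has rows, which is exactly the situation the generated script above is probing with its ContextKey filter.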
Here is a good article on how migrations run under the hood: https://msdn.microsoft.com/en-US/data/dn481501?f=255&MSPPError=-2147217396
See also http://jameschambers.com/2014/02/changing-the-namespace-with-entity-framework-6-0-code-first-databases/

IdentityServer4: what is breaking changes from version 2 and/or 3 to version 4?

Where can I find the list of breaking changes from version 2 and/or 3 to version 4 of IdentityServer4?
I am trying to upgrade a project from IdentityServer4 version 2 to version 4.
I did a migration from version 2.2 to 4.1.2 two years ago, and I remember finding some breaking changes. Others were imposed by the framework upgrade I did at the same time (.NET Core 2.1 to 3.1). These are some of the changes related only to IdentityServer4:
Database schema. If you have data in production you'll need to preserve that data in the migration. Automatic migrations will delete and recreate tables mercilessly. There are table and column renames, new columns, deleted columns, changes in the indexes...
Client CORS origins validation. There is a new validation that forces every URL configured in ClientCorsOrigins to comply with an origin format. If even one of them does not comply with the format, an exception is thrown. You should review your production values to avoid failures.
// format:
<protocol>://<domain>:<port>
// good examples:
http://localhost:5000
http://example.com
https://anotherdomain.com
http://example.com:1234
// bad examples
http://example.com/
https://example.com/mypath
example.com:1234
Some code changes:
AuthorizationRequest.ClientId -> AuthorizationRequest.Client.ClientId.
ResourceValidationResult groups the ApiResources and IdentityResources properties into a common property called Resources.
ValidatedTokenRequest renames its Scopes property to RequestedScopes.
GetAllUserConsentsAsync -> GetAllUserGrantsAsync.
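To make those concrete, a small before/after sketch (the request, tokenRequest, and _interaction variables are illustrative stand-ins for wherever these types appear in your code; _interaction is an IIdentityServerInteractionService):
// Before (IdentityServer4 v2/v3):
var clientId = request.ClientId;                        // AuthorizationRequest
var scopes   = tokenRequest.Scopes;                     // ValidatedTokenRequest
var grants   = await _interaction.GetAllUserConsentsAsync();

// After (v4):
var clientId = request.Client.ClientId;
var scopes   = tokenRequest.RequestedScopes;
var grants   = await _interaction.GetAllUserGrantsAsync();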
In your UI some view models will need to be updated to the new schema. If you started with the QuickStart.UI you can compare it with the new version to add the new features.
If you have an admin UI you'll have to adapt it to the new schema as well.
Migrations
I created the migrations automatically and then edited the migration to reorder the operations and to add manual scripts that preserve the data (for example, creating a table before deleting the old one and moving the data across).
These are the scripts I had to insert manually for the Up migration.
Reorder the code so that the ApiResourceScopes table is created before the ApiResourceId column is dropped from the ApiScopes table, then run:
Insert Into [ApiResourceScopes] ([ApiResourceId], [Scope]) Select [ApiResourceId], [Name] From [ApiScopes]
As ApiScopes has a new field called Enabled that defaults to 0, you'll want to enable all existing scopes. Run this script just after creating the Enabled column:
Update [ApiScopes] set [Enabled] = 1
ApiSecrets must be moved to the new table ApiResourceSecrets, so you should run this script before dropping ApiSecrets:
Insert Into [ApiResourceSecrets] ([Description], [Value], [Expiration], [Type], [ApiResourceId], [Created]) Select [Description], [Value], [Expiration], [Type], [ApiResourceId], GetDate() From [ApiSecrets]
The table IdentityClaims is renamed to IdentityResourceClaims, so you'll need to run this script after creating IdentityResourceClaims and before dropping IdentityClaims:
Insert Into [IdentityResourceClaims] ([Type], [IdentityResourceId]) Select [Type], [IdentityResourceId] From [IdentityClaims]
For the Down migration you need to do exactly the reverse:
Restore ApiScopes. Move the data back from ApiResourceScopes.ApiResourceId, joining ApiResourceScopes.Scope to ApiScopes.Name:
Update [ApiScopes] Set [ApiScopes].[ApiResourceId] = apir.[ApiResourceId] from [ApiScopes] apis Inner Join [ApiResourceScopes] apir On apis.[Name] = apir.[Scope]
Restore ApiSecrets. Move the data after creating the ApiSecrets table and before dropping ApiResourceSecrets:
Insert Into [ApiSecrets] ([Description], [Value], [Expiration], [Type], [ApiResourceId]) Select [Description], [Value], [Expiration], [Type], [ApiResourceId] From [ApiResourceSecrets]
Restore IdentityClaims. Move the data after creating IdentityClaims and before dropping IdentityResourceClaims:
Insert Into [IdentityClaims] ([Type], [IdentityResourceId]) Select [Type], [IdentityResourceId] From [IdentityResourceClaims]

How to reset SQLite autoincrement while using Flyway in Unit test

Consider the code below in a unit test, where I add a new Tag object to a pre-populated SQLite database.
@Test // Line 1
public void add() {
    Tag tagToAdd = new Tag("Tall");
    Tag addedTag = this.tagDao.add(tagToAdd);
    assertNotNull(addedTag);
    assertEquals(3L, addedTag.getId()); // Line 6
    assertEquals(tagToAdd.getTag(), addedTag.getTag());
    List<Tag> tags = this.tagDao.get();
    assertEquals(3, tags.size());
}
On line 6, I expect the ID of the Tag to be 3, because the field is an AUTOINCREMENT and the test is initialized with a database already containing 2 Tags. This works fine every time I run the test and the ID is always 3.
Now I am integrating Flyway into the project. Every time I run the test, the AUTOINCREMENT starts from the value of the last run, so the Tag ID increases by 1 every run and the test fails.
Any idea how I can get Flyway to always reset the database to a brand new state, and reset the AUTOINCREMENT value? I could write a query to do it manually, but that is not maintainable.
What I have tried so far:
Integrating @FlywayTest, as this executes Flyway's clean task
Defining a FlywayMigrationStrategy bean, which calls flyway.clean()
Setting spring.flyway.clean-on-validation-error to true in my application.properties (that said, there was no change in my SQL, so I am not sure this changed anything)
EDIT:
My first migration script contains the following:
DROP TABLE IF EXISTS Tag;
CREATE TABLE Tag(
id INTEGER PRIMARY KEY AUTOINCREMENT,
tag VARCHAR(255) NOT NULL UNIQUE,
createdDate TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
modifiedDate TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
If I understood everything correctly, you have a database with a table that is created once and reused by the tests every run: you just delete rows from the table (without dropping it) when the tests complete (or before the next tests start), and Flyway inserts two tags into this table every time you run the tests.
If that's right, you can just reset the sequence in SQLite to set it back to 1, so the next inserted row gets this id. You can do that by running the following query:
UPDATE `sqlite_sequence` SET `seq` = 1 WHERE `name` = 'tags_table_name';
Alternatively, you can set seq to 0; this value is invalid, so SQLite will use the next available correct value (if there are no rows in the table it will be 1; if there are some rows it will be the first available number).
Yet another possibility is simply to drop your table after the tests and recreate it before running the next ones; as it is a database and table used only for tests, this should work correctly. That way the sequence counter is set back to 1 each time. I would actually go this way unless you have a really good reason not to drop the table.

Audit trail with Entity Framework Core

I have an ASP.NET Core 2.0 app using Entity Framework Core on a SQL Server db.
I have to trace and audit everything the users do with the data. My goal is to have an automatic mechanism that records everything that happens.
For example, if I have the table Animals, I want a parallel table "Audit_Animals" where you can find all the info about the data, the operation type (add, delete, edit) and the user who made the change.
I already built this a while ago in Django + MySQL, but now the environment is different. I found this and it seems interesting, but I'd like to know if there are better ways, and which is the best approach to do this in EF Core.
UPDATE
I'm trying this and something happens, but I have some problems.
I added this:
services.AddMvc().AddJsonOptions(options => {
    options.SerializerSettings.ReferenceLoopHandling = ReferenceLoopHandling.Ignore;
});
public Mydb_Context(DbContextOptions<Mydb_Context> options) : base(options)
{
    Audit.EntityFramework.Configuration.Setup()
        .ForContext<Mydb_Context>(config => config
            .IncludeEntityObjects()
            .AuditEventType("Mydb_Context:Mydb"))
        .UseOptOut();
}
public MyRepository(Mydb_Context context)
{
    _context = context;
    _context.AddAuditCustomField("UserName", "pippo");
}
I also created a table to receive the audits (only one, to test this tool), but the only thing I got is a list of .json files containing the data I created... why?
Read the documentation:
Event Output
To configure the output persistence mechanism please see Configuration and Data Providers sections.
Then, in the documentation on Configuration:
If you don't specify a Data Provider, a default FileDataProvider will be used to write the events as .json files into the current working directory. (emphasis mine)
Long and short, follow the documentation to configure the data provider you'd like to use.
If you are going to map the audit table (Audit_Animals) to the same EF context as the audited Animals table, you can use the EntityFramework Data Provider included on the same Audit.EntityFramework library.
Check the documentation here:
Entity Framework Data Provider
If you plan to store the audit logs in the same database as the audited entities, you can use the EntityFrameworkDataProvider. Use this if you plan to store the audit trails for each entity type in a table with similar structure.
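A minimal configuration sketch, assuming Audit.NET's fluent EntityFramework provider API; the "Audit_" name prefix and the AuditDate/AuditAction/AuditUser columns are assumptions about your audit tables, not library requirements:
using System;
using Audit.Core;

// Map each audited entity (e.g. Animals) to its audit sibling (e.g. Audit_Animals)
// and copy the change metadata onto the audit row.
Audit.Core.Configuration.Setup()
    .UseEntityFramework(ef => ef
        .AuditTypeNameMapper(typeName => "Audit_" + typeName)
        .AuditEntityAction<dynamic>((evt, entry, auditEntity) =>
        {
            auditEntity.AuditDate = DateTime.UtcNow;              // assumed column
            auditEntity.AuditAction = entry.Action;               // Insert / Update / Delete
            auditEntity.AuditUser = evt.CustomFields["UserName"]; // set via AddAuditCustomField
        }));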
There is another library that can audit EF contexts in a similar way, take a look: zzzprojects/EntityFramework-Plus.
Cannot recommend one over the other since they provide different features (and I'm the owner of the audit.net library).
Update:
.NET 6 and Entity Framework Core 6.0 support SQL Server temporal tables out of the box.
See this answer for examples:
https://stackoverflow.com/a/70017768/3850405
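With EF Core 6 the mapping itself is a one-liner. A sketch, assuming the Question entity used further down in this answer:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // EF Core 6+ (SQL Server provider): opt the table into system versioning.
    // The generated migration then creates the period columns and history table.
    modelBuilder.Entity<Question>()
        .ToTable("Questions", tb => tb.IsTemporal());
}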
Original:
You could have a look at temporal tables (system-versioned temporal tables) if you are using SQL Server 2016 or later, or Azure SQL.
https://learn.microsoft.com/en-us/sql/relational-databases/tables/temporal-tables?view=sql-server-ver15
From the documentation:
A database feature that brings built-in support for providing information about data stored in the table at any point in time, rather than only the data that is correct at the current moment in time. Temporal is a database feature that was introduced in ANSI SQL 2011.
There is currently an open issue to support this out of the box:
https://github.com/dotnet/efcore/issues/4693
There are third-party options available today, but since they are not from Microsoft there is of course a risk that they won't be supported in future versions.
https://github.com/Adam-Langley/efcore-temporal-query
https://github.com/findulov/EntityFrameworkCore.TemporalTables
I solved it like this:
If you use the Visual Studio 2019 LocalDB that is included by default (Microsoft SQL Server 2016 (13.1.4001.0) LocalDB), you will need to upgrade if you use cascading DELETE or UPDATE, because temporal tables with cascading actions are not supported in that version.
Complete guide for upgrading here:
https://stackoverflow.com/a/64210519/3850405
Start by adding a new empty migration. I prefer to use Package Manager Console (PMC):
Add-Migration "Temporal tables"
Should look like this:
public partial class Temporaltables : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
    }
}
Then edit the migration like this:
public partial class Temporaltables : Migration
{
    List<string> tablesToUpdate = new List<string>
    {
        "Images",
        "Languages",
        "Questions",
        "Texts",
        "Medias",
    };

    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.Sql("CREATE SCHEMA History");
        foreach (var table in tablesToUpdate)
        {
            string alterStatement = $@"ALTER TABLE [{table}] ADD SysStartTime datetime2(0) GENERATED ALWAYS AS ROW START HIDDEN
                CONSTRAINT DF_{table}_SysStart DEFAULT GETDATE(), SysEndTime datetime2(0) GENERATED ALWAYS AS ROW END HIDDEN
                CONSTRAINT DF_{table}_SysEnd DEFAULT CONVERT(datetime2 (0), '9999-12-31 23:59:59'),
                PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)";
            migrationBuilder.Sql(alterStatement);
            alterStatement = $@"ALTER TABLE [{table}] SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = History.[{table}]));";
            migrationBuilder.Sql(alterStatement);
        }
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        foreach (var table in tablesToUpdate)
        {
            string alterStatement = $@"ALTER TABLE [{table}] SET (SYSTEM_VERSIONING = OFF);";
            migrationBuilder.Sql(alterStatement);
            alterStatement = $@"ALTER TABLE [{table}] DROP PERIOD FOR SYSTEM_TIME";
            migrationBuilder.Sql(alterStatement);
            alterStatement = $@"ALTER TABLE [{table}] DROP DF_{table}_SysStart, DF_{table}_SysEnd";
            migrationBuilder.Sql(alterStatement);
            alterStatement = $@"ALTER TABLE [{table}] DROP COLUMN SysStartTime, COLUMN SysEndTime";
            migrationBuilder.Sql(alterStatement);
            alterStatement = $@"DROP TABLE History.[{table}]";
            migrationBuilder.Sql(alterStatement);
        }
        migrationBuilder.Sql("DROP SCHEMA History");
    }
}
tablesToUpdate should contain every table you need history for.
Then run the Update-Database command.
Original source, modified a bit to escape the table names with square brackets, etc.:
https://intellitect.com/updating-sql-database-use-temporal-tables-entity-framework-migration/
Testing Create, Update and Delete will then show a complete history.
[HttpGet]
public async Task<ActionResult<string>> Test()
{
    var identifier1 = "OATestar123";
    var identifier2 = "OATestar12345";
    var newQuestion = new Question()
    {
        Identifier = identifier1
    };
    _dbContext.Questions.Add(newQuestion);
    await _dbContext.SaveChangesAsync();
    var question = await _dbContext.Questions.FirstOrDefaultAsync(x => x.Identifier == identifier1);
    question.Identifier = identifier2;
    await _dbContext.SaveChangesAsync();
    question = await _dbContext.Questions.FirstOrDefaultAsync(x => x.Identifier == identifier2);
    _dbContext.Entry(question).State = EntityState.Deleted;
    await _dbContext.SaveChangesAsync();
    return Ok();
}
Tested it a few times; the log will look like this:
[screenshot of the history table rows]
In my opinion this solution has a huge advantage: it is not Object Relational Mapper (ORM) specific, and you even get history if you write plain SQL.
The history tables are also read-only by default, so there is less chance of a corrupt audit trail. The error received if you try: Cannot update rows in a temporal history table ''
If you need access to the data you can use your preferred ORM to fetch it, or audit via SQL.
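With EF Core 6+ you can also read that history straight from LINQ. A sketch, assuming Question is mapped as temporal through EF (PeriodStart is the default name of the period-start shadow property):
// Every historical version of the matching rows, oldest first, deletions included.
var versions = await _dbContext.Questions
    .TemporalAll()
    .Where(q => q.Identifier.StartsWith("OATestar"))
    .OrderBy(q => EF.Property<DateTime>(q, "PeriodStart"))
    .ToListAsync();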

Update uses previously autogenerated ID on Oracle

We're having a strange problem in Oracle. I'll sketch some (simplified) context first:
Consider this mapping to an Entity:
public EntityMap()
{
    Table("EntityTable");
    Id(x => x.Id)
        .Column("entityID")
        .GeneratedBy.Native("ENTITYID").UnsavedValue(0);
    Map(x => x.SomeBoolean).Column("SomeBoolean");
}
and this code:
var entity = new Entity();
using (var transaction = new TransactionScope(TransactionScopeOption.Required))
{
    Session.Save(entity);
    transaction.Complete();
}
// A lot of code
if (someCondition)
{
    using (var transaction = new TransactionScope(TransactionScopeOption.Required))
    {
        entity.SomeBoolean = true;
        Session.Update(entity);
        transaction.Complete();
    }
}
This code is called a few times. The first time it generates the following queries:
select ENTITYID.nextval from dual
INSERT INTO Entity
(SomeBoolean, EntityID)
VALUES (0, 1216)
UPDATE Entity
SET SomeBoolean = 1
WHERE EntityID = 1216
The second time it is called, these queries are generated (someCondition is false):
select ENTITYID.nextval from dual
INSERT INTO Entity
(SomeBoolean, EntityID)
VALUES (0, 1217)
And now the trouble begins. From now on, each insert will use the correct auto-incremented value, but the update will always use 1217:
select ENTITYID.nextval from dual
INSERT INTO Entity
(SomeBoolean, EntityID)
VALUES (0, 1218)
UPDATE Entity
SET SomeBoolean = 1
WHERE EntityID = 1217
And of course, this is not what we want to happen. If I inspect the value of the Id while debugging, it contains the correct auto-incremented value. Somehow, deep in the bowels of NHibernate, the incorrect id is assigned in the WHERE clause...
The strange part is that this only happens on Oracle. If I switch NHibernate to MsSql, everything works like a charm.
So I found out what happened. NHibernate changed its default connection release mode between versions 1.x and 2.x. Instead of closing the connection when the session is disposed, the connection is now closed after each transaction. However, we were manually coordinating our transactions, which apparently caused trouble in Oracle.
This question has some extra information, and this entry in the NHibernate documentation also clarifies how the connections are handled:
As of NHibernate, if your application manages transactions through .NET APIs such as the System.Transactions library, ConnectionReleaseMode.AfterTransaction may cause NHibernate to open and close several connections during one transaction, leading to unnecessary overhead and transaction promotion from local to distributed. Specifying ConnectionReleaseMode.OnClose will revert to the legacy behavior and prevent this problem from occurring.
This blog post is what got me looking in the right direction.
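For reference, a minimal sketch of pinning the release mode back to the legacy behavior through NHibernate's string-keyed configuration (the same property can be set in hibernate.cfg.xml):
var cfg = new NHibernate.Cfg.Configuration();
// "on_close" holds one connection for the session's lifetime (the 1.x behavior);
// the default resolves to after_transaction, which clashed with our manually
// coordinated transactions on Oracle.
cfg.SetProperty("connection.release_mode", "on_close");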

SQL Server CE database and ADO.Net Entity Framework - data not saving

I have a SQL Server CE database file and I have ADO.NET Entity Framework object named History. I perform the following to get the latest ID:
var historyid = (from id in db.Histories // Get the current highest history Id
                 select (Int32?)(id.Id)).Max();
HistoryID = Convert.ToInt32(historyid);
if (HistoryID == 0) // No current record exists
{
    HistoryID = 1; // Set the starter history Id
}
(Not the best way, but it is just to test).
This works well and returns the correct value (if I manually enter an ID of 1 it returns 1, same with 2, etc.).
The issue is with the following code:
History history = new History();
history.Id = 1;
history.SoftwareInstalled = "test";
db.AddToHistories(history);
db.SaveChanges();
The above works and runs through fine, but the changes are not saved to the database! Why is this?
Here is my db variable: dbDeSluggerEntities db = new dbDeSluggerEntities();
Thanks.
Edit: Here is an update: I noticed that when I run the program in the debugger, it shows that a history has been created and inserted, and it also raises an error that the primary key already exists; this happens each time I run the application. However, when I go to Server Explorer, right-click the db and select 'Show Table Data', it shows no data. When the program is run again, there is no primary key error and no histories show up in the debugger!
I was having exactly the same problem. By default the database file is copied into your bin folder when you run the app, and that copy is the one that gets used.
My solution was to go into the properties of the db file, set Copy to Output Directory to 'Do not copy', and put a full file path to the database in the connection string in the app's config file.
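A sketch of the same idea using the |DataDirectory| substitution instead of hard-coding the full path everywhere (the folder and .sdf file name are hypothetical):
// Point |DataDirectory| at a fixed folder once, at startup, so every connection
// string resolves to the same .sdf file the designer shows, not the copy in bin\Debug.
AppDomain.CurrentDomain.SetData("DataDirectory", @"C:\MyApp\Data");

// The app.config connection string then reads:
//   Data Source=|DataDirectory|\MyDatabase.sdf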
