I have an ASP.NET Core 2.0 application using Entity Framework Core on a SQL Server database.
I have to trace and audit everything the users do with the data. My goal is to have an automatic mechanism that records everything that happens.
For example, if I have the table Animals, I want a parallel table "Audit_Animals" where you can find all the info about the data, the operation type (add, delete, edit) and the user who made the change.
I already built this some time ago in Django + MySQL, but now the environment is different. I found this and it seems interesting, but I'd like to know if there are better ways, and which is the best approach to do this in EF Core.
UPDATE
I'm trying this and something happens, but I have some problems.
I added this:
services.AddMvc().AddJsonOptions(options => {
    options.SerializerSettings.ReferenceLoopHandling = ReferenceLoopHandling.Ignore;
});
public Mydb_Context(DbContextOptions<Mydb_Context> options) : base(options)
{
    Audit.EntityFramework.Configuration.Setup()
        .ForContext<Mydb_Context>(config => config
            .IncludeEntityObjects()
            .AuditEventType("Mydb_Context:Mydb"))
        .UseOptOut();
}
public MyRepository(Mydb_Context context)
{
    _context = context;
    _context.AddAuditCustomField("UserName", "pippo");
}
I also created a table to insert the audits (only one, to test the tool), but the only thing I got is what you see in the image: a list of .json files with the data I created... why?
Read the documentation:
Event Output
To configure the output persistence mechanism please see Configuration and Data Providers sections.
Then, in the documentation on Configuration:
If you don't specify a Data Provider, a default FileDataProvider will be used to write the events as .json files into the current working directory. (emphasis mine)
Long story short, follow the documentation to configure the data provider you'd like to use.
If you are going to map the audit table (Audit_Animals) to the same EF context as the audited Animals table, you can use the Entity Framework data provider included in the same Audit.EntityFramework library.
Check the documentation here:
Entity Framework Data Provider
If you plan to store the audit logs in the same database as the audited entities, you can use the EntityFrameworkDataProvider. Use this if you plan to store the audit trails for each entity type in a table with similar structure.
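For illustration, a minimal setup sketch of that provider, based on its documentation. The "Audit_" name mapping and the AuditAction/AuditDate/UserName members are assumptions; adapt them to your audit tables:

Audit.Core.Configuration.Setup()
    .UseEntityFramework(ef => ef
        // Map each audited entity to its audit table: Animals -> Audit_Animals, etc.
        .AuditTypeNameMapper(typeName => "Audit_" + typeName)
        .AuditEntityAction((evt, entry, auditEntity) =>
        {
            // entry.Action is "Insert", "Update" or "Delete"
            ((dynamic)auditEntity).AuditAction = entry.Action;
            ((dynamic)auditEntity).AuditDate = DateTime.UtcNow;
            // Picks up the custom field set via AddAuditCustomField("UserName", ...)
            ((dynamic)auditEntity).UserName = evt.CustomFields["UserName"];
        }));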
There is another library that can audit EF contexts in a similar way; take a look: zzzprojects/EntityFramework-Plus.
I cannot recommend one over the other, since they provide different features (and I'm the owner of the Audit.NET library).
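For completeness, a rough sketch of the EF Plus flavor, based on its documentation. The AuditEntries DbSet and the context cast are assumptions; check the EF Plus docs for your version:

using Z.EntityFramework.Plus;

// Configure once at startup: automatically persist audit entries on SaveChanges.
AuditManager.DefaultConfiguration.AutoSavePreAction = (context, audit) =>
    ((Mydb_Context)context).AuditEntries.AddRange(audit.Entries);

// Then audit a unit of work:
var audit = new Audit { CreatedBy = "pippo" };
_context.SaveChanges(audit);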
Update:
.NET 6 and Entity Framework Core 6.0 support SQL Server temporal tables out of the box.
See this answer for examples:
https://stackoverflow.com/a/70017768/3850405
Original:
You could have a look at temporal tables (system-versioned temporal tables) if you are using SQL Server 2016 or later, or Azure SQL.
https://learn.microsoft.com/en-us/sql/relational-databases/tables/temporal-tables?view=sql-server-ver15
From documentation:
A database feature that brings built-in support for providing information about data stored in the table at any point in time, rather than only the data that is correct at the current moment in time.
Temporal is a database feature that was introduced in ANSI SQL 2011.
There is currently an open issue to support this out of the box:
https://github.com/dotnet/efcore/issues/4693
There are third-party options available today, but since they are not from Microsoft there is of course a risk that they won't be supported in future versions.
https://github.com/Adam-Langley/efcore-temporal-query
https://github.com/findulov/EntityFrameworkCore.TemporalTables
I solved it like this:
If you use the Visual Studio 2019 bundled LocalDB (Microsoft SQL Server 2016, 13.1.4001.0), you will need to upgrade if you use cascading DELETE or UPDATE, because temporal tables with cascading actions are not supported in that version.
Complete guide for upgrading here:
https://stackoverflow.com/a/64210519/3850405
Start by adding a new empty migration. I prefer to use Package Manager Console (PMC):
Add-Migration "Temporal tables"
Should look like this:
public partial class Temporaltables : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
    }
}
Then edit the migration like this:
public partial class Temporaltables : Migration
{
    List<string> tablesToUpdate = new List<string>
    {
        "Images",
        "Languages",
        "Questions",
        "Texts",
        "Medias",
    };

    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.Sql("CREATE SCHEMA History");
        foreach (var table in tablesToUpdate)
        {
            string alterStatement = $@"ALTER TABLE [{table}] ADD SysStartTime datetime2(0) GENERATED ALWAYS AS ROW START HIDDEN
                CONSTRAINT DF_{table}_SysStart DEFAULT GETDATE(), SysEndTime datetime2(0) GENERATED ALWAYS AS ROW END HIDDEN
                CONSTRAINT DF_{table}_SysEnd DEFAULT CONVERT(datetime2 (0), '9999-12-31 23:59:59'),
                PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)";
            migrationBuilder.Sql(alterStatement);
            alterStatement = $@"ALTER TABLE [{table}] SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = History.[{table}]));";
            migrationBuilder.Sql(alterStatement);
        }
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        foreach (var table in tablesToUpdate)
        {
            string alterStatement = $@"ALTER TABLE [{table}] SET (SYSTEM_VERSIONING = OFF);";
            migrationBuilder.Sql(alterStatement);
            alterStatement = $@"ALTER TABLE [{table}] DROP PERIOD FOR SYSTEM_TIME";
            migrationBuilder.Sql(alterStatement);
            alterStatement = $@"ALTER TABLE [{table}] DROP DF_{table}_SysStart, DF_{table}_SysEnd";
            migrationBuilder.Sql(alterStatement);
            alterStatement = $@"ALTER TABLE [{table}] DROP COLUMN SysStartTime, COLUMN SysEndTime";
            migrationBuilder.Sql(alterStatement);
            alterStatement = $@"DROP TABLE History.[{table}]";
            migrationBuilder.Sql(alterStatement);
        }
        migrationBuilder.Sql("DROP SCHEMA History");
    }
}
tablesToUpdate should contain every table you need history for.
Then run the Update-Database command.
Original source, slightly modified (escaping table names with square brackets, etc.):
https://intellitect.com/updating-sql-database-use-temporal-tables-entity-framework-migration/
Testing Create, Update and Delete will then show a complete history.
[HttpGet]
public async Task<ActionResult<string>> Test()
{
    var identifier1 = "OATestar123";
    var identifier2 = "OATestar12345";

    var newQuestion = new Question()
    {
        Identifier = identifier1
    };
    _dbContext.Questions.Add(newQuestion);
    await _dbContext.SaveChangesAsync();

    var question = await _dbContext.Questions.FirstOrDefaultAsync(x => x.Identifier == identifier1);
    question.Identifier = identifier2;
    await _dbContext.SaveChangesAsync();

    question = await _dbContext.Questions.FirstOrDefaultAsync(x => x.Identifier == identifier2);
    _dbContext.Entry(question).State = EntityState.Deleted;
    await _dbContext.SaveChangesAsync();

    return Ok();
}
Tested a few times; the log will look like this:
This solution has a huge advantage, IMO: it is not Object Relational Mapper (ORM) specific, and you will even get history if you write plain SQL.
The history tables are also read-only by default, so there is less chance of a corrupt audit trail. Error received: Cannot update rows in a temporal history table ''
If you need access to the data you can use your preferred ORM to fetch it, or audit via SQL.
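Before EF Core 6.0 there is no LINQ support for temporal queries, but raw SQL works. A minimal sketch against the Questions table from the test above (FOR SYSTEM_TIME ALL returns current and historical row versions; AsNoTracking is needed because old versions share the same key, and the HIDDEN period columns are excluded from SELECT *, so the entity shape still matches):

var history = await _dbContext.Questions
    .FromSqlRaw("SELECT * FROM [Questions] FOR SYSTEM_TIME ALL WHERE [Identifier] = {0}", identifier2)
    .AsNoTracking()
    .ToListAsync();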
Related
When running a procedure on EF Core 3 using FromSqlRaw that updates values in the table, EF does not return the updated values when I query the database for those changed values.
I have been able to reproduce this behavior. To reproduce, create a new C# console app targeting .NET Core 3.1.
Copy-paste the code below into your Program.cs file:
using System;
using System.Linq;
using Microsoft.Data.SqlClient;
using Microsoft.EntityFrameworkCore;

namespace EfCoreTest
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");

            // testing proc
            var dbContext = new TestContext();
            var tables = dbContext.TestTables.ToList();
            var updated = dbContext.TestTables
                .FromSqlRaw("execute testProc @Id=@Id, @Comments=@Comments", new object[]
                {
                    new SqlParameter("Id", 1),
                    new SqlParameter("Comments", "testing comments 2"),
                })
                .ToList();
            var again = dbContext.TestTables.ToList();
        }
    }

    public class TestTable
    {
        public int TestTableId { get; set; }
        public string Comment { get; set; }
    }

    public class TestContext : DbContext
    {
        public DbSet<TestTable> TestTables { get; set; }

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            optionsBuilder.UseSqlServer(@"Server=localhost\SQLEXPRESS;Database=TestDb;Trusted_Connection=True");
        }
    }
}
Ensure that the following packages are installed:
Microsoft.EntityFrameworkCore
Microsoft.EntityFrameworkCore.Design
Microsoft.EntityFrameworkCore.SqlServer
Microsoft.EntityFrameworkCore.SqlServer.Design
Change your connection string if necessary.
Run dotnet ef migrations add initial
Run dotnet ef database update
Run the following SQL in your database:
drop procedure if exists testProc
go
create procedure testProc
    @Id int,
    @Comments nvarchar(max)
as
begin
    update dbo.TestTables
    set Comment = @Comments
    where TestTableId = @Id;

    select * from dbo.TestTables;
end
go

INSERT INTO [dbo].[TestTables] (Comment)
VALUES ('Test Comment');
So when you run the Main program in the debugger and set a breakpoint, you'll notice that none of the objects contain the values that were updated by the procedure when you inspect them. Yet if you run a select statement on the table while in debug, you will see that the "Comment" field is indeed updated.
Why is this?
This is not specific to FromSql, but to the way EF Core (all versions) tracking queries work.
Here is an excerpt from the EF Core How Queries Work documentation topic:
The following is a high level overview of the process each query goes through.
- The LINQ query is processed by Entity Framework Core to build a representation that is ready to be processed by the database provider
  - The result is cached so that this processing does not need to be done every time the query is executed
- The result is passed to the database provider
  - The database provider identifies which parts of the query can be evaluated in the database
  - These parts of the query are translated to database specific query language (for example, SQL for a relational database)
  - One or more queries are sent to the database and the result set returned (results are values from the database, not entity instances)
- For each item in the result set
  - If this is a tracking query, EF checks if the data represents an entity already in the change tracker for the context instance
    - If so, the existing entity is returned
    - If not, a new entity is created, change tracking is set up, and the new entity is returned
Note the last bullet. What they do is basically an implementation of the so-called client wins strategy (as opposed to database wins, which is what you are looking for), and currently there is no way of changing that other than using a no-tracking query.
In your example, insert AsNoTracking() somewhere in the query (before ToList, after dbContext.TestTables; it really doesn't matter, because it applies to the whole query), or just set:
dbContext.ChangeTracker.QueryTrackingBehavior = QueryTrackingBehavior.NoTracking;
and now you'll see the updated values (from your SP call or from other sessions to the same database).
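For example, applied to the query from the question:

var updated = dbContext.TestTables
    .FromSqlRaw("execute testProc @Id=@Id, @Comments=@Comments",
        new SqlParameter("Id", 1),
        new SqlParameter("Comments", "testing comments 2"))
    .AsNoTracking() // bypass the change tracker so the database values win
    .ToList();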
I have been struggling to get the final sample (ASP.NET, EF Core, SQL) to work against a real SQL Server. Every sample I can find avoids real SQL Server; they always opt for the in-memory data store.
I changed the connection string
"Data Source=.;Initial Catalog=IS4;Integrated Security=True;"
and ran
dotnet ef database update -c ApplicationDbContext
This created a SQL database with 25 tables.
I tweaked Startup.cs to change
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlServer(connectionString));
and changed b.UseSqlite to b.UseSqlServer:
.AddConfigurationStore(options =>
{
    options.ConfigureDbContext = b =>
        b.UseSqlServer(connectionString,
            sql => sql.MigrationsAssembly(migrationsAssembly));
})
// this adds the operational data from DB (codes, tokens, consents)
.AddOperationalStore(options =>
{
    options.ConfigureDbContext = b =>
        b.UseSqlServer(connectionString,
            sql => sql.MigrationsAssembly(migrationsAssembly));

    // this enables automatic token cleanup. this is optional.
    options.EnableTokenCleanup = true;
    // options.TokenCleanupInterval = 15;
});
I ran the server with "/seed" on the command line, but the seed functionality doesn't work.
First it complains that Client can't have a NULL Id when it calls SaveChanges(). If I change the code to add the Id:
if (!context.Clients.Any())
{
    Console.WriteLine("Clients being populated");
    int i = 1;
    foreach (var client in Config.GetClients().ToList())
    {
        var x = client.ToEntity();
        x.Id = i++;
        context.Clients.Add(x);
    }
    context.SaveChanges();
}
else
{
    Console.WriteLine("Clients already populated");
}
I then get:
"Cannot insert the value NULL into column 'Id', table 'IS4.dbo.ClientGrantTypes'".
When I watch the videos it says the sample can be migrated from SQLite to full SQL Server simply by changing the connection string, which is obviously not true given all the other changes I have made, so I must be doing (or missing) something else.
Any thoughts?
Could it be that all the tables with an "Id INT" column should be IDENTITY columns, and they are not?
I checked the migrations code and it has:
protected override void Up(MigrationBuilder migrationBuilder)
{
    migrationBuilder.CreateTable(
        name: "ApiResources",
        columns: table => new
        {
            Id = table.Column<int>(nullable: false)
                .Annotation("Sqlite:Autoincrement", true),
            Description = table.Column<string>(maxLength: 1000, nullable: true),
            DisplayName = table.Column<string>(maxLength: 200, nullable: true),
I am guessing that
.Annotation("Sqlite:Autoincrement", true)
doesn't work with full SQL Server, and therefore all the tables need identity properties set.
Interestingly, if you run the other template to add the AdminUI:
dotnet new is4admin
it seems to add a couple of SQL scripts:
CREATE TABLE "Clients" (
"Id" INTEGER NOT NULL CONSTRAINT "PK_Clients" PRIMARY KEY AUTOINCREMENT,
"AbsoluteRefreshTokenLifetime" INTEGER NOT NULL,
"AccessTokenLifetime" INTEGER NOT NULL,
which does make them identity columns.
I was faced with this issue today, did a couple of searches online, and stumbled upon this: https://entityframeworkcore.com/knowledge-base/46587067/ef-core---do-sqlserver-migrations-apply-to-sqlite-
The link pointed out to switch the annotation portion in the migration class's Up method after
Id = table.Column<int>(nullable: false)
from
.Annotation("Sqlite:Autoincrement", true)
to
.Annotation("SqlServer:ValueGenerationStrategy", SqlServerValueGenerationStrategy.IdentityColumn)
You will also need to import
using Microsoft.EntityFrameworkCore.Metadata;
Then you build, and the migration will be successful.
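Putting the pieces together, each key column in the generated migration ends up looking like this:

// requires: using Microsoft.EntityFrameworkCore.Metadata;
Id = table.Column<int>(nullable: false)
    .Annotation("SqlServer:ValueGenerationStrategy", SqlServerValueGenerationStrategy.IdentityColumn),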
To resolve this particular issue I used SSMS:
right-click on the table
select Script Table as > DROP And CREATE
add IDENTITY after the NOT NULL
Execute
However, you are correct: it is using SQLite annotations in the SQL file and in the migrations.
To fully resolve this issue, you need to create an implementation of all 3 necessary database contexts: identity, persisted grant, and configuration.
That requires an implementation of a design-time factory for each of those contexts as well (a sketch follows the recap below).
Then you can run add-migration in the Package Manager Console for each of those contexts, and then run update-database, or run the application with the migrate function when seeding.
So to recap:
Create implementations for the 3 db contexts
Create Design time factory implementations for those db contexts
Add the migrations
Update the database with those migrations
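A minimal sketch of one such design-time factory, assuming the ConfigurationDbContext from IdentityServer4.EntityFramework and the connection string used above (repeat the pattern for the other two contexts):

using IdentityServer4.EntityFramework.DbContexts;
using IdentityServer4.EntityFramework.Options;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Design;

public class ConfigurationContextFactory : IDesignTimeDbContextFactory<ConfigurationDbContext>
{
    public ConfigurationDbContext CreateDbContext(string[] args)
    {
        var builder = new DbContextOptionsBuilder<ConfigurationDbContext>();
        builder.UseSqlServer("Data Source=.;Initial Catalog=IS4;Integrated Security=True;");
        // The store options are normally supplied by DI; defaults suffice at design time.
        return new ConfigurationDbContext(builder.Options, new ConfigurationStoreOptions());
    }
}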
I want to alter the table model during build time in my BuildContributor. Here is some sample code:
using Microsoft.SqlServer.Dac.Deployment;
using Microsoft.SqlServer.Dac.Extensibility;
using Microsoft.SqlServer.Dac.Model;
using System.Collections.Generic;
using System.Linq;

namespace MyNamespace
{
    [ExportBuildContributor("MyNamespace.MyBuildContributor", "1.0.0.0")]
    public class MyBuildContributor : BuildContributor
    {
        protected override void OnExecute(BuildContributorContext context, IList<ExtensibilityError> messages)
        {
            foreach (var table in context.Model.GetObjects(DacQueryScopes.UserDefined, ModelSchema.Table))
            {
                var tableName = table.Name.Parts.Last();
                var rowId = "alter table " + tableName + " add rowid uniqueidentifier";
                context.Model.AddObjects(rowId);
            }
        }
    }
}
The build succeeds with no errors, but I don't see rowid in any of the tables when I look in the model.xml file inside bin\Debug\MyDb.dacpac.
You can't use Model.AddObjects in this context.
From the documentation on Model.AddObjects (https://msdn.microsoft.com/en-us/library/microsoft.sqlserver.dac.model.tsqlmodel.addobjects(v=sql.120).aspx#M:Microsoft.SqlServer.Dac.Model.TSqlModel.AddObjects(System.String)):
"Adds objects to the model based on the contents of a TSql Script string. The script should consist of valid TSql DDL statements. Objects added using this method cannot be updated or deleted at a later point as update/delete requires a script name to be specified when adding the objects. If this is a requirement use the AddOrUpdateObjects method instead."
I.e. it can only add whole objects such as tables or stored procedures; columns by themselves aren't added to the model.
If you want to update an existing object (i.e. add a column to an existing table) you will need to use TSqlModel.AddOrUpdateObjects, which also takes a script name. You can get the script name from a build contributor using:
var sourceName = table.GetSourceInformation().SourceName;
Then you can build the updated script you want (just a rough outline of rebuilding the SQL; I'm sure you can do better):
var sql = table.GetScript();
sql = sql.Trim().TrimEnd(')', ';') + ", rowid uniqueidentifier);";
var sourceName = table.GetSourceInformation().SourceName;
model.AddOrUpdateObjects(sql, sourceName, new TSqlObjectOptions());
There are a few ways you could create your new script, but basically what you need is a new script containing the original table definition plus your extra column, which you can pass to AddOrUpdateObjects to overwrite the original create table statement.
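Dropped into the contributor from the question, a sketch could look like this (ToList snapshots the table list, since the loop modifies the model while iterating; this assumes every table has source information available):

protected override void OnExecute(BuildContributorContext context, IList<ExtensibilityError> messages)
{
    foreach (var table in context.Model.GetObjects(DacQueryScopes.UserDefined, ModelSchema.Table).ToList())
    {
        // Rebuild the CREATE TABLE script with the extra column appended,
        // then overwrite the original object via its source (script) name.
        var sql = table.GetScript();
        sql = sql.Trim().TrimEnd(')', ';') + ", rowid uniqueidentifier);";
        var sourceName = table.GetSourceInformation().SourceName;
        context.Model.AddOrUpdateObjects(sql, sourceName, new TSqlObjectOptions());
    }
}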
If you can't get a source name to use with AddOrUpdateObjects, you could instead use a post-deploy script to add the column to any table that needs it, and then use a deployment contributor to remove the drop column step.
You could also look at using a deployment contributor instead, to add the new column step there.
Hope it helps! Let me know how you get on :)
We're having a strange problem in Oracle. I'll sketch some (simplified) context first:
Consider this mapping to an Entity:
public EntityMap()
{
    Table("EntityTable");
    Id(x => x.Id)
        .Column("entityID")
        .GeneratedBy.Native("ENTITYID").UnsavedValue(0);
    Map(x => x.SomeBoolean).Column("SomeBoolean");
}
and this code:
var entity = new Entity();

using (var transaction = new TransactionScope(TransactionScopeOption.Required))
{
    Session.Save(entity);
    transaction.Complete();
}

// A lot of code

if (someCondition)
{
    using (var transaction = new TransactionScope(TransactionScopeOption.Required))
    {
        entity.SomeBoolean = true;
        Session.Update(entity);
        transaction.Complete();
    }
}
This code is called a few times. The first time it generates the following queries:
select ENTITYID.nextval from dual
INSERT INTO Entity
(SomeBoolean, EntityID)
VALUES (0, 1216)
UPDATE Entity
SET SomeBoolean = 1
WHERE EntityID = 1216
The second time it is called these queries are generated (someCondition is false)
select ENTITYID.nextval from dual
INSERT INTO Entity
(SomeBoolean, EntityID)
VALUES (0, 1217)
And now the trouble begins. From now on, each insert will use the correct autoincremented value, but the update will always use 1217:
select ENTITYID.nextval from dual
INSERT INTO Entity
(SomeBoolean, EntityID)
VALUES (0, 1218)
UPDATE Entity
SET SomeBoolean = 1
WHERE EntityID = 1217
And of course, this is not what we want to happen. If I inspect the value of the Id while debugging, it contains the correct autoincremented value. Somehow, deep in the bowels of NHibernate, the incorrect id is assigned to the WHERE clause...
The strange part is that this only happens on Oracle. If I switch NHibernate to MS SQL, everything works like a charm.
So I found out what happened. NHibernate changed its default connection release mode between versions 1.x and 2.x. Instead of closing the connection when the session is disposed, the connection is now closed after each transaction. However, we were manually coordinating our transactions, which apparently caused trouble in Oracle.
This question has some extra information, and this entry in the NHibernate documentation also clarifies how connections are handled:
As of NHibernate, if your application manages transactions through .NET APIs such as the System.Transactions library, ConnectionReleaseMode.AfterTransaction may cause NHibernate to open and close several connections during one transaction, leading to unnecessary overhead and transaction promotion from local to distributed. Specifying ConnectionReleaseMode.OnClose will revert to the legacy behavior and prevent this problem from occurring.
This blog post is what got me looking in the right direction.
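For reference, a minimal sketch of forcing the legacy mode through code-based configuration (the same can be set with the connection.release_mode property in hibernate.cfg.xml):

// NHibernate.Cfg.Environment.ReleaseConnections maps to "connection.release_mode";
// "on_close" keeps the connection open until the session is disposed.
var cfg = new NHibernate.Cfg.Configuration();
cfg.SetProperty(NHibernate.Cfg.Environment.ReleaseConnections, "on_close");
var sessionFactory = cfg.BuildSessionFactory();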
I am creating an ASP.NET MVC 4 site using Entity Framework 5 with Code First and SQL Server Express 2012.
I have enabled migrations, and now do this in my Configuration.Seed method
(note that I want to set the primary key to 8 even though this is the first record in the database):
context.ProductCategoryDtoes.AddOrUpdate(x => x.Id,
    new ProductCategoryDto() { Id = 8, Name = "category1" }
);
My model object is defined like this:
[Table("ProductCategory")]
public class ProductCategoryDto
{
    public long Id { get; set; }
    public string Name { get; set; }
}
This results in a table (in SQL Server Express 2012) where the Id column has Identity = true, identity seed = 1, identity increment = 1.
Now when I run migrations by doing PM> Update-Database, this results in a row with Id = 1.
So my questions are:
1) How can I control the values of auto-incremented primary keys when seeding data?
2) If the solution is to change the key column's seed value, how is this to be done when I am using Database.SetInitializer(new DropCreateDatabaseAlways<MyContext>())? This will nuke and rebuild the database every time I update the database, so how would the seed value be updated in the fresh database?
Just create dummy entities with default values, then add your real data and afterwards delete the dummies. Not the best way, but I guess there is no other...
Have you tried adding this on top of your Id property?
[DatabaseGenerated(DatabaseGeneratedOption.None)]
public long Id { get; set; }
It seems you are trying to defeat the purpose of an identity column. If you want to do this, your only choice is to use the SQL command SET IDENTITY_INSERT to allow you to insert the value, and then run DBCC CHECKIDENT to update the seed. Not a really good idea: these options have security and performance limitations.
You may want to consider using a GUID instead. You can create GUIDs in code that are guaranteed to be unique, and you can also generate GUIDs in SQL as a column default.
With GUIDs, which are non-sequential, you will need to think through a good indexing strategy. This approach is also debatable.
Ultimately, it looks like you need a different strategy than an identity column.
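A minimal sketch of the GUID approach, reusing the question's model (the attribute turns off database generation so the client-supplied value is stored as-is):

[Table("ProductCategory")]
public class ProductCategoryDto
{
    [DatabaseGenerated(DatabaseGeneratedOption.None)]
    public Guid Id { get; set; }
    public string Name { get; set; }
}

// Seeding can then supply a fixed, well-known key (analogous to Id = 8),
// which keeps AddOrUpdate idempotent across runs:
context.ProductCategoryDtoes.AddOrUpdate(x => x.Id,
    new ProductCategoryDto { Id = new Guid("00000000-0000-0000-0000-000000000008"), Name = "category1" });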
It is very hackish, but I ran into a scenario where I had to do it, due to a report having hard-coded PK values. Fixing the reports was beyond my scope of work.
Context.Database.ExecuteSqlCommand(
    "SET IDENTITY_INSERT dbo.ProductCategoryDto ON " +
    "INSERT INTO dbo.ProductCategoryDto (Id, Name) VALUES (8, 'category1') " +
    "SET IDENTITY_INSERT dbo.ProductCategoryDto OFF");