How to switch between DatabaseGeneratedOption.Identity, Computed and None at runtime without having to generate empty DbMigrations - sql-server

I am migrating a legacy database to a new database which we need to access and "manage" (as oxymoronic as it might sound) primarily through Entity Framework Code-First.
We are using MS SQL Server 2014.
The legacy database contained some tables with computed columns. Typical GUID and DateTime stuff.
Technically speaking, these columns did not have a computed column specification, but rather were given a default value with NEWID() and GETDATE().
We all know that it is very easy to configure the DbContext to deal with those properties as follows:
modelBuilder.Entity<Foo>()
    .Property(t => t.Guid)
    .HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed);

modelBuilder.Entity<Bar>()
    .Property(t => t.DTS)
    .HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed);
The above instructs Entity Framework not to submit any supplied values for such properties during INSERTs and UPDATEs.
But now we need to allow for the import of legacy records and maintain the OLD values, including the PRIMARY KEY, which is marked as IDENTITY.
This means we would have to set the Id, Guid and DTS properties to DatabaseGeneratedOption.None while inserting those records.
For the case of Id, we would have to somehow execute SET IDENTITY_INSERT ... ON/OFF within the connection session.
And we want to do this importing process via Code-First as well.
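For illustration, here is a minimal sketch of what such an import could look like (MyContext and legacyFoo are hypothetical; the explicit transaction keeps SET IDENTITY_INSERT in the same session as the INSERTs). Note that this still requires the model to map Id with DatabaseGeneratedOption.None, which is exactly the problem:

using (var context = new MyContext())
using (var transaction = context.Database.BeginTransaction())
{
    // IDENTITY_INSERT is a per-session, per-table setting, so it must run
    // on the same connection/transaction that performs the inserts.
    context.Database.ExecuteSqlCommand("SET IDENTITY_INSERT dbo.Foos ON");

    context.Foos.Add(legacyFoo); // carries the legacy Id, Guid and DTS values
    context.SaveChanges();

    context.Database.ExecuteSqlCommand("SET IDENTITY_INSERT dbo.Foos OFF");
    transaction.Commit();
}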
If I "temporarily" modify the model and set those properties to DatabaseGeneratedOption.None after the database has been created, we get the typical:
The model backing the context has changed since the database was created. Consider using Code First Migrations to update the database.
I understand that we could generate an empty coded migration with -IgnoreChanges so as to "establish" this latest version of the context, but this wouldn't be an acceptable strategy, as we would have to run empty migrations back and forth solely for this purpose.
Half an answer:
We have considered giving these properties nullable types, i.e.
public class Foo
{
    ...
    public Guid? Guid { get; set; }
}

public class Bar
{
    ...
    public DateTime? DTS { get; set; }
}
while taking care of the default values in an initial DbMigration:
CreateTable(
    "dbo.Foos",
    c => new
    {
        Id = c.Int(nullable: false, identity: true),
        Guid = c.Guid(nullable: false, defaultValueSql: "NEWID()"),
    })
    .PrimaryKey(t => t.Id);

CreateTable(
    "dbo.Bars",
    c => new
    {
        Id = c.Int(nullable: false, identity: true),
        DTS = c.DateTime(nullable: false, defaultValueSql: "GETDATE()"),
    })
    .PrimaryKey(t => t.Id);
The Question:
But the question remains: Is there a way to switch between DatabaseGeneratedOption.Identity, DatabaseGeneratedOption.Computed and DatabaseGeneratedOption.None at runtime?
At the very least, how could we turn DatabaseGeneratedOption.Identity on/off at runtime?

A certain amount of the configuration of the context is always going to be dependent on the runtime environment - for example, proxy generation and validation. As such, runtime configuration of the Entity Framework DbContext is something I leverage quite heavily.
Although I've never used this approach to switch the configuration of the context on a per use-case basis, I see no reason why this would not work.
In its simplest form, this can be achieved by having a set of EntityTypeConfiguration classes for each environment. Each configuration set is then wired to the DbContext on a per-environment basis. Again, in its simplest form this could be achieved by having a DbContext type per environment. In your case, this would be per use-case.
Less naively, I usually encapsulate the configuration of the context in an environment-specific unit of work. For example, the unit of work for an ASP.NET environment has an underlying DbContext configured to delegate validation to the web framework, as well as to turn off proxy generation to prevent serialisation issues. I imagine this approach would be similarly useful for your problem.
For example (using brute force code):
// Foo Configuration which enforces computed columns
public class FooConfiguration : EntityTypeConfiguration<Foo>
{
    public FooConfiguration()
    {
        Property(p => p.DateTime).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed);
        Property(p => p.Guid).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed);
    }
}

// Foo configuration that allows computed columns to be overridden
public class FooConfiguration2 : EntityTypeConfiguration<Foo>
{
    public FooConfiguration2()
    {
        Property(p => p.DateTime).HasDatabaseGeneratedOption(DatabaseGeneratedOption.None);
        Property(p => p.Guid).HasDatabaseGeneratedOption(DatabaseGeneratedOption.None);
    }
}

// DbContext that enforces computed columns
public class MyContext : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new FooConfiguration());
    }
}

// DbContext that allows computed columns to be overridden
public class MyContext2 : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new FooConfiguration2());
    }
}
This can obviously be tidied up - we usually use a combination of factory and strategy patterns to encapsulate the creation of a runtime specific context. In combination with a DI container this allows the correct set up configuration classes to be injected on a per-environment basis.
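As a rough sketch of that idea (the names here are illustrative, not part of the original answer), a trivial factory could pick the context per use-case:

// Hypothetical factory that selects the context configuration per use-case.
public enum ImportMode { Normal, LegacyImport }

public static class ContextFactory
{
    public static DbContext Create(ImportMode mode)
    {
        // Each context type wires up its own EntityTypeConfiguration set
        // in OnModelCreating, as shown above.
        return mode == ImportMode.LegacyImport
            ? (DbContext)new MyContext2()
            : new MyContext();
    }
}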
Example usage:
[Fact]
public void CanConfigureContextAtRuntime()
{
    // Enforce computed columns
    using (var context = new MyContext())
    {
        var foo1 = new Foo();
        context.Foos.Add(foo1);
        context.SaveChanges();
    }

    // Allow overridden computed columns
    using (var context = new MyContext2())
    {
        var foo2 = new Foo { DateTime = DateTime.Now.AddYears(-3) };
        context.Foos.Add(foo2);
        context.SaveChanges();
    }

    // etc
}

Related

Not Nullable fields in SQL Server Still Considered Required Fields in ASP.NET MVC

I have non-nullable fields in a table with default values already set in the "Default Value or Binding" property in SSMS. I linked this table into an ASP.NET MVC application. When I created the view and ran it, it still asked me to enter values for the non-nullable fields, even though I had assigned default values for them.
After this I removed the line:
@Html.ValidationMessageFor(model => model.position, "", new { @class = "text-danger" })
which is below each
@Html.EditorFor statement, but this time it posted me back to the same page with no changes in the database.
How can I get rid of the message for the required fields, as I already have default values for them?
You can simply create a constructor in your model. It will initialize default values when a new instance is created. If the user provides those fields, the defaults will be overridden; if not, the values from the constructor will be passed to EF.
What you are trying to do now won't work, according to the specifications: https://msdn.microsoft.com/en-us/library/ms187872.aspx
Here you can see how to achieve what you want, considering that this field will always be generated in the database.
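For example, a minimal sketch (the class and default value are hypothetical, based on the position field from the view above):

public class Employee
{
    public Employee()
    {
        // Default applied when the instance is created; it is overridden
        // by whatever value the user supplies on the form.
        position = "Unassigned";
    }

    public string position { get; set; }
}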
This is one of the many reasons not to use entity models as viewmodels. MVC doesn't care that the required field has a default value, that information is database-related and not related to input validation.
Introduce a viewmodel where those properties are not required, and map the posted viewmodel to your entity model.
So, given an entity model that looks like this:
public class SomeEntity
{
    // this property is not-nullable in the database
    [Required]
    public string SomeRequiredDatabaseField { get; set; }
}
Because the SomeRequiredDatabaseField is NOT NULL, it is annotated as such by Entity Framework, even if it has a default value. MVC will pick up this annotation, and consider the model not valid when the property has no value.
Now if you introduce a viewmodel, you can tell MVC that this property is not required:
public class SomeViewModel
{
    // Not required
    public string SomeRequiredDatabaseField { get; set; }
}
Now in your controller, you map the viewmodel to the entity model (preferably using AutoMapper):
public ActionResult Post(SomeViewModel model)
{
    if (ModelState.IsValid)
    {
        var entityToSave = new SomeEntity
        {
            SomeRequiredDatabaseField = model.SomeRequiredDatabaseField
        };
        db.SomeEntity.Add(entityToSave);
        db.SaveChanges();
    }

    // ...
}
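With AutoMapper, the manual copy above could be replaced by something like this (a sketch assuming AutoMapper's MapperConfiguration API):

// One-time configuration, e.g. at application start.
var config = new MapperConfiguration(cfg => cfg.CreateMap<SomeViewModel, SomeEntity>());
var mapper = config.CreateMapper();

// In the controller action:
var entityToSave = mapper.Map<SomeEntity>(model);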

How do I map a column to uppercase in .NET 4.5 C# Entity Framework 6 using both Oracle and SQL Server?

I'm using C#, .NET 4.5 and Entity Framework 6 in my project. It uses both Oracle and SQL Server, depending on the installation at the client.
The approach is database-first, as this database existed already by the time we decided to change the ORM from NHibernate to Entity Framework 6.
The mapping looks like this:
ToTable(schema + ".Motorista");
Property(x => x.Criacao).HasColumnName("criacao").IsOptional();
The table and column names are all in PascalCase in the mapping, which works fine with SQL Server; in Oracle, however, all the names are uppercase, which causes an error:
ORA-00942: table or view does not exist
If I manually make the names uppercase, then it works fine on Oracle. But I can't do that, because of compatibility with SQL Server.
How can I tell Entity Framework to uppercase all the names when using Oracle?
Can I use conventions in this scenario?
When the database names (tables and columns) are equal to the class and property names in the class model, it's very easy to introduce custom code-first conventions.
In the context's OnModelCreating override you can add these lines to define how table and column names will be derived from the class and property names, respectively:
modelBuilder.Types().Configure(
    c => c.ToTable(c.ClrType.Name.ToUpper(), schema));
modelBuilder.Properties().Configure(
    c => c.HasColumnName(c.ClrPropertyInfo.Name.ToUpper()));
Of course you should do this conditionally, i.e. when connecting to Oracle. For instance by checking a global constant like OnOracle that you could set by
ConfigurationManager.ConnectionStrings[0].ProviderName == "System.Data.OracleClient"
on application start up.
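Putting the two together, OnModelCreating could apply the conventions only when running against Oracle. A sketch (the connection string name and provider-name check are assumptions about your configuration):

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    var providerName = ConfigurationManager.ConnectionStrings["myDb"].ProviderName;
    var onOracle = providerName.IndexOf("Oracle", StringComparison.OrdinalIgnoreCase) >= 0;

    if (onOracle)
    {
        // schema is whatever schema name your mapping already uses.
        modelBuilder.Types().Configure(
            c => c.ToTable(c.ClrType.Name.ToUpper(), schema));
        modelBuilder.Properties().Configure(
            c => c.HasColumnName(c.ClrPropertyInfo.Name.ToUpper()));
    }
}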
Check the providerName attribute in the named connection string to see whether your connection is for SQL Server or Oracle (or add a redundant value in the appSettings section of the configuration). Then do what @AaronLS suggested and add a helper method to case your names correctly and apply any additional formatting. The helper method should check the database type as mentioned above and apply or skip the casing/formatting accordingly.
Here is an example.
public class MyDbContext : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new SomeMappedTypeMapper());
        base.OnModelCreating(modelBuilder);
    }
}

public class SomeMappedType
{
    public int SomeMappedColumnId { get; set; }
    public string SomeMappedColumn { get; set; }
}

public class SomeMappedTypeMapper : EntityTypeConfiguration<SomeMappedType>
{
    public SomeMappedTypeMapper()
    {
        this.HasKey(x => x.SomeMappedColumnId);
        this.ToTable("SomeMappedType"); // If needed, apply the same technique as used in the column name extension
        this.Property(x => x.SomeMappedColumnId).HasColumnNameV2("SomeMappedColumnId").HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
        this.Property(x => x.SomeMappedColumn).HasColumnNameV2("SomeMappedColumn");
    }
}

public static class TypeHelper
{
    private static bool isOracle;

    static TypeHelper()
    {
        isOracle = System.Configuration.ConfigurationManager.ConnectionStrings["yourDbConnectionName"].ProviderName.IndexOf("oracle", StringComparison.OrdinalIgnoreCase) >= 0;
    }

    public static PrimitivePropertyConfiguration HasColumnNameV2(this PrimitivePropertyConfiguration property, string columnName)
    {
        if (isOracle)
            return property.HasColumnName(columnName.ToUpper());
        return property.HasColumnName(columnName);
    }
}
This approach is for EF Core, but it may help you. It converts names to upper case, though you can change it to lower case, and you can also use the Humanizer NuGet package for other kinds of capitalization.
Import that file into your project and use it like this:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    base.OnModelCreating(modelBuilder);

    modelBuilder.ToUpperCaseTables();
    modelBuilder.ToUpperCaseColumns();
    // ...
}
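The referenced file is not reproduced here, but extension methods along these lines would do it (a sketch assuming EF Core 3.x relational metadata APIs):

public static class ModelBuilderExtensions
{
    public static void ToUpperCaseTables(this ModelBuilder modelBuilder)
    {
        // Rename every mapped table to its upper-case equivalent.
        foreach (var entity in modelBuilder.Model.GetEntityTypes())
            entity.SetTableName(entity.GetTableName().ToUpper());
    }

    public static void ToUpperCaseColumns(this ModelBuilder modelBuilder)
    {
        // Rename every mapped column to its upper-case equivalent.
        foreach (var entity in modelBuilder.Model.GetEntityTypes())
            foreach (var property in entity.GetProperties())
                property.SetColumnName(property.GetColumnName().ToUpper());
    }
}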
Consider a table called "Person" with a column called "Name" in SQL Server, but in Oracle the table is called "PERSON" with a column called "NAME".
We were able to use our models generated against SQL Server on our Oracle database by adding the following code to the DbContext class's OnModelCreating:
modelBuilder.Entity<Person>()
    .HasEntitySetName("Person")
    .ToTable("PERSON");

modelBuilder.Entity<Person>()
    .Property(t => t.Name)
    .HasColumnName("NAME");

DbSet.Load() method is too slow

I have an SQLite database, which contains one table named "Main". Each record of this table contains only two fields: ID (integer, primary key) and name (string). There are 100 records in the database.
Using Entity Framework Power Tools I've created the Code First model from the existing database. The model is rather simple:
// MainMap.cs
public class MainMap : EntityTypeConfiguration<Main>
{
    public MainMap()
    {
        // Primary Key
        this.HasKey(t => t.ID);

        // Properties
        this.Property(t => t.name)
            .IsRequired()
            .HasMaxLength(50);

        // Table & Column Mappings
        this.ToTable("Main");
        this.Property(t => t.ID).HasColumnName("ID");
        this.Property(t => t.name).HasColumnName("name");
    }
}

// Main.cs
public partial class Main
{
    public long ID { get; set; }
    public string name { get; set; }
}

// mainContext.cs
public partial class mainContext : DbContext
{
    static mainContext()
    {
        Database.SetInitializer<mainContext>(null);
    }

    public mainContext()
        : base("Name=mainContext")
    {
    }

    public DbSet<Main> Mains { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new MainMap());
    }
}
Now I'm trying to get the records from the database:
mainContext context = new mainContext();
context.Mains.Load();
Now I can comfortably use context.Mains.Local for different purposes (actually, I bind it to a ListView's ItemsSource).
The problem is that the context.Mains.Load() line executes for about 2.7 seconds. I think that is too much time for retrieving about 100 records from a simple database. Then again, I'm a newcomer to databases, so maybe I'm wrong and 2.7 seconds is a rather suitable period of time. My CPU is an Intel i3-3220 (2x3.30 GHz), and Entity Framework's version is 6.0.
Maybe my Code First model is poor, or maybe EF doesn't provide high performance, or maybe there is no need to call the Load() method to obtain the records (but if I don't call it, context.Mains.Local is empty).
So, how can I increase the performance of getting the records from the database?
Any help and hints will be appreciated.
I ran some tests with both SQLite and SQL Server. On my laptop (Core i7-2630QM 2.00 GHz & Win7 64-bit) the load time for both was ~1.5 sec.
Then I tried to warm it up with something like
context.Database.Exists();
and the time dropped to ~700 ms for both.
I used the "Prefer 32-bit" and "Optimize code" options in the Build tab of the project properties; these options produced the best results.
Try these and see if the load time changes.
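If the warm-up helps, it can be kicked off in the background at application start so the first real query doesn't pay the cost. A sketch:

// Fire-and-forget warm-up at startup; creating a context and touching the
// database triggers EF's one-time model compilation and view generation.
Task.Run(() =>
{
    using (var context = new mainContext())
    {
        context.Database.Exists();
    }
});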

DateCreated or Modified Column - Entity Framework or using triggers on SQL Server

After reading the question in the attached link, I got a sense of how to set DateCreated and DateModified columns in Entity Framework and use them in my application. In the old SQL way, though, the trigger approach is more popular because it is more secure from a DBA point of view.
So, any advice on which way is best practice? Should it be set in Entity Framework for the purpose of application integrity? Or should I use a trigger, as it makes more sense from a data-security point of view? Or is there a way to compose a trigger in Entity Framework? Thanks.
EF CodeFirst: Rails-style created and modified columns
BTW, even though it doesn't matter much, I am building this app using ASP.NET MVC C#.
Opinion: Triggers are like hidden behaviour; unless you go looking for them, you usually won't realise they are there. I also like to keep the DB as 'dumb' as possible when using EF, since I'm using EF precisely so my team won't need to maintain SQL code.
For my solution (mix of ASP.NET WebForms and MVC in C# with Business Logic in another project that also contains the DataContext):
I recently had a similar issue, and although my situation was more complex (Database First, so it required a custom TT file), the solution is mostly the same.
I created an interface:
public interface ITrackableEntity
{
    DateTime CreatedDateTime { get; set; }
    int CreatedUserID { get; set; }
    DateTime ModifiedDateTime { get; set; }
    int ModifiedUserID { get; set; }
}
Then I just implemented that interface on any entities I needed to (because my solution was Database First, I updated the TT file to check whether the table had those four columns and, if so, added the interface to the output).
UPDATE: here are my changes to the TT file, where I updated the EntityClassOpening() method:
public string EntityClassOpening(EntityType entity)
{
    var trackableEntityPropNames = new string[] { "CreatedUserID", "CreatedDateTime", "ModifiedUserID", "ModifiedDateTime" };
    var propNames = entity.Properties.Select(p => p.Name);
    var isTrackable = trackableEntityPropNames.All(s => propNames.Contains(s));
    var inherits = new List<string>();

    if (!String.IsNullOrEmpty(_typeMapper.GetTypeName(entity.BaseType)))
    {
        inherits.Add(_typeMapper.GetTypeName(entity.BaseType));
    }

    if (isTrackable)
    {
        inherits.Add("ITrackableEntity");
    }

    return string.Format(
        CultureInfo.InvariantCulture,
        "{0} {1}partial class {2}{3}",
        Accessibility.ForType(entity),
        _code.SpaceAfter(_code.AbstractOption(entity)),
        _code.Escape(entity),
        _code.StringBefore(" : ", String.Join(", ", inherits)));
}
The only thing left was to add the following to my partial DataContext class:
public override int SaveChanges()
{
    // fix trackable entities
    var trackables = ChangeTracker.Entries<ITrackableEntity>();
    if (trackables != null)
    {
        // added
        foreach (var item in trackables.Where(t => t.State == EntityState.Added))
        {
            item.Entity.CreatedDateTime = System.DateTime.Now;
            item.Entity.CreatedUserID = _userID;
            item.Entity.ModifiedDateTime = System.DateTime.Now;
            item.Entity.ModifiedUserID = _userID;
        }

        // modified
        foreach (var item in trackables.Where(t => t.State == EntityState.Modified))
        {
            item.Entity.ModifiedDateTime = System.DateTime.Now;
            item.Entity.ModifiedUserID = _userID;
        }
    }

    return base.SaveChanges();
}
Note that I saved the current user ID in a private field on the DataContext class each time I created it.
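For completeness, a sketch of how that user ID might be captured (the class name and constructor signature are assumptions, not part of the original code):

public partial class MyDataContext : DbContext
{
    private readonly int _userID;

    public MyDataContext(int userID)
    {
        // Captured once per context instance and used in SaveChanges() above.
        _userID = userID;
    }
}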
As for DateCreated, I would just add a default constraint on that column set to SYSDATETIME() that takes effect when inserting a new row into the table.
For DateModified, personally, I would probably use triggers on those tables (a sketch follows below).
In my opinion, the trigger approach:
makes things easier; I don't have to remember to set DateModified every time I save an entity
makes things "safer", in that DateModified is also applied if someone finds a way around my application to modify data in the database directly (using e.g. Access or Excel)
Entity Framework 6 has interceptors, which can be used to set created and modified dates. I wrote an article on how to do it: http://marisks.net/2016/02/27/entity-framework-soft-delete-and-automatic-created-modified-dates/
I agree with marc_s - much safer to have the trigger(s) in the database. In my company's databases, I require each table to have Date_Modified and Date_Created fields, and I even have a utility function to automatically create the necessary triggers.
When using this with Entity Framework, I found I needed to use the [DatabaseGenerated] annotation on my POCO classes:
[Column(TypeName = "datetime2")]
[DatabaseGenerated(DatabaseGeneratedOption.Computed)]
public DateTime? Date_Modified { get; set; }
[Column(TypeName = "datetime2")]
[DatabaseGenerated(DatabaseGeneratedOption.Computed)]
public DateTime? Date_Created { get; set; }
I was attempting to use stored procedure mapping on an entity, and EF was creating @Date_Modified and @Date_Created parameters on my insert/update sprocs, producing the error:
Procedure or function has too many arguments specified.
Most examples show using [NotMapped], which will allow select/insert to work, but then those fields will not show up when the entity is loaded!
Alternatively, you can just make sure any sprocs contain the @Date_Modified and @Date_Created parameters, but this goes against the design of using triggers in the first place.

TooManyRowsAffectedException with encrypted triggers

I'm using NHibernate to update two columns in a table that has three encrypted triggers on it. The triggers are not owned by me and I cannot make changes to them, so unfortunately I can't SET NOCOUNT ON inside them.
Is there another way to get around the TooManyRowsAffectedException that is thrown on commit?
Update 1
So far the only way I've gotten around the issue is to step around the .Save routine with
var query = session.CreateSQLQuery("update Orders set Notes = :Notes, Status = :Status where OrderId = :Order");
query.SetString("Notes", orderHeader.Notes);
query.SetString("Status", orderHeader.OrderStatus);
query.SetInt32("Order", orderHeader.OrderHeaderId);
query.ExecuteUpdate();
It feels dirty and is not easy to extend, but it doesn't crater.
We had the same problem with a 3rd party Sybase database. Fortunately, after some digging into the NHibernate code and brief discussion with the developers, it seems that there is a straightforward solution that doesn't require changes to NHibernate. The solution is given by Fabio Maulo in this thread in the NHibernate developer group.
To implement this for Sybase we created our own implementation of IBatcherFactory, inherited from NonBatchingBatcher and overrode the AddToBatch() method to remove the call to VerifyOutcomeNonBatched() on the provided IExpectation object:
public class NonVerifyingBatcherFactory : IBatcherFactory
{
    public virtual IBatcher CreateBatcher(ConnectionManager connectionManager, IInterceptor interceptor)
    {
        return new NonBatchingBatcherWithoutVerification(connectionManager, interceptor);
    }
}

public class NonBatchingBatcherWithoutVerification : NonBatchingBatcher
{
    public NonBatchingBatcherWithoutVerification(ConnectionManager connectionManager, IInterceptor interceptor)
        : base(connectionManager, interceptor)
    {}

    public override void AddToBatch(IExpectation expectation)
    {
        IDbCommand cmd = CurrentCommand;
        ExecuteNonQuery(cmd);
        // Removed the following line
        //expectation.VerifyOutcomeNonBatched(rowCount, cmd);
    }
}
To do the same for SQL Server you would need to inherit from SqlClientBatchingBatcher, override DoExecuteBatch() and remove the call to VerifyOutcomeBatched() from the Expectations object:
public class NonBatchingBatcherWithoutVerification : SqlClientBatchingBatcher
{
    public NonBatchingBatcherWithoutVerification(ConnectionManager connectionManager, IInterceptor interceptor)
        : base(connectionManager, interceptor)
    {}

    protected override void DoExecuteBatch(IDbCommand ps)
    {
        log.DebugFormat("Executing batch");
        CheckReaders();
        Prepare(currentBatch.BatchCommand);

        if (Factory.Settings.SqlStatementLogger.IsDebugEnabled)
        {
            Factory.Settings.SqlStatementLogger.LogBatchCommand(currentBatchCommandsLog.ToString());
            currentBatchCommandsLog = new StringBuilder().AppendLine("Batch commands:");
        }

        int rowsAffected = currentBatch.ExecuteNonQuery();
        // Removed the following line
        //Expectations.VerifyOutcomeBatched(totalExpectedRowsAffected, rowsAffected);

        currentBatch.Dispose();
        totalExpectedRowsAffected = 0;
        currentBatch = new SqlClientSqlCommandSet();
    }
}
Now you need to inject your new classes into NHibernate. There are two ways to do this that I am aware of:
Provide the name of your IBatcherFactory implementation in the adonet.factory_class configuration property (see the sketch below)
Create a custom driver that implements the IEmbeddedBatcherFactoryProvider interface
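For the first option, the property can be set programmatically as well as in the XML configuration. A sketch using NHibernate's Configuration API:

// Point NHibernate at the custom batcher factory defined above.
var cfg = new NHibernate.Cfg.Configuration();
cfg.SetProperty("adonet.factory_class",
    typeof(NonVerifyingBatcherFactory).AssemblyQualifiedName);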
Given that we already had a custom driver in our project to work around Sybase 12 ANSI string problems it was a straightforward change to implement the interface as follows:
public class DriverWithCustomBatcherFactory : SybaseAdoNet12ClientDriver, IEmbeddedBatcherFactoryProvider
{
    public Type BatcherFactoryClass
    {
        get { return typeof(NonVerifyingBatcherFactory); }
    }

    //...other driver code for our project...
}
The driver can be configured by providing the driver name in the connection.driver_class configuration property. We wanted to use Fluent NHibernate, where it can be done as follows:
public class SybaseConfiguration : PersistenceConfiguration<SybaseConfiguration, SybaseConnectionStringBuilder>
{
    SybaseConfiguration()
    {
        Driver<DriverWithCustomBatcherFactory>();
        AdoNetBatchSize(1); // This is required to use our new batcher
    }

    /// <summary>
    /// The dialect to use
    /// </summary>
    public static SybaseConfiguration SybaseDialect
    {
        get
        {
            return new SybaseConfiguration()
                .Dialect<SybaseAdoNet12Dialect>();
        }
    }
}
and when creating the session factory we use this new class as follows:
var sf = Fluently.Configure()
    .Database(SybaseConfiguration.SybaseDialect.ConnectionString(_connectionString))
    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<MyEntity>())
    .BuildSessionFactory();
Finally you need to set the adonet.batch_size property to 1 to ensure that your new batcher class is used. In Fluent NHibernate this is done using the AdoNetBatchSize() method in a class that inherits from PersistenceConfiguration (see the SybaseConfiguration class constructor above for an example of this).
er... you might be able to decrypt them...
Edit: if you can't change the code, decrypt, or disable the triggers, then you have no code options on the SQL Server side.
However, you could try the "disallow results from triggers" option, which is OK for SQL 2005 and SQL 2008 but will be removed in later versions. I don't know if it suppresses rowcount messages, though.
Setting the "Disallow Results from Triggers" option to 1 worked for us (the default is 0).
Note that this option will not be available in a future release of Microsoft SQL Server, but after it is no longer available it will behave as if it were set to 1. So setting it to 1 now fixes the problem and also gives you the same behavior as future releases.
