Exception Thrown When Generating Migration for JSONB Column with Npgsql and EF Core

The following exception is thrown when I try to generate a migration with Npgsql and EF Core:
The property 'MyClass.MyProperty' is of type 'IDictionary<string, object>' which is not supported by current database provider. Either change the property CLR type or ignore the property using the '[NotMapped]' attribute or by using 'EntityTypeBuilder.Ignore' in 'OnModelCreating'.
Just for debugging purposes, I added the following HasConversion code (using System.Text.Json), which got rid of the exception:
eb.Property(a => a.MyProperty).HasConversion(
    b => JsonSerializer.Serialize(b, new JsonSerializerOptions()),
    b => JsonSerializer.Deserialize<IDictionary<string, object>>(b, new JsonSerializerOptions())
).HasColumnType("jsonb");
I could replace the two placeholder conversion functions with Newtonsoft JSON functions, but that doesn't seem like the best solution. Does the Npgsql EF Core provider have a built-in converter for EF Core's .HasConversion?
class MyClass
{
    public Dictionary<string, object> MyProperty { get; set; }
}

Related

How do I write an EF.Functions extension method?

I see that EF Core 2.0 has an EF.Functions property (see the EF Core 2.0 announcement) which can be used by EF Core or providers to define methods that map to database functions or operators, so that they can be invoked in LINQ queries. It includes a LIKE method that gets sent to the database.
But I need a different method, SOUNDEX(), which is not included. How do I write such a method so that the function call is passed to the database, the way the DbFunction attribute did in EF6? Or do I need to wait for Microsoft to implement it? Essentially, I need to generate something like:
SELECT * FROM Customer WHERE SOUNDEX(lastname) = SOUNDEX(@param)
Adding a new scalar method to EF.Functions is easy: you simply define an extension method on the DbFunctions class. However, providing the SQL translation is hard and requires digging into EF Core internals.
EF Core 2.0 also introduces a much simpler approach, explained in the Database scalar function mapping section of the New features in EF Core 2.0 documentation topic.
According to that, the easiest approach is to add a static method to your DbContext-derived class and mark it with the DbFunction attribute. For example:
public class MyDbContext : DbContext
{
    // ...
    [DbFunction("SOUNDEX")]
    public static string Soundex(string s) => throw new Exception();
}
and use something like this:
string param = ...;
MyDbContext db = ...;
var query = db.Customers
    .Where(e => MyDbContext.Soundex(e.LastName) == MyDbContext.Soundex(param));
You can declare such static methods in a different class, but then you need to register them manually using the HasDbFunction fluent API, for example:
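A minimal sketch of that manual registration (the MyDbFunctions class name is made up for illustration; the method body only throws because the call is ever only translated to SQL):
// Sketch: Soundex declared outside the DbContext, so it must be registered manually.
public static class MyDbFunctions
{
    [DbFunction("SOUNDEX")]
    public static string Soundex(string s) => throw new NotSupportedException();
}

// Inside your DbContext:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.HasDbFunction(
        typeof(MyDbFunctions).GetMethod(nameof(MyDbFunctions.Soundex)));
}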
EF Core 3.0 changed this process a little; see https://learn.microsoft.com/en-us/ef/core/what-is-new/ef-core-3.0/breaking-changes#udf-empty-string
Example of adding CHARINDEX in a partial context class:
public partial class MyDbContext
{
    [DbFunction("CHARINDEX")]
    public static int? CharIndex(string toSearch, string target) => throw new Exception();

    partial void OnModelCreatingPartial(ModelBuilder modelBuilder)
    {
        modelBuilder
            .HasDbFunction(typeof(MyDbContext).GetMethod(nameof(CharIndex)))
            .HasTranslation(
                args => SqlFunctionExpression.Create("CHARINDEX", args, typeof(int?), null));
    }
}
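A possible usage sketch, assuming the same Customers set and LastName property as in the SOUNDEX example above:
using (var db = new MyDbContext())
{
    // Find customers whose last name contains "smith" (CHARINDEX returns a 1-based position).
    var hits = db.Customers
        .Where(c => MyDbContext.CharIndex("smith", c.LastName) > 0)
        .ToList();
}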

How do I map a column to uppercase in .NET 4.5 C# Entity Framework 6 using both Oracle and SQL Server?

I'm using C#, .NET 4.5 and Entity Framework 6 in my project. It uses both Oracle and SQL Server, depending on the installation at the client.
The approach is database-first, as this database existed already by the time we decided to change the ORM from NHibernate to Entity Framework 6.
The mapping looks like this:
ToTable(schema + ".Motorista");
Property(x => x.Criacao).HasColumnName("criacao").IsOptional();
The table and column names are all in PascalCase in the mapping, which works fine with SQL Server. In Oracle, however, all the names are uppercase, which causes an error:
ORA-00942: table or view does not exist
If I manually make the names uppercase, it works fine on Oracle, but then it breaks compatibility with SQL Server.
How can I tell Entity Framework to uppercase all the names when using Oracle?
Can I use conventions in this scenario?
When the database names (tables and columns) match the class and property names in the model, it's easy to introduce custom code-first conventions.
In the context's OnModelCreating override you can add these lines to define how table and column names are derived from the class and property names, respectively:
modelBuilder.Types()
    .Configure(c => c.ToTable(c.ClrType.Name.ToUpper(), schema));
modelBuilder.Properties()
    .Configure(c => c.HasColumnName(c.ClrPropertyInfo.Name.ToUpper()));
Of course, you should only do this when connecting to Oracle, for instance by checking a global flag like OnOracle that you set at application start-up based on:
ConfigurationManager.ConnectionStrings[0].ProviderName == "System.Data.OracleClient"
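Putting the two together, a rough sketch of the conditional setup (OnOracle and schema are the placeholders mentioned above, not part of any framework API):
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    if (OnOracle) // e.g. set at start-up from the connection string's provider name
    {
        // Uppercase table and column names only for Oracle.
        modelBuilder.Types()
            .Configure(c => c.ToTable(c.ClrType.Name.ToUpper(), schema));
        modelBuilder.Properties()
            .Configure(c => c.HasColumnName(c.ClrPropertyInfo.Name.ToUpper()));
    }
    base.OnModelCreating(modelBuilder);
}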
Check the providerName attribute in the named connection string to see whether your connection is for SQL Server or Oracle (or add a redundant value in the appSettings section of the configuration). Then do what @AaronLS suggested and add a helper method to case your names correctly and apply any additional formatting. The helper method should check the database type as mentioned above and apply or skip casing/formatting accordingly.
Here is an example:
public class MyDbContext : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new SomeMappedTypeMapper());
        base.OnModelCreating(modelBuilder);
    }
}

public class SomeMappedType
{
    public int SomeMappedColumnId { get; set; }
    public string SomeMappedColumn { get; set; }
}

public class SomeMappedTypeMapper : EntityTypeConfiguration<SomeMappedType>
{
    public SomeMappedTypeMapper()
    {
        this.HasKey(x => x.SomeMappedColumnId);
        this.ToTable("SomeMappedType"); // If needed, apply the same technique as used in the column name extension
        this.Property(x => x.SomeMappedColumnId).HasColumnNameV2("SomeMappedColumnId").HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
        this.Property(x => x.SomeMappedColumn).HasColumnNameV2("SomeMappedColumn");
    }
}

public static class TypeHelper
{
    private static bool isOracle;

    static TypeHelper()
    {
        isOracle = System.Configuration.ConfigurationManager
            .ConnectionStrings["yourDbConnectionName"].ProviderName
            .IndexOf("oracle", StringComparison.OrdinalIgnoreCase) >= 0;
    }

    public static PrimitivePropertyConfiguration HasColumnNameV2(
        this PrimitivePropertyConfiguration property, string columnName)
    {
        if (isOracle)
            return property.HasColumnName(columnName.ToUpper());
        return property.HasColumnName(columnName);
    }
}
This link is for EF Core, but it may help you: it converts names with ToUpper, although you can change it to ToLower, and you can also use the Humanizer NuGet package for other kinds of casing.
Import that file into your project and use it like this:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    base.OnModelCreating(modelBuilder);
    modelBuilder.ToUpperCaseTables();
    modelBuilder.ToUpperCaseColumns();
    // ...
}
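The linked file is not reproduced above, but a minimal sketch of what such extension methods might look like, assuming EF Core 3.x-style relational metadata APIs and the method names used in the snippet above:
public static class ModelBuilderCasingExtensions
{
    public static void ToUpperCaseTables(this ModelBuilder modelBuilder)
    {
        foreach (var entity in modelBuilder.Model.GetEntityTypes())
        {
            // Skip types that are not mapped to a table.
            var tableName = entity.GetTableName();
            if (tableName != null)
                entity.SetTableName(tableName.ToUpper());
        }
    }

    public static void ToUpperCaseColumns(this ModelBuilder modelBuilder)
    {
        foreach (var entity in modelBuilder.Model.GetEntityTypes())
            foreach (var property in entity.GetProperties())
                property.SetColumnName(property.GetColumnName().ToUpper());
    }
}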
Consider a table called "Person" with a column called "Name" in SQL Server, where in Oracle the table is called "PERSON" with a column called "NAME".
We were able to use our models generated against SQL Server on our Oracle database by adding the following code to the DbContext class's OnModelCreating:
modelBuilder.Entity<Person>()
    .HasEntitySetName("Person")
    .ToTable("PERSON");

modelBuilder.Entity<Person>()
    .Property(t => t.Name)
    .HasColumnName("NAME");

How to switch between DatabaseGeneratedOption.Identity, Computed and None at runtime without having to generate empty DbMigrations

I am migrating a legacy database to a new database which we need to access and "manage" (as oxymoronic as it might sound) primarily through Entity Framework Code-First.
We are using MS SQL Server 2014.
The legacy database contained some tables with computed columns. Typical GUID and DateTime stuff.
Technically speaking, these columns did not have a computed column specification, but rather were given a default value with NEWID() and GETDATE().
We all know that it is very easy to configure the DbContext to deal with those properties as follows:
modelBuilder.Entity<Foo>()
    .Property(t => t.Guid)
    .HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed);

modelBuilder.Entity<Bar>()
    .Property(t => t.DTS)
    .HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed);
The above instructs Entity Framework not to submit any supplied values for such properties during INSERTs and UPDATEs.
But now we need to allow importing legacy records while keeping the OLD values, including the PRIMARY KEY, which is marked as IDENTITY.
This means we would have to set the Id, Guid and DTS properties to DatabaseGeneratedOption.None while inserting those records.
For the case of Id, we would have to somehow execute SET IDENTITY_INSERT ... ON/OFF within the connection session.
And we want to do this importing process via Code-First as well.
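For reference, a minimal sketch of how SET IDENTITY_INSERT could be issued on the same connection in EF6; ImportContext is a hypothetical context whose Id is mapped with DatabaseGeneratedOption.None (which is exactly the mapping this question is trying to switch to at runtime), and dbo.Foos is the table from the migration below:
// Sketch only: assumes ImportContext maps Id with DatabaseGeneratedOption.None.
using (var context = new ImportContext())
using (var tx = context.Database.BeginTransaction())
{
    // The explicit transaction keeps everything on one connection/session.
    context.Database.ExecuteSqlCommand("SET IDENTITY_INSERT dbo.Foos ON");
    context.Foos.Add(new Foo { Id = 42 /* legacy key */ });
    context.SaveChanges();
    context.Database.ExecuteSqlCommand("SET IDENTITY_INSERT dbo.Foos OFF");
    tx.Commit();
}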
If I "temporarily" modify the model and set those properties to DatabaseGeneratedOption.None after the database has been created, we get the typical:
The model backing the context has changed since the database was created. Consider using Code First Migrations to update the database.
I understand that we could generate an empty coded migration with -IgnoreChanges so as to "establish" this latest version of the context, but that wouldn't be an acceptable strategy, as we would have to run empty migrations back and forth solely for this purpose.
Half an answer:
We have considered giving these properties nullable types, i.e.
public class Foo
{
    // ...
    public Guid? Guid { get; set; }
}

public class Bar
{
    // ...
    public DateTime? DTS { get; set; }
}
The default values would then be handled in an initial DbMigration:
CreateTable(
    "dbo.Foos",
    c => new
    {
        Id = c.Int(nullable: false, identity: true),
        Guid = c.Guid(nullable: false, defaultValueSql: "NEWID()"),
    })
    .PrimaryKey(t => t.Id);

CreateTable(
    "dbo.Bars",
    c => new
    {
        Id = c.Int(nullable: false, identity: true),
        DTS = c.DateTime(nullable: false, defaultValueSql: "GETDATE()"),
    })
    .PrimaryKey(t => t.Id);
The Question:
But the question remains: Is there a way to switch between DatabaseGeneratedOption.Identity, DatabaseGeneratedOption.Computed and DatabaseGeneratedOption.None at runtime?
At the very least, how could we turn DatabaseGeneratedOption.Identity on/off at runtime?
A certain amount of the context's configuration is always going to depend on the runtime environment, for example proxy generation and validation. As such, runtime configuration of the Entity Framework DbContext is something I leverage quite heavily.
Although I've never used this approach to switch the configuration of the context on a per use-case basis, I see no reason why this would not work.
In its simplest form, this can be achieved by having a set of EntityTypeConfiguration classes for each environment. Each configuration set is then wired to the DbContext on a per-environment basis. Again, in its simplest form this could be achieved by having a DbContext type per environment. In your case, this would be per use-case.
Less naively, I usually encapsulate the configuration of the context in an environment-specific unit of work. For example, the unit of work for an ASP.NET environment has an underlying DbContext configured to delegate validation to the web framework, as well as to turn off proxy generation to prevent serialisation issues. I imagine this approach would be similarly useful for your problem.
For example (using brute force code):
// Foo configuration which enforces computed columns
public class FooConfiguration : EntityTypeConfiguration<Foo>
{
    public FooConfiguration()
    {
        Property(p => p.DateTime).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed);
        Property(p => p.Guid).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed);
    }
}

// Foo configuration that allows computed columns to be overridden
public class FooConfiguration2 : EntityTypeConfiguration<Foo>
{
    public FooConfiguration2()
    {
        Property(p => p.DateTime).HasDatabaseGeneratedOption(DatabaseGeneratedOption.None);
        Property(p => p.Guid).HasDatabaseGeneratedOption(DatabaseGeneratedOption.None);
    }
}

// DbContext that enforces computed columns
public class MyContext : DbContext
{
    public DbSet<Foo> Foos { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new FooConfiguration());
    }
}

// DbContext that allows computed columns to be overridden
public class MyContext2 : DbContext
{
    public DbSet<Foo> Foos { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new FooConfiguration2());
    }
}
This can obviously be tidied up; we usually use a combination of factory and strategy patterns to encapsulate the creation of a runtime-specific context. In combination with a DI container, this allows the correct set of configuration classes to be injected on a per-environment basis.
Example usage:
[Fact]
public void CanConfigureContextAtRuntime()
{
    // Enforce computed columns
    using (var context = new MyContext())
    {
        var foo1 = new Foo();
        context.Foos.Add(foo1);
        context.SaveChanges();
    }

    // Allow overridden computed columns
    using (var context = new MyContext2())
    {
        var foo2 = new Foo { DateTime = DateTime.Now.AddYears(-3) };
        context.Foos.Add(foo2);
        context.SaveChanges();
    }

    // etc
}
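A bare-bones sketch of the factory idea mentioned above (the enum and class names are illustrative, not from the original answer):
public enum ModelMode
{
    EnforceComputedColumns,
    AllowLegacyValues
}

public static class ContextFactory
{
    // Returns a context whose model matches the requested use case.
    public static DbContext Create(ModelMode mode) =>
        mode == ModelMode.AllowLegacyValues
            ? (DbContext)new MyContext2()
            : new MyContext();
}
Callers that need the typed set can use context.Set<Foo>(), or both contexts can share a common base class that exposes the DbSet.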

Use Views in Entity Framework

I am using Entity Framework on a project, but I am finding the large queries, especially those that use LEFT JOINs, very tedious to write and hard to debug.
Is it common, or accepted practice, to create views in the database and then use those views within Entity Framework? Or is this bad practice?
The question is not very clear, but there is no absolute right or wrong in software; it all depends on your case.
There is native support for views in EF Core, but there is no native support for views in EF 6, at least not in the current latest version, 6.3. There is, however, a workaround. In database-first you would create your view via SQL as usual, and when you reverse-engineer your database EF treats the view as a normal model and lets you consume it as you would a regular table. In code-first it is a bit more tedious: you create a POCO class that maps to the columns of your view. Note that you need to include an Id in this POCO class. For example:
public class ViewPOCO
{
    [Key]
    [DatabaseGenerated(DatabaseGeneratedOption.None)]
    public Guid Id { get; set; }
    public string ViewColumn1 { get; set; }
    // ... etc.
}
You would add this POCO class to your DbContext:
public class MyDbContext : DbContext
{
    public virtual DbSet<ViewPOCO> MyView { get; set; }
}
Now add a migration as usual through the Package Manager Console:
Add-Migration <MigrationName> <ConnectionString and provider Name>
In the generated migration's Up and Down methods you will notice that EF treats your model as a table. Clear all of that and write your own SQL: create or alter the view in Up, and drop it in Down, using the Sql function.
public override void Up()
{
    Sql("CREATE OR ALTER VIEW <ViewName> AS SELECT NEWID() AS Id, ...");
}

public override void Down()
{
    Sql("DROP VIEW <ViewName>");
}
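Once the view exists, the POCO can be queried like any other set (read-only usage is assumed here, with the names from the example above):
using (var db = new MyDbContext())
{
    // The view is exposed as a regular DbSet and can be composed in LINQ.
    var rows = db.MyView
        .Where(v => v.ViewColumn1 != null)
        .ToList();
}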
First create your view.
Update your .edmx file.
Then use it like this:
using (ManishTempEntities obj = new ManishTempEntities())
{
    var a = obj.View_1.ToList();
}

XmlSerializer stopped working after updates

I'm using XmlSerializer. I've had no problems with it until now. I updated Silverlight from 4 to 5 and at the same time also updated the WCF RIA Services from v1 SP1 to v1 SP2. Now the following line gives me an error.
XmlSerializer s = new XmlSerializer(typeof(MyCustomObject));
The error is:
System.InvalidOperationException: System.ServiceModel.DomainServices.Client.EntityConflict cannot be serialized because it does not have a parameterless constructor.
The object I'm using (MyCustomObject in the sample) has not changed in any way so I'm starting to think it's either SL5 or the new RIA Services that is breaking my code. I didn't find any breaking changes document or mentions that this could happen. I don't know why it has a problem with EntityConflict since I'm not using any entities within my object.
Has anyone seen an error like this and/or know how to solve it?
UPDATE!
The last property the error message mentions before EntityConflict is an Entity. I think that makes a difference, but it was working before. I'd also like to know why the serializer already tries to serialize the object in the constructor.
public static XmlSerializer GetEntityXmlSerializer<TEntity>()
    where TEntity : Entity
{
    XmlAttributes ignoreAttribute = new XmlAttributes()
    {
        XmlIgnore = true,
    };

    // Use the Entity base class here; if you use the concrete
    // implementation type you will get the error.
    Type entityType = typeof(Entity);

    var xmlAttributeOverrides = new XmlAttributeOverrides();
    xmlAttributeOverrides.Add(entityType, "EntityConflict", ignoreAttribute);
    xmlAttributeOverrides.Add(entityType, "EntityState", ignoreAttribute);

    return new XmlSerializer(typeof(TEntity), xmlAttributeOverrides);
}
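The helper above suppresses the Entity base class's EntityConflict and EntityState properties via XmlAttributeOverrides. A usage sketch, assuming MyCustomObject derives from Entity:
XmlSerializer serializer = GetEntityXmlSerializer<MyCustomObject>();
using (var writer = new StringWriter())
{
    serializer.Serialize(writer, new MyCustomObject());
    string xml = writer.ToString();
}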
I am not sure why this would be happening; RIA Services entities are not XML-serializable objects, and the entities themselves are not decorated with the [Serializable] attribute. Have you added partial classes on the client side that decorate the entities with [Serializable], or modified the code generation in some way?
I got around this problem by using intermediary serializable POCO objects which were copies of my custom objects (which inherited from Entity). The POCO objects did not inherit from Entity; I just updated their values from the original Entity objects. They then serialized quite nicely. Of course, when you deserialize you need to update your Entity objects from the POCO objects.
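A rough sketch of that intermediary-POCO approach; the DTO type and its properties are hypothetical, not taken from the original answer:
// Plain serializable copy of the Entity-derived MyCustomObject.
public class MyCustomObjectDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static string Serialize(MyCustomObject source)
{
    // Copy values off the Entity-derived object and serialize the plain DTO instead.
    var dto = new MyCustomObjectDto { Id = source.Id, Name = source.Name };
    using (var writer = new StringWriter())
    {
        new XmlSerializer(typeof(MyCustomObjectDto)).Serialize(writer, dto);
        return writer.ToString();
    }
}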
