Adding new methods to LINQ to Entities - sql-server

Is there any way to define the SQL conversion for additional functions in LINQ to Entities?
For example:
myQuery.Where(entity => entity.Contains("foo", SearchFlags.All))
Ideally I am looking for something that doesn't require editing and building a new version of EntityFramework.dll directly. Is there any way to add extension methods to Entity Framework that can support SQL generation?
So far I have a template which would represent the method I need to replace for LINQ to Entities:
public static bool Contains(this object source, string searchTerms, SearchFlags flags)
{
    return true;
}
Of course this causes the error:
LINQ to Entities does not recognize the method 'Boolean
CONTAINS(System.Object, System.String, SearchFlags)' method, and this method
cannot be translated into a store expression.
To be clear, I don't want to do:
myQuery.AsEnumerable().Where(entity => entity.Contains("foo", SearchFlags.All))
Because I want to be able to execute code in SQL space and not return all the entities manually.
I also cannot use the .ToString() of the IQueryable and execute it manually because I need Entity Framework to populate the objects from several .Include joins.

I don't understand your question completely. However, if your problem is that you can't use your own methods or other LINQ to Objects methods, just use .AsEnumerable() and do the rest of the work through LINQ to Objects instead of LINQ to Entities:
myQuery.AsEnumerable().Where(entity => entity.Contains("foo", SearchFlags.All))
And if you need to use your myQuery several times somewhere else, first load it into memory, then use it as many times as you want:
var myQuery = from d in context.myEntities
              select d;
myQuery.Load();
// ...
var myOtherQuery = from d in context.myEntities.Local
select d;
// Now any L2O method is supported...

I ended up doing the following (which works but is very far from perfect):
All my entities implement an IEntity interface which defines long Id { get; set; }.
I then add a redundant restriction, context.myEntities.Where(entity => entity.Id != 0); this is redundant since the identity seed starts at 1, but LINQ to Entities doesn't know that.
I then call .ToString() on the IQueryable after I have composed all my other query operators; since it is of type DbQuery<Entity>, this returns the SQL command text, on which I do a simple string replace to substitute my own restriction.
In order to get all the .Include(...) calls to work I actually execute two different SQL commands. There is no prettier way to tap into this, because query execution plan caching causes issues otherwise (even when disabled).
As a result my code looks like this:
public IQueryable<IEntity> MyNewFunction(IQueryable<IEntity> myQueryable, string queryRestriction)
{
    string rawSQL = myQueryable.Select(entity => entity.Id).ToString().Replace("[Extent1].Id <> 0", queryRestriction);
    List<long> ids = // now execute rawSQL and get the list of ids;
    return myQueryable.Where(entity => ids.Contains(entity.Id));
}
In short: other than executing the SQL manually, or running a similar SQL command and splicing the restriction into the generated command text as above, the only way to add your own methods to LINQ to Entities is to manually alter and build your own EntityFramework.dll from the EF6 source.
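A hedged variant of the same string-replacement idea is EF6's command interception API (IDbCommandInterceptor / DbInterception), which lets you rewrite the generated SQL just before it executes instead of running it yourself. The marker predicate and the CONTAINS restriction below are illustrative assumptions, not code from the workaround above:
using System.Data.Common;
using System.Data.Entity.Infrastructure.Interception;

public class SearchRewriteInterceptor : IDbCommandInterceptor
{
    public void ReaderExecuting(DbCommand command,
        DbCommandInterceptionContext<DbDataReader> interceptionContext)
    {
        // Swap the harmless dummy restriction for the real full-text restriction.
        command.CommandText = command.CommandText
            .Replace("[Extent1].[Id] <> 0", "CONTAINS([Extent1].[SearchText], 'foo')");
    }

    // The remaining IDbCommandInterceptor members are not needed for this sketch.
    public void ReaderExecuted(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext) { }
    public void NonQueryExecuting(DbCommand command, DbCommandInterceptionContext<int> interceptionContext) { }
    public void NonQueryExecuted(DbCommand command, DbCommandInterceptionContext<int> interceptionContext) { }
    public void ScalarExecuting(DbCommand command, DbCommandInterceptionContext<object> interceptionContext) { }
    public void ScalarExecuted(DbCommand command, DbCommandInterceptionContext<object> interceptionContext) { }
}
// Registered once at startup: DbInterception.Add(new SearchRewriteInterceptor());
It still does not teach LINQ to Entities a new method; it only gives you a supported hook for the SQL-rewriting step.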


EF Core 3.1 Fail to query on Json Serialized Object

I used JSON serialization to store a list of ids in a field.
Model:
public class Video
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual IList<int> AllRelatedIds { get; set; }
}
Context:
modelBuilder.Entity<Video>(entity =>
{
    entity.Property(p => p.AllRelatedIds).HasConversion(
        v => JsonConvert.SerializeObject(v, new JsonSerializerSettings { NullValueHandling = NullValueHandling.Ignore }),
        v => JsonConvert.DeserializeObject<IList<int>>(v, new JsonSerializerSettings { NullValueHandling = NullValueHandling.Ignore })
    );
});
It works fine; adding, editing and deleting items is easy, and in the SQL database it is stored as JSON like
[11000,12000,13000]
Everything is fine, BUT as soon as I want to query on this list I get weird results.
Where:
_context.Set<Video>().Where(t => t.AllRelatedIds.Contains(11000)) returns nothing, even though if I return all records I can see that some of them do contain the value 11000 in AllRelatedIds.
Count:
_context.Set<Video>().Count(t => t.AllRelatedIds.Contains(11000)) throws: could not be translated. Either rewrite the query in a form that can be translated, or switch to client evaluation explicitly by inserting a call to either AsEnumerable(), AsAsyncEnumerable(), ToList(), or ToListAsync().
What's the matter with EF Core? I even tested t => t.AllRelatedIds.ToList().Contains(11000) but it made no difference.
What should I do? I don't want to add more tables; I have used this method hundreds of times, but it seems I have never queried on these fields before.
The JSON serialization/deserialization happens at the application level. EF Core serializes the IList<int> object to the value [11000,12000,13000] before sending it to the database for storage, and deserializes the value [11000,12000,13000] into an IList<int> object after retrieving it from the database. Nothing happens inside the database. Your database cannot operate on [11000,12000,13000] as a collection of numbers. To the database, it's a single piece of data.
If you try the following queries -
var videos = _context.Set<Video>().ToList();
var video = _context.Set<Video>().FirstOrDefault(p=> p.Id == 2);
you'll get the expected results; EF Core is doing its job perfectly.
The problem is, when you query something like -
_context.Set<Video>().Where(t=> t.AllRelatedIds.Contains(11000))
EF Core will fail to translate the t.AllRelatedIds.Contains(11000) part to SQL. EF Core can only serialize/deserialize it, because you told it to (and how). But as I said above, your database cannot operate on [11000,12000,13000] as a collection of integers. So EF Core cannot translate t.AllRelatedIds.Contains(11000) into anything meaningful to the database.
One solution is to fetch the list of all videos, so that EF Core deserializes AllRelatedIds into an IList<int>, and then apply LINQ to Objects on it -
var allVideos = _context.Set<Video>().ToList();
var selectedVideos = allVideos.Where(t=> t.AllRelatedIds.Contains(11000)).ToList();
But isn't fetching ALL videos each time unnecessary, overkill, or inefficient from a performance perspective? Yes, of course. But as the comments implied, your database design/usage approach has some flaws.
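If materializing every video into a list up front is a concern, a slightly lighter sketch (assuming the same Video entity) is to stream the rows and filter them client-side as they are deserialized; the database still returns every row, but you avoid buffering the whole table before filtering:
var selectedVideos = _context.Set<Video>()
    .AsEnumerable()                                // switch to client evaluation from here on
    .Where(v => v.AllRelatedIds.Contains(11000))   // runs on the deserialized IList<int>
    .ToList();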

BreezeJS and Stored Procedures

I am new to BreezeJS and would like to know if there are any examples of how to use Breeze with a SQL stored procedure?
We have some pretty complex queries and want to be able to call them via a stored procedure. Also, how can we tell Breeze that a column returned from a stored procedure is the key? We don't want to use views, because we need to pass variables to the stored procedure query each time we call it.
Thanks.
bob
OK, the basic idea would be to use Breeze's EntityQuery.withParameters method to pass parameters to a server-side method that calls your stored proc and returns an IEnumerable (i.e. the result of the stored proc).
If you want to treat this result as a collection of Breeze entities, then you will either need to shape the results into an existing entity type that Breeze knows about from metadata, OR manually create and add a new EntityType on the client that matches the shape you want to return.
You may want to look at the EntityQuery.toType method to force Breeze to convert your returned data into a specific EntityType, or you might alternatively want to use a "jsonResultsAdapter" to do the same thing.
Any data that is returned from a query and converted into a Breeze EntityType is automatically wrapped according to the "modelLibrary" in use, i.e. Knockout, Angular, Backbone, etc.
If Breeze is not able to construct entities out of the returned data, it will still be returned, but without any special processing to wrap the result.
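As a rough illustration of the server side of this (the action, stored procedure and entity names are assumptions, and an EFContextProvider-style _contextProvider as in the sample further down is assumed), an EF6 controller action can run the stored procedure and project its rows onto an entity type that already exists in the metadata:
[HttpGet]
public IQueryable<Customer> CustomersByRegion(string region)
{
    // SqlQuery maps each row returned by the (hypothetical) stored procedure onto
    // the existing Customer entity type, so Breeze can treat the results as
    // entities it already knows about from metadata.
    return _contextProvider.Context.Database
        .SqlQuery<Customer>("EXEC GetCustomersByRegion @region",
            new SqlParameter("@region", region))
        .AsQueryable();
}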
Hope this helps!
A sample of accessing SQL stored procedures from Breeze; the stored procedure (GoStoCde) has been imported by EF.
Breeze Controller :
[HttpGet]
public object GetCdes(long jprod, int jqte, long jorder)
{
    // output params
    var owrk = new System.Data.Objects.ObjectParameter("wkres", typeof(string));
    owrk.Value = "";
    var oeror = new System.Data.Objects.ObjectParameter("ceror", typeof(int));
    oeror.Value = 0;
    // invoke stored procedure
    var envocde = _contextProvider.Context.GoStoCde(jprod, jqte, jorder, owrk, oeror);
    // stored procedure results
    var cdeResult = new {
        dwork = owrk.Value,
        deror = oeror.Value,
    };
    return new { cdeResult };
}
Datacontext :
function reqLnecde(iprod, iqte, iorder, vxeror) {
    logger.log("commande en cours...");
    var query = EntityQuery.from("GetCdes")
        .withParameters({ jprod: iprod, jqte: iqte, jorder: iorder });
    return manager
        .executeQuery(query)
        .then(querySucceeded)
        .fail(queryFailed);
    function querySucceeded(data) {
        // stored procedure results
        vxeror(data.results[0]);
        // stored procedure object member value
        var keror = vxeror().cdeResult.deror;
        if (keror === 0) {
            logger.log("commande done");
        } else {
            logger.log("article absent");
        }
    }
    function queryFailed(data) {
        logger.log("commande failed"); // server errors
    }
}
If you prefer to return entities instead of plain objects, adjust the code accordingly; it should work as well.
Hope this helps!
Not really an answer here, just a few thoughts.
I think that the ability to return arbitrarily shaped data (read: viewmodels) from a stored procedure using withParameters would be an excellent way to integrate with something like dapper.net. Upon resubmission of said viewmodel you could use the overloads to reconstruct actual entities out of your viewmodel and save changes. The only problem I have, though, is that one would need a way to easily and automatically rerun the sproc and send the data back to the client...
I would like to know if this makes sense to anyone else and/or if anyone has done it already.
For this sort of scenario I would think that you would need to disable the tracking features provided by Breeze and/or write a smart enough data service that can handle the viewmodels, so that when the JavaScript on the client adds/removes/updates parts x, y, z of viewmodel a, it creates objects jx, jy, jz (j for JavaScript) and submits them back, saving as you go (the reverse of the idea mentioned above, in a way).
Thoughts?

Dapper Correct Object / Aggregate Mapping

I have recently started evaluating Dapper as a potential replacement for EF, since I was not too pleased with the SQL that was being generated and wanted more control over it. I have a question regarding mapping a complex object in my domain model. Let's say I have an object called Provider; Provider can contain several properties of type IEnumerable that should only be accessed by going through the parent Provider object (i.e. the aggregate root). I have seen similar posts that explain using QueryMultiple and a Map extension method, but I was wondering, if I wanted to write a method that brings back the entire object graph eager-loaded, whether Dapper would be able to do this in one fell swoop or whether it would need to be done piecemeal. As an example, let's say that my object looks something like the following:
public class AggregateRoot
{
    public int Id { get; set; }
    ... // simple properties
    public IEnumerable<Foo> Foos { get; set; }
    public IEnumerable<Bar> Bars { get; set; }
    public IEnumerable<FooBar> FooBars { get; set; }
    public SomeOtherEntity Entity { get; set; }
    ...
}
Is there a straightforward way of populating the entire object graph using Dapper?
I have a similar situation. I made my SQL return a flat result set, so that all the sub-objects come back. Then I use Query<> to map the full set. I'm not sure how big your sets are.
So something like this:
var cnn = new SqlConnection(connectionString);
var results = cnn.Query<AggregateRoot, Foo, Bar, FooBar, SomeOtherEntity, AggregateRoot>("sqlsomething",
    (ar, f, b, fb, soe) =>
    {
        ar.Foo = f;
        ar.Bars = b;
        ar.FooBar = fb;
        ar.someotherentity = soe;
        return ar;
    }, ....., splitOn: "").FirstOrDefault();
So the last type parameter in the Query call is the return type. For the splitOn, you have to think of the result as a flat array that the mapping will run through. You pick the first returned column for each new object so that the new mapping starts there.
example:
select ID, fooid, foo1, foo2, BarName, barsomething, foobarid, foobaritem1, foobaritem2 from blah
The splitOn would be "ID,fooid,BarName,foobarid". As it runs over the result set, it will map the properties that it can find in each object.
I hope that this helps, and that your return set is not too big to return flat.
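Should the flat result set become too wide, a rough sketch of the QueryMultiple approach mentioned in the question is another option; the connection string, SQL and column names below are assumptions:
using System.Data.SqlClient;
using System.Linq;
using Dapper;

public AggregateRoot LoadAggregate(string connectionString, int id)
{
    using (var cnn = new SqlConnection(connectionString))
    {
        cnn.Open();
        // One round trip, several result sets: read the root first, then attach the children.
        using (var multi = cnn.QueryMultiple(
            @"select * from AggregateRoot where Id = @id;
              select * from Foo    where AggregateRootId = @id;
              select * from Bar    where AggregateRootId = @id;
              select * from FooBar where AggregateRootId = @id;",
            new { id }))
        {
            var root = multi.Read<AggregateRoot>().Single();
            root.Foos = multi.Read<Foo>().ToList();
            root.Bars = multi.Read<Bar>().ToList();
            root.FooBars = multi.Read<FooBar>().ToList();
            return root;
        }
    }
}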

Entity Framework 4 - Trim Database Char(50) value for Name on legacy database

This should be simple, but I haven't found a way yet...
I have a legacy database with name fields that are stored as CHAR(50). When such a value is bound to a TextBox with a MaxLength of 50, you cannot insert anything, because the value is already padded out to the full 50 characters.
How can I make EF trim these values, or at least map to RTRIM(Column)?
I've tried using value converters, but the round trip causes issues with backspacing and spaces getting deleted between words.
Note that I only want to trim some fields, not all.
We are using SQL Server 2000 as the database. Soon to move to SQL 2008.
Thanks!
Entity Framework is only able to map directly to a table. You can also map to a view or a custom DB query, but in that case your entity will become read-only unless you also map the insert, delete and update operations to stored procedures.
I think the problem you describe is related to the ANSI_PADDING behavior. It can be turned off, but:
It is not recommended; in a future version of SQL Server it will be treated as an error.
It must be configured before you create the column.
You must handle trimming in the application. You can, for example, modify the T4 templates (if you use them) to trim string properties. Not sure how it works with WPF, but you can probably inherit from TextBox and override the Text property to trim values.
Another way is handling the ObjectMaterialized event on the ObjectContext and manually trimming string properties, but it can slow down the execution of your queries.
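A minimal sketch of that ObjectMaterialized approach, assuming a DbContext-based model (with a plain EF 4 ObjectContext you can subscribe to the event directly); property handling here is illustrative:
((IObjectContextAdapter)context).ObjectContext.ObjectMaterialized += (sender, e) =>
{
    // Trim every readable/writable string property of the entity that was just materialized.
    var stringProperties = e.Entity.GetType().GetProperties()
        .Where(p => p.PropertyType == typeof(string) && p.CanRead && p.CanWrite);
    foreach (var property in stringProperties)
    {
        var value = (string)property.GetValue(e.Entity, null);
        if (value != null)
        {
            property.SetValue(e.Entity, value.TrimEnd(), null);
        }
    }
};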
There's no way to do this with EF and SQL Server that I have found. I solved it with an extension method on IEnumerable<T> that calls TrimEnd() on each string property:
public static IEnumerable<TEntity> Trim<TEntity>(this IEnumerable<TEntity> collection)
{
    Type type = typeof(TEntity);
    IEnumerable<PropertyDescriptor> properties = TypeDescriptor.GetProperties(type).Cast<PropertyDescriptor>()
        .Where(p => p.PropertyType == typeof(string));
    foreach (TEntity entity in collection)
    {
        foreach (PropertyDescriptor property in properties)
        {
            string value = (string)property.GetValue(entity);
            if (!String.IsNullOrEmpty(value))
            {
                value = value.TrimEnd();
                property.SetValue(entity, value);
            }
        }
    }
    return collection;
}
Just make sure you call it after EF has retrieved the entities from the database. For example, after ToList():
public IEnumerable<Country> FetchCountries()
{
    return _context.Set<Country>().ToList().Trim();
}
Have a look at the available attributes for your database connection string. I had a similar issue with a Sybase Advantage database and solved it with its TrimTrailingSpaces attribute. Your database may support something similar.
Data Source=\\serverx\volumex\path\db.add;User ID=user;Password=pass;ServerType=REMOTE;TrimTrailingSpaces=TRUE;
http://www.connectionstrings.com/

Designing a generic DB utility class

Time and again I find myself creating a database utility class which has multiple functions which all do almost the same thing but treat the result set slightly differently.
For example, consider a Java class which has many functions which all look like this:
public void doSomeDatabaseOperation() {
    Connection con = DriverManager.getConnection("jdbc:mydriver", "user", "pass");
    try {
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT whatever FROM table"); // query will be different each time
        while (rs.next()) {
            // handle result set - differently each time
        }
    } catch (Exception e) {
        // handle
    } finally {
        con.close();
    }
}
Now imagine a class with 20 of these functions.
As you can see, there is a ton of boilerplate (opening a connection, the try-finally block), and the only things that change are the query and the way you handle the result set. This type of code occurs in many languages (assuming you're not using an ORM).
How do you manage your DB utility classes so as to reduce code duplication? What does a typical DB utility class look like in your language/framework?
The way I did it in one of my projects was to follow what Spring does with its JdbcTemplate and come up with a Query framework. Basically, create a common class which can take a select statement or PL/SQL call and bind parameters. If the query returns a result set, also pass a RowMapper. This RowMapper object will be called by the framework to convert each row into an object of any kind.
Example -
Query execute = new Query("{any select or pl/sql}",
    // Inputs and Outputs are for bind variables.
    new SQL.Inputs(Integer.class, ...),
    // Outputs is only meaningful for PL/SQL since the
    // ResultSetMetaData should be used to obtain queried columns.
    new SQL.Outputs(String.class));
If you want the rowmapper -
Query execute = new Query("{any select or pl/sql}",
    // Inputs and Outputs are for bind variables.
    new SQL.Inputs(Integer.class, ...),
    // Outputs is only meaningful for PL/SQL since the
    // ResultSetMetaData should be used to obtain queried columns.
    new SQL.Outputs(String.class), new RowMapper() {
        public Object mapRow(ResultSet rs, int rowNum) throws SQLException {
            Actor actor = new Actor();
            actor.setFirstName(rs.getString("first_name"));
            actor.setSurname(rs.getString("surname"));
            return actor;
        }
    });
Finally, a Row class is the output, which will contain the list of mapped objects if you passed a RowMapper -
for (Row r : execute.query(conn, id)) {
// Handle the rows
}
You can get fancy and use generics so that type safety is guaranteed.
Sounds like you could make use of a Template Method pattern here. That would allow you to define the common steps (and default implementations of them, where applicable) that all subclasses will take to perform the action. Then subclasses need only override the steps which differ: SQL query, DB-field-to-object-field mapping, etc.
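A minimal C# sketch of that Template Method idea (class and query names are illustrative, not from the question): the base class owns the connection/command boilerplate, and subclasses override only the parts that vary.
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public abstract class DbQuery<T>
{
    private readonly string _connectionString;
    protected DbQuery(string connectionString) { _connectionString = connectionString; }

    protected abstract string Sql { get; }            // varies per query
    protected abstract T MapRow(IDataRecord record);  // varies per query

    public List<T> Execute()                           // the fixed "template" steps
    {
        var results = new List<T>();
        using (var con = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(Sql, con))
        {
            con.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    results.Add(MapRow(reader));
                }
            }
        }
        return results;
    }
}

// Example subclass: only the varying steps are overridden.
public class WhateverQuery : DbQuery<string>
{
    public WhateverQuery(string connectionString) : base(connectionString) { }
    protected override string Sql => "SELECT whatever FROM table";
    protected override string MapRow(IDataRecord record) => record.GetString(0);
}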
When using .NET, the Data Access Application Block is in fairly widespread use to provide support for the following:
The [data access] application block was designed to achieve the following goals:
Encapsulate the logic used to perform the most common data access tasks.
Eliminate common coding errors, such as failing to close connections.
Relieve developers of the need to write duplicated code for common data access tasks.
Reduce the need for custom code.
Incorporate best practices for data access, as described in the .NET Data Access Architecture Guide.
Ensure that, as far as possible, the application block functions work with different types of databases.
Ensure that applications written for one type of database are, in terms of data access, the same as applications written for another type of database.
There are plenty of examples and tutorials on its usage too: a Google search will find msdn.microsoft.com, 4guysfromrolla.com, codersource.com and others.
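A rough usage sketch is below; exact class and method names can vary between Enterprise Library versions, so treat this as an approximation rather than a definitive sample:
using System;
using System.Data;
using Microsoft.Practices.EnterpriseLibrary.Data;

public class CustomerRepository
{
    public void PrintCustomerNames()
    {
        // Resolves the default database defined in configuration; no explicit
        // connection handling or try/finally boilerplate is needed here.
        Database db = DatabaseFactory.CreateDatabase();
        using (IDataReader reader = db.ExecuteReader(CommandType.Text, "SELECT Name FROM Customers"))
        {
            while (reader.Read())
            {
                Console.WriteLine(reader.GetString(0));
            }
        }
    }
}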
