Entity Framework efficient querying - sql-server

Let's say I have a model, Article, that has a large number of columns, and the database contains more than 100,000 rows. If I do something like var articles = db.Articles.ToList(), it retrieves the entire Article entity for every row in the database and holds it in memory, right?
So if I am populating a table that only shows the date of the entry and its title, is there a way to retrieve just these columns from the database using Entity Framework, and would it be more efficient?
According to this,
There is a cost required to track returned objects in the object context. Detecting changes to objects and ensuring that multiple requests for the same logical entity return the same object instance requires that objects be attached to an ObjectContext instance. If you do not plan to make updates or deletes to objects and do not require identity management, consider using the NoTracking merge options when you execute queries.
it looks like I should use NoTracking since the data isn't being changed or deleted, only displayed. So my query now becomes var articles = db.Articles.AsNoTracking().ToList(). Are there other things I should do to make this more efficient?
Another question I have is that according to this answer, using .Contains(...) will cause a large performance drop when dealing with a large database. What is the recommended method to use to search through the entries in a large database?

It's called a projection and just translates into a SELECT column1, column2, ... in SQL:
var result = db.Articles
    .Select(a => new
    {
        Date = a.Date,
        Title = a.Title
    })
    .ToList();
Instead of a => new { ... } (which creates a list of anonymous objects) you can also project into a named helper class (or "view model"): a => new MyViewModel { ... } that contains only the selected properties. (You can't project into the entity type itself, i.e. a => new Article { ... }, because EF doesn't allow partially materializing a mapped entity.)
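For illustration, a minimal sketch of such a view model (the class name ArticleListItem is made up for this example):
// Hypothetical view model holding only the projected columns
public class ArticleListItem
{
    public DateTime Date { get; set; }
    public string Title { get; set; }
}

var result = db.Articles
    .Select(a => new ArticleListItem
    {
        Date = a.Date,
        Title = a.Title
    })
    .ToList();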
For such a projection you don't need AsNoTracking(), because projected data are not tracked anyway; only full entity objects are tracked.
Instead of using Contains, the more common way to filter is to use Where, like:
var date = DateTime.Now.AddYears(-1);
var result = db.Articles
    .Where(a => date <= a.Date)
    .Select(a => new
    {
        Date = a.Date,
        Title = a.Title
    })
    .ToList();
This would select only the articles that are not older than a year. The Where is translated into a SQL WHERE clause, and the filtering is performed in the database (which is as fast as the SQL query is, depending on table size, proper indexing, etc.). Only the result of this filter is loaded into memory.
Edit
Referring to your comment below:
Don't confuse IEnumerable<T>.Contains(T t) with string.Contains(string subString). The answer you have linked in your question talks about the first version of Contains. If you want to search for articles that have the string "keyword" in the text body you need the second Contains version:
string keyword = "Entity Framework";
var result = db.Articles
    .Where(a => a.Body.Contains(keyword))
    .Select(a => new
    {
        Date = a.Date,
        Title = a.Title
    })
    .ToList();
This will translate into something like WHERE Body LIKE N'%Entity Framework%' in SQL. The answer about the poor performance of Contains doesn't apply to this version of Contains at all.
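One thing to keep in mind: a pattern with a leading wildcard (LIKE N'%...%') cannot use an index on the column, so SQL Server scans every row. If a prefix match is enough for your search, StartsWith translates to a LIKE without the leading wildcard and can use an index. A sketch under that assumption:
var result = db.Articles
    .Where(a => a.Title.StartsWith(keyword)) // translates to WHERE Title LIKE N'Entity Framework%'
    .Select(a => new
    {
        Date = a.Date,
        Title = a.Title
    })
    .ToList();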

Related

Optimize an IQueryable query so EF generates a single SQL query instead of multiple. A child collection of an entity must contain a custom collection

The goal is to have a single query generated by EF that MSSQL will execute in one go. With the current implementation everything works correctly, but not optimally. To be more specific, looking at the SQL Server Profiler logs, it issues an additional exec sp_executesql query for each company to fetch related data (in the example below, the Products).
Say we have selected product ids:
var selectedProductIds = new List<int> { 1, 2, 3 };
We filter over a collection of Companies to get only those companies that have ALL selected products.
And a query that we extend dynamically as much as we need, thanks to the IQueryable interface.
Imagine x is of type Company, which contains a collection of Products.
if (selectedProductIds.Count > 0)
{
    query = query.Where(x => selectedProductIds.All(id =>
        x.Products.Select(p => p.ProductId).Contains(id)));
}
Is there any way to rewrite the predicate using LINQ? I know I could write a dynamic SQL query myself anytime, but I am trying to understand what I am missing in terms of EF/LINQ. Thanks!
The version of Entity Framework Core is 2.1.
UPD:
Company products are unique and never duplicated within a company entity. No need to make distinct.
Try the following query:
if (selectedProductIds.Count > 0)
{
    query = query.Where(x => x.Products
        .Where(p => selectedProductIds.Contains(p.ProductId))
        .Count() == selectedProductIds.Count);
}
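Counting the matching products and comparing the count against selectedProductIds.Count keeps the whole predicate translatable: Contains over the local list becomes a SQL IN clause and Count() a correlated subquery, so no per-company queries are issued. (This relies on products being unique within a company, as noted in the update above.) A hedged usage sketch, assuming a Companies DbSet on the context:
var selectedProductIds = new List<int> { 1, 2, 3 };
IQueryable<Company> query = db.Companies; // assumed DbSet name

if (selectedProductIds.Count > 0)
{
    query = query.Where(x => x.Products
        .Where(p => selectedProductIds.Contains(p.ProductId))
        .Count() == selectedProductIds.Count);
}

var companies = query.ToList(); // executes as a single round trip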

Common strategy in handling concurrent global 'inventory' updates

To give a simplified example:
I have a database with one table, names, which has 1 million records, each containing a common boy's or girl's name, with more added every day.
I have an application server that takes as input an HTTP request from parents using my website 'Name Chooser'. With each request, I need to pick a name from the db and return it, and then NOT give that name to another parent. The server is concurrent, so it can handle a high volume of requests, yet it has to respect "unique name per request" and still be highly available.
What are the major components and strategies for an architecture of this use case?
From what I understand, you have two operations: adding a name and choosing a name.
I have a couple of questions:
Question 1: Do parents choose names only, or do they also add names?
Question 2: If they add names, does that mean that when a name is added it should also be marked as already chosen?
Assuming that you don't want all name-selection requests to wait for one another (by locking or queueing them): one solution for resolving concurrency in the choose-a-name case is an Optimistic Offline Lock.
The most common implementation is to add a version field to your table and increment it when you mark a name as chosen. You will need DB support for this, but most databases offer a mechanism. Some MongoDB object mappers (such as Mongoose) add a version field to documents by default; for an RDBMS (like SQL Server) you have to add the field yourself.
You haven't specified what technology you are using, so I will give an example in pseudocode for a SQL DB. For MongoDB you can check how the driver performs these checks for you.
NameRecord {
    id,
    name,
    parentID,
    version,
    isChosen,

    function chooseForParent(parentID) {
        if (this.isChosen) {
            throw Error/Exception;
        }
        this.parentID = parentID;
        this.isChosen = true;
        this.version++;
    }
}

NameRecordRepository {
    function getByName(name) { ... }

    function save(record) {
        var oldVersion = record.version - 1;
        var query = "UPDATE records SET .....
                     WHERE id = {record.id} AND version = {oldVersion}";
        var rowsCount = db.execute(query);
        if (rowsCount == 0) {
            throw ConcurrencyViolation;
        }
    }
}
// somewhere else in an object or module or whatever...
function chooseName(parentID, name) {
    var record = NameRecordRepository.getByName(name);
    record.chooseForParent(parentID);
    NameRecordRepository.save(record);
}
Before this object is saved to the DB, a version comparison must be performed. SQL provides a way to execute an update conditionally and return the number of affected rows. In our case we check whether the version in the database is still the old one before updating; if it is not, someone else has updated the record in the meantime.
In this simple case you can even remove the version field and use the isChosen flag in your SQL query like this:
var query = "UPDATE records SET .....
WHERE id = {record.id} AND isChosend = false";
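As a concrete illustration, here is a hypothetical C# sketch of that conditional update using plain ADO.NET; the table name and the connectionString, recordId, and parentId variables are assumptions:
using System.Data.SqlClient;

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "UPDATE records SET parentID = @parentID, isChosen = 1 " +
    "WHERE id = @id AND isChosen = 0", connection))
{
    command.Parameters.AddWithValue("@parentID", parentId);
    command.Parameters.AddWithValue("@id", recordId);
    connection.Open();

    // zero affected rows means another request chose the name first
    if (command.ExecuteNonQuery() == 0)
        throw new InvalidOperationException("Name was already chosen.");
}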
When adding a new name to the database you will need a unique constraint, which resolves the concurrency issue on the insert side.
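A sketch of how the insert can react to that constraint from C# (the insertCommand object is assumed; error numbers 2627 and 2601 are what SQL Server raises for unique constraint and unique index violations):
try
{
    // the INSERT relies on a UNIQUE constraint on names(name)
    insertCommand.ExecuteNonQuery();
}
catch (SqlException ex) when (ex.Number == 2627 || ex.Number == 2601)
{
    // duplicate name: another request added it first, so treat it as already taken
}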

OrientDB - find "orphaned" binary records

I have some images stored in the default cluster in my OrientDB database. I stored them using the code given in the documentation for large content split across multiple ORecordBytes: http://orientdb.com/docs/2.1/Binary-Data.html
So I have two types of objects in my default cluster: binary data records, and ODocument records whose 'data' field points to those binary data records.
Some of the ODocument records' RIDs are used in other classes, but the remaining records are orphaned and I would like to be able to retrieve them.
My idea was to use
select from cluster:default where #rid not in (select myField from MyClass)
But the problem is that this also returns the plain binary data records, and I just want the records with the 'data' field.
In addition, I would prefer a cleaner query, because I don't think the "not in" clause is something that should be encouraged. Is there something like a JOIN that returns records that are not joined to anything?
Can you help me please?
Here is how I resolved my problem; however, I don't know if it is the right (most optimized) way to do it:
I used the following SQL request:
SELECT rid FROM (FIND REFERENCES (SELECT FROM CLUSTER:default)) WHERE referredBy = []
In Java, I execute it with an OCommandSQL/OCommandRequest pair and retrieve an OrientDynaElementIterable. I just iterate over it to retrieve an OrientVertex, contained in another OrientVertex, from which I retrieve the RID of the orphan.
Now here is some code, if it can help someone, assuming that the 'graph' variable is an OrientGraphNoTx or an OrientGraph :)
String cmd = "SELECT rid FROM (FIND REFERENCES (SELECT FROM CLUSTER:default)) WHERE referredBy = []";
List<String> orphanedRid = new ArrayList<String>();
OCommandRequest request = graph.command(new OCommandSQL(cmd));
OrientDynaElementIterable objects = request.execute();
Iterator<Object> iterator = objects.iterator();
while (iterator.hasNext()) {
    OrientVertex obj = (OrientVertex) iterator.next();
    OrientVertex orphan = obj.getProperty("rid");
    orphanedRid.add(orphan.getIdentity().toString());
}

What is the preferred way to set foreign relationships for an entity?

Product product = new Product()
{
    Category = category
};
_session.CommitChanges();
vs.
Product product = new Product()
{
    CategoryID = category.ID
};
_session.CommitChanges();
What is the difference? Which one to use? Both seem to be valid and get correctly saved in the database.
Always use the first version. It makes sure all the relations are in the state they should be, because the keys are assigned and managed by the RelationshipManager.
I just pass Category.ID.
When the query is executed against the database, the only information passed to Product is the Category ID; that is the info that will be stored in the DB (in each Product row).
Behind the scenes, the engine knows which fields are necessary to generate the SQL insert command; in this case only Category.ID is needed when saving the Product. That's why, no matter which option you choose, the save operation always succeeds.
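A related technique when you only have the key at hand, sketched here for EF's DbContext API (the context and categoryId names are assumptions): attach a "stub" entity so the Category never has to be loaded from the database:
var category = new Category { ID = categoryId }; // stub: only the key is set
context.Categories.Attach(category);             // tells EF the row already exists

var product = new Product { Category = category };
context.Products.Add(product);
context.SaveChanges();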

Hitting the 2100 parameter limit (SQL Server) when using Contains()

from f in CUSTOMERS
where depts.Contains(f.DEPT_ID)
select f.NAME
depts is a list (IEnumerable<int>) of department ids
This query works fine until you pass a large list (say, around 3000 dept ids); then I get this error:
The incoming tabular data stream (TDS) remote procedure call (RPC) protocol stream is incorrect. Too many parameters were provided in this RPC request. The maximum is 2100.
I changed my query to:
var dept_ids = string.Join(" ", depts.ToStringArray());
from f in CUSTOMERS
where dept_ids.IndexOf(Convert.ToString(f.DEPT_id)) != -1
select f.NAME
Using IndexOf() fixed the error but made the query slow. Is there any other way to solve this? Thanks so much.
My solution (Guids is a list of ids you would like to filter by):
List<MyTestEntity> result = new List<MyTestEntity>();
for (int i = 0; i < Math.Ceiling((double)Guids.Count / 2000); i++)
{
    var nextGuids = Guids.Skip(i * 2000).Take(2000);
    result.AddRange(db.Tests.Where(x => nextGuids.Contains(x.Id)));
}
this.DataContext = result;
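On .NET 6 or later the same batching can be written more compactly with Enumerable.Chunk (a sketch under the same assumptions as the code above):
List<MyTestEntity> result = new List<MyTestEntity>();
foreach (var batch in Guids.Chunk(2000))
{
    // each batch is an array of at most 2000 ids, keeping the IN clause under the limit
    result.AddRange(db.Tests.Where(x => batch.Contains(x.Id)));
}
this.DataContext = result;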
Why not write the query in SQL and attach your entity?
It's been a while since I worked in LINQ, but here goes:
IQuery q = Session.CreateQuery(@"
    select *
    from customerTable f
    where f.DEPT_id in (" + string.Join(",", depts.ToStringArray()) + ")");
q.AttachEntity(CUSTOMER);
Of course, you will need to protect against injection, but that shouldn't be too hard.
You will want to check out the LINQKit project, since somewhere in there is a technique for batching up such statements to solve this issue. I believe the idea is to use the PredicateBuilder to break the local collection into smaller chunks, but I haven't reviewed the solution in detail because I've been looking for a more natural way to handle this.
Unfortunately, it appears from Microsoft's response to my suggestion to fix this behavior that there are no plans to address it in .NET Framework 4.0 or even subsequent service packs.
https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=475984
UPDATE:
I've opened up some discussion regarding whether this was going to be fixed for LINQ to SQL or the ADO.NET Entity Framework on the MSDN forums. Please see these posts for more information regarding these topics and to see the temporary workaround that I've come up with using XML and a SQL UDF.
I had a similar problem and found two ways to fix it: the Intersect method, or a join on the IDs. To get values that are NOT in the list, I used the Except method or a left join.
Update
EntityFramework 6.2 runs the following query successfully:
var employeeIDs = Enumerable.Range(3, 5000);
var orders =
    from order in Orders
    where employeeIDs.Contains((int)order.EmployeeID)
    select order;
Your post was from a while ago, but perhaps someone will benefit from this. Entity Framework does a lot of query caching; every time you send in a different parameter count, that query gets added to the cache. Using a Contains call causes SQL to generate a clause like WHERE x IN (@p1, @p2, ..., @pn), which bloats the EF cache.
Recently I looked for a new way to handle this, and I found that you can create an entire table of data as a parameter. Here's how to do it:
First, you'll need to create a custom table type, so run this in SQL Server (in my case I called the custom type "TableId"):
CREATE TYPE [dbo].[TableId] AS TABLE(
    Id [int] PRIMARY KEY
)
Then, in C#, you can create a DataTable and load it into a structured parameter that matches the type. You can add as many data rows as you want:
DataTable dt = new DataTable();
dt.Columns.Add("id", typeof(int));
This is an arbitrary list of IDs to search on. You can make the list as large as you want:
dt.Rows.Add(24262);
dt.Rows.Add(24267);
dt.Rows.Add(24264);
Create an SqlParameter using the custom table type and your data table:
SqlParameter tableParameter = new SqlParameter("@id", SqlDbType.Structured);
tableParameter.TypeName = "dbo.TableId";
tableParameter.Value = dt;
Then you can call a bit of SQL from your context that joins your existing table to the values from your table parameter. This will give you all records that match your ID list:
var items = context.Dailies.FromSqlRaw<Dailies>("SELECT * FROM dbo.Dailies d INNER JOIN @id id ON d.Daily_ID = id.id", tableParameter).AsNoTracking().ToList();
You could always partition your list of depts into smaller sets before you pass them as parameters to the IN statement generated by Linq. See here:
Divide a large IEnumerable into smaller IEnumerable of a fix amount of item
