Here's the story. I've got a generic save web service call that uses Entity Framework to save any entity object. The reasons it's done this way are: one, there's a very tight schedule for this, and two, we're talking to an old version of Windows Mobile, so there's little support for better, more modern ways of doing this. The meat of it is an array of EntityObjects that's passed into the web service, which then:
Using TESTEntity As New TESTEntities(EntityConnectionString)
    For Each change As Object In Changes
        Dim item As EntityObject = DirectCast(change, EntityObject)

        ' No EntityKey means the object has never been persisted, so it's an
        ' insert; otherwise treat it as an update.
        Dim NewState As EntityState
        If (item.EntityKey Is Nothing) Then
            NewState = EntityState.Added
        Else
            NewState = EntityState.Modified
        End If

        ' Attach by entity set name (this assumes each set is named after its
        ' CLR type), then force the state so SaveChanges emits the right SQL.
        TESTEntity.AttachTo(change.GetType().Name, item)
        TESTEntity.ObjectStateManager.ChangeObjectState(item, NewState)
    Next

    TESTEntity.SaveChanges()
End Using
This function worked fine for a while. It still works fine for us here in development, but it fails strangely at a client site: it gets to SaveChanges and then just hangs until a SQLTimeoutException is thrown.
First, I've tried turning up the CommandTimeout value without any luck.
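For reference, this is the shape of what I tried (a C# sketch of the same context; 300 seconds is just an example value):

using (var context = new TESTEntities(EntityConnectionString))
{
    // ObjectContext.CommandTimeout (in seconds) applies to every command the
    // context issues; null falls back to the provider default of 30 seconds.
    context.CommandTimeout = 300;
    // ... attach the changes and call SaveChanges() as in the code above ...
}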
Second, this web service is called numerous times all over the place, but it appears to fail only in one specific place, and only on the client site, not here. We don't have easy access to the client site or database, but we've confirmed that the database schemas match, and we're fairly certain their mobile software is the most recent version (we're confirming that tonight).
Third, we've run SQL Profiler on the client's SQL Server and can see that the only request being made is the Audit Login, without any other queries being run. This SaveChanges is only doing a small handful of inserts and updates, so it shouldn't be timing out in the first place.
If there were a bad entity object, or something about the schema didn't match, I would expect a different exception, or at least for SQL Profiler to show something other than just the Audit Login... I'm at a loss as to what is happening. Any thoughts?
I have an EF 6.2 project in my MVC solution.
This uses a SQL Server database and has about 40 tables with many foreign keys.
The first query is very slow, 20 secs.
I immediately hit the same page again, changing the user param, and the query takes less than 1 second.
So this looks like a warm-up issue in EF6. That's fine, and apparently there are loads of things I can do to sort it.
The model cache (part of EF 6.2) looks like it could be beneficial, but everywhere I read about it talks about model first - nothing about DB first. Would this still work with DB first?
There are also the Entity Framework 6 Power Tools, which allow me to generate views. I tried this and it doesn't seem to make any difference. Is this still a valid route?
Any other ideas?
EF DbContexts incur a one-off cost to resolve their entity mappings. For web applications you can mitigate this by having your application startup kick off a simple query against the DbContext, which triggers this warm-up, rather than leaving it to your first user-triggered query. Simply new-ing up a context doesn't trigger the initialization; running a query does. So for ASP.NET MVC, in Application_Start, after initializing everything:
using (var context = new MyContext())
{
    var warmup = context.MyTable.Count(); // against a small table.
}
You can test this behaviour with unit tests by having a suite of timed tests that read data from the DbContext and putting a break-point in your DbContext's OnModelCreating override: it will be hit just once, by the first query of the first test. You can add a OneTimeSetUp to the test fixture that runs the quick count above ahead of the tests, so this cost is incurred before you measure the performance of the test runs.
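For example, a minimal NUnit sketch of that setup (assuming NUnit 3; the context and table names are the hypothetical ones from the snippet above):

using System.Diagnostics;
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class QueryTimingTests
{
    [OneTimeSetUp]
    public void WarmUpContext()
    {
        // Pay the one-off model-initialization cost here so the timed tests
        // below measure only query execution.
        using (var context = new MyContext())
        {
            var warmup = context.MyTable.Count();
        }
    }

    [Test]
    public void ReadQuery_IsFast_AfterWarmUp()
    {
        var stopwatch = Stopwatch.StartNew();
        using (var context = new MyContext())
        {
            var rows = context.MyTable.Take(10).ToList();
        }
        stopwatch.Stop();
        Assert.Less(stopwatch.ElapsedMilliseconds, 1000);
    }
}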
So, the answer was to update EF to 6.2 and then use its newest feature:
public class MyDbConfiguration : DbConfiguration
{
    public MyDbConfiguration() : base()
    {
        // Cache the compiled model as an .edmx file next to the assembly so
        // subsequent startups load it from disk instead of rebuilding it.
        var path = Path.GetDirectoryName(this.GetType().Assembly.Location);
        SetModelStore(new DefaultDbModelStore(path));
    }
}
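To make EF pick this up, one option (assuming nothing else in your project already registers a DbConfiguration) is the DbConfigurationType attribute on your context:

using System.Data.Entity;

// Hypothetical context; the attribute points EF at the configuration above.
[DbConfigurationType(typeof(MyDbConfiguration))]
public class MyContext : DbContext
{
    public DbSet<MyEntity> MyEntities { get; set; } // hypothetical entity set
}

On the first startup the compiled model is written out as an .edmx beside the assembly; later startups load that file instead of rebuilding the model, and DefaultDbModelStore discards the cached file if it is older than the context assembly.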
For the full story, check out this link: https://entityframework.net/why-first-query-slow
You're going to take a small performance hit at startup, but then it all moves a lot faster.
For anyone using an Azure web app: you can use a deployment slot (https://stackify.com/azure-deployment-slots/), which allows you to publish into a non-production slot and warm it up before swapping it in as the production slot.
Following on from this question:
GWT detect GAE version changes and reload
I would like to further clarify some things.
I have an enterprise app (GWT 2.4 & GAEJ 1.6.4, using GWT-RPC) that my users typically run all day in their browsers; indeed, some don't bother refreshing the browser from day to day. I make new releases on a pretty regular basis, so I am trying to streamline the process for minimal impact on my users. Not all releases concern all users, so I'd like to minimize the number of restarts.
I was hoping it might be possible to categorize my releases as follows:
1) releases that will cause an IncompatibleRemoteServiceException to be thrown, and
2) those that don't, i.e. releases that only affect the server, or the client, but not the RPC interface.
Then I could make lots of changes to the client and server without affecting the interface between the two. As long as I don't modify the RPC interface, presumably I can change server code and/or client code and the exception won't be thrown - right? Or will any redeployment on GAE cause an old client to get an IncompatibleRemoteServiceException?
If I were able to do that, I could batch up interface-busting changes into fairly infrequent releases and notify my users that a restart will be required.
Many thanks for any help.
I needed an answer pretty quickly, so I thought I'd do some good old-fashioned testing to see what's possible. Hopefully this will be useful for others with production systems using GWT-RPC.
The goal is to be able to release updates and fixes without requiring all connected browsers to refresh. It turns out there is quite a lot you can do.
So, after my testing, here's what you can and can't do:
No problem:
add a new call to a RemoteService
update some code on the server (e.g. a simple bug fix) and redeploy
update some client (GWT) code and redeploy (of course, anyone wanting the new client functionality will have to refresh their browser, but others are unaffected)
Limited problems:
add a parameter to an existing RemoteService method - this one is interesting: that particular call will throw an IncompatibleRemoteServiceException (of course), but all other calls to the same RemoteService or to other RemoteServices (Impls) are unaffected.
add a new type (as a parameter) to any method within a RemoteService - this is the most interesting one, and is what led me to do this testing. It renders the whole RemoteService out of date for existing clients, which get an IncompatibleRemoteServiceException; however, you can still use the other RemoteServices. (I need to do some more testing here to fully understand it - or perhaps someone else knows more?)
So if you know what you're doing, you can do quite a lot without having to bother your users with refreshes or release announcements.
I have an ASP.NET reporting interface that displays several values that are returned from a SQL Server backend. Once logged in, the browser page is never reloaded, but several screen areas are updated on a timer through AJAX calls.
My problem is that the screen areas are intermittently displaying values that are coming from previous AJAX calls. I have thoroughly and intensively investigated the problem for a number of days and I haven't been able to specifically pinpoint what is causing it, or how to completely overcome it. Currently the incorrect values are very infrequent (3 in 50,000, say), but I should be getting none whatsoever! These are some details about the setup:
the screen refresh timer runs every 30 seconds to update all screen areas
there is a 1.5 second lag between the screen area updates
the values used to decide which SQL stored proc to run for each screen area are passed into the ASP.NET interface correctly - I know with 100% certainty which stored proc is being run
the stored proc returns its values to a SqlDataReader
it is the reader that is sometimes yielding values that seem to be "buffered" from previous AJAX interactions, i.e. I am running the correct stored proc with the correct variables, but the values returned are not what I get if I run that precise same command in a SQL query interface - they are results from a previous call
the SQL connection, command and reader are all created and instantiated afresh for each interaction and disposed of correctly after use through the IDisposable interface (a sketch of this pattern follows the list)
I have swapped the reader for a DataSet, with no difference in results
my AJAX calls are synchronous (async=false), so each should complete before the next one runs; I also have the 1.5 sec delay between screen area updates and 30 secs between cycles, so they shouldn't run into each other in any event
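To illustrate the disposal pattern mentioned in the list above, here's a stripped-down sketch of one screen-area fetch (the proc and parameter names are made up; the real code follows the same shape):

using System.Data;
using System.Data.SqlClient;

public static class ScreenAreaData
{
    public static decimal GetScreenAreaValue(string connectionString, int areaId)
    {
        // Connection, command and reader are all scoped to this single call
        // and disposed deterministically via using blocks.
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.GetAreaValue", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@AreaId", areaId);

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                return reader.Read() ? reader.GetDecimal(0) : 0m;
            }
        }
    }
}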
What is frustrating is that the reader runs its SQL statement without throwing an exception, yet seemingly returns results from a previous interaction - and only very seldom at that, but one incorrect result is one too many.
I am not using ASP.NET state management at all - switched off in web.config.
What am I missing?
Profile the SQL statements that are arriving at SQL Server with SQL Server Profiler; you can narrow down the potential locations of the bug that way. Next, start the Fiddler HTTP debugger and verify the HTTP requests. And let us know what you find!
it is the reader that is sometimes yielding values that seem to be "buffered" from previous AJAX interactions, i.e. I am running the correct stored proc with the correct variables, but the values returned are not what I get if I run that precise same command in a SQL query interface - they are results from a previous call
the SQL connection, command and reader are all created and instantiated afresh for each interaction and disposed of correctly after use through the IDisposable interface
I have swapped the reader for a DataSet, with no difference in results
It would help to show us your code here. If the data objects contain the wrong values, you have to demonstrate with certainty that the correct SPs are being run on the back end and, if applicable, that the correct parameters are being passed to the back end. You should be able to step through the call in the Visual Studio debugger.
I have an application that is now 4+ years old and is exhibiting some odd behavior on our latest deployment. The application uses NHibernate for all inserts, updates, selects, etc. We are currently using .NET 2.0 and NHibernate 1.2 (I know, we need to upgrade).
This deployment is on Windows Server 2008 x64 with IIS 7.5. What I have seen so far is that the application runs but is unable to insert or update records in the DB; reads seem fine so far, but writes are a problem. SOME writes actually work (inserts into some small tables), but most never even make it to the DB.
Using SQL Profiler, the inserts/updates never make it to the server, and with log4net turned up to DEBUG and show_sql set to true, the select statements appear, but the insert/update statements never make it into the log at all and never show up at the server.
What's even more odd is that the application seems to be oblivious to this: the commandandclose runs without exception (open session in view with an HttpModule), the domain objects come back with UUIDs generated, etc., but nothing ever gets persisted.
Certainly an upgrade is due, but I would hate to attempt it mid-deployment, without time to accurately test the app. Any ideas?
My guess is that the default ISession FlushMode has been changed from Auto to Never or Commit. Never means the session will only flush when Flush() is called explicitly by the application; Commit means the session will flush when a transaction is committed.
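A quick way to test that theory is to force the mode and see whether the writes reappear - a minimal sketch, assuming you can get at the session your open-session-in-view module creates:

using NHibernate;

public static class SessionDiagnostics
{
    // If writes come back after forcing Auto, the deployed configuration
    // changed the default FlushMode.
    public static ISession OpenAutoFlushingSession(ISessionFactory sessionFactory)
    {
        ISession session = sessionFactory.OpenSession();
        session.FlushMode = FlushMode.Auto;
        return session;
    }
}

Alternatively, keep the current mode and call session.Flush() yourself before the session is closed.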
Back out your current deployment and return to what you had before. Then look for the mistake someone made. If it used to insert and now does not, then something is wrong with your current code. If it isn't creating the insert/update statements, I'd look first at where they are supposed to be created. Did the current deployment actually insert or update records in dev? Did anybody test that, or were you relying on the fact that it didn't pop up an error? If it did work in dev and doesn't work in prod, I'd look at the environmental differences between dev and prod.
Both good answers - the problem was in the deployment. The web.config was set up for IIS6, and the deployment to IIS7 did not properly set up the open-session-in-view HttpModule that is used to commit the transaction (in IIS7's Integrated pipeline, modules registered only under system.web/httpModules are ignored; they have to be registered under system.webServer/modules). Changing the pipeline mode from Integrated to Classic solved the problem.
We are using ASP.NET MVC with LINQ to SQL. We added some features and tested them all to perfection on our QA box. We are using Windows Server 2003 and SQL Server 2005. So when we pushed out changes to the Live web server we also used Red Gate SQL Compare to push new database changes to the LIVE database. We tested again between the few of us, no problems. Time for bed.
The morning comes, users start to hit the app, and BOOM. We have no idea why this would happen, as we have not been doing any new types of things in the code that we were not doing before. However, we did notice during the SQL Compare sync that the names of all the foreign keys were different between the two databases - not the IDs in the tables, but names like FK_AssetAsset_A0EB67 vs FK_AssetAsset_B67EF8 (for example; I don't remember the exact trailing characters). We are not sure why, but that is another variable in this problem.
Strangely, once this was all pushed out we could then replicate the errors on QA, but not before everything was pushed to LIVE.
QA and LIVE databases are on the same SQL Server, but the apps are on different instances of Windows Server 2003.
Errors generated:
Index was outside the bounds of the array.
Invalid attempt to call FieldCount when reader is closed.
Server failed to resume the transaction.
There is already an open DataReader associated with this Command which must be closed first.
A transport-level error has occurred when sending the request to the server.
A transport-level error has occurred when receiving results from the server.
Invalid attempt to call Read when reader is closed.
Invalid attempt to call MetaData when reader is closed.
Count must be positive and count must refer to a location within the string/array/collection. Parameter name: count
ExecuteReader requires an open and available Connection. The connection's current state is connecting.
Anyone have any idea what the heck could have happened?
EDIT: Since we were able to replicate the errors all of a sudden on QA, it might not be a user load issue... Needless to say we all feel really screwed here.
Concurrency always brings bugs out of the woodwork. I'd recommend you check for objects that could be shared among requests (such as static members and singletons) and refactor your code so that as little as possible is shared.
As far as specifics go, for the error "There is already an open DataReader associated with this Command which must be closed first," you may want to try adding MultipleActiveResultSets=True to your connection strings.
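For example (a hypothetical connection string; MultipleActiveResultSets is the only part that matters here):

// Server and database names are illustrative.
var connectionString =
    "Data Source=MYSERVER;Initial Catalog=MyDb;Integrated Security=True;" +
    "MultipleActiveResultSets=True";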
It sounds like you're crossing the streams a bit and trying to share DataContexts across requests. My suggestion would be to wire in a dependency injection framework that creates a new instance of the dependency for each request.
I use Castle's IoC and wire it into the controller factory so that when it sees a dependency on a repository, it creates a new instance of that repository for each request. If you go this route, let me know and I can shoot you a few more resources.
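As a rough sketch of that wiring (type names are hypothetical, and the fluent registration shown is Castle Windsor 3's API; adjust to your version):

using System;
using System.Web.Mvc;
using System.Web.Routing;
using Castle.MicroKernel.Registration;
using Castle.Windsor;

public class WindsorControllerFactory : DefaultControllerFactory
{
    private readonly IWindsorContainer container;

    public WindsorControllerFactory(IWindsorContainer container)
    {
        this.container = container;
    }

    protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
    {
        // Resolving through the container gives each controller its own
        // per-web-request repository instead of a shared one.
        return (IController)container.Resolve(controllerType);
    }
}

// In Application_Start (registration names are illustrative; PerWebRequest
// also needs Windsor's lifestyle module registered in web.config):
// var container = new WindsorContainer();
// container.Register(
//     Classes.FromThisAssembly().BasedOn<IController>().LifestyleTransient(),
//     Component.For<IAssetRepository>().ImplementedBy<AssetRepository>().LifestylePerWebRequest());
// ControllerBuilder.Current.SetControllerFactory(new WindsorControllerFactory(container));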