Database locked (or slow) with Linq To Entities and Stored Procedures - sql-server

I'm using Linq To Entities (L2E) ONLY to map all my stored procedures in my database for easy translation into objects. My data is not really sensitive (so I'm considering isolation level "READ UNCOMMITTED" everywhere). I have several tables with millions of rows. I have the website and a bunch of scripts utilizing the same data model created using Entity Framework. I have indexed all tables (max 3 indexes each) so that every filter I use is covered directly by an index. My scripts mainly consist of:
1) Get data from the DB (~5 seconds)
2) Make an API call (1-3 seconds)
3) Add the result to the database
I have READ_COMMITTED_SNAPSHOT and ALLOW_SNAPSHOT_ISOLATION set to ON.
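For reference, the two options were turned on with the usual ALTER DATABASE statements, roughly like this (the database name is a placeholder):
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;  -- rolls back other sessions so the option can be applied
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;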
Using this strategy most of my queries are fixed and very fast (usually). I still have some queries used by my scripts that can run up to 20 seconds, but they are not called that often.
The problem is that suddenly the whole database gets slow and all my queries return slowly (sometimes over 10 seconds). Using SQL Profiler I have tried to find the issue.
As mentioned, I'm considering NOLOCK-style reads using "READ UNCOMMITTED"... Right now I'm going through each possible database call and adding indexes and/or caching tables to make the call faster.
I have also considered removing L2E and accessing the database the "old way" to be sure that's not the issue. Is my data context locking my tables? It sure looks that way. I have experimented with having the context live across the API call to minimize the number of contexts created, but right now I create a new context for each call since I thought it was locking the database.
The problem is that I cannot guarantee that every single call is fast, for all eternity, otherwise the whole system gets slowed down.
When I restart SQL Server and rebuild the indexes it's really fast for a short period of time before everything gets slow again. Any pointers would be appreciated.

Have you considered checking whether there are any major waits on the server?
Review the documentation on sys.dm_os_wait_stats (Transact-SQL) and see if you can get any insight into why the server is slow.
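For example, something along these lines lists the heaviest waits since the last restart (the exclusion list of benign wait types is only a starting point; lots of LCK_M_* waits would point at blocking, PAGEIOLATCH_* at I/O):
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM   sys.dm_os_wait_stats
WHERE  wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'SQLTRACE_BUFFER_FLUSH',
                         N'WAITFOR', N'BROKER_TASK_STOP', N'CLR_AUTO_EVENT')
ORDER BY wait_time_ms DESC;  -- cumulative since the last restart (or since the stats were cleared)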

Related

How to compare Access SQL results to SQL Server results efficiently?

I am trying to convert my Access queries to SQL Server views. I can easily see how many records are returned in both cases. But it is getting difficult to make sure the records match, as I have to check them manually. Also, I don't check every single record, so there might be some conversion error that I did not notice.
Is there a way to check at least some segment of the result automatically?
In a way, you are asking the wrong question.
When you migrate the data to SQL Server, at that point you assume all the data made it to SQL Server. I suppose you could do a one-time check of the record counts.
At that point, you then re-link the tables (which likely pointed to an Access back end) so they now point to SQL Server.
At this point, all of the existing queries in Access should and will work fine. ZERO need exists to test and check this after a migration.
For existing forms that are bound to a table (a linked table), you DO NOT NEED to create views on SQL Server; this is simply not done nor required.
The ONLY time you will convert an access query to a view is if the access query runs slow.
For a simple query, say on a table of 1 million invoices, you can continue to use the Access query (running against the linked tables). If you do a query such as
SELECT * FROM tblInvoices WHERE invoiceNum = 12345
then Access will ONLY pull the one record down the network pipe. So there is ZERO need, and no performance gain, in converting this query to a view (do not do this!!! You are simply wasting valuable time and money).
However, for a complex query with multiple joins and multiple tables (which is NOT in general used for a form), you do want to convert such queries to a view, and then give the view the SAME name as the Access (client side) query had.
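As a rough illustration only (the table, column, and view names here are invented), such a converted query might end up as a view like this, saved under the same name the Access query had:
CREATE VIEW dbo.qryInvoiceTotals
AS
SELECT c.CustomerID,
       c.CompanyName,
       SUM(i.InvoiceAmount) AS TotalInvoiced   -- InvoiceAmount is a made-up column
FROM   dbo.tblCustomers AS c
INNER JOIN dbo.tblInvoices AS i
        ON i.CustomerID = c.CustomerID
GROUP BY c.CustomerID, c.CompanyName;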
So you are not, "out of the blue", going to wind up with a large number of views on SQL Server after a migration.
You migrate the data, link the tables. At that point 99% of your application will work just fine as before. After a migration you DO NOT NEED to create any views.
Creating views is an "after the fact" task and occurs AFTER you have the application running. So views are NOT created until such time as you have the application up and running (and deployed).
To get your Access application to work with SQL Server you DO NOT create ANY views. You do not even create ONE view.
You migrate the data, link the tables, and now your application should work.
You ONLY start creating views for access queries that run slow.
So for example, you might have a report that now runs slow. The reason is that while Access does a good job for most existing queries (that you don't have to change), for some you find that the Access client does a "less than ideal" job of running that query.
So you flip that query in Access into SQL view mode, cut + paste that SQL into the SQL Server management tools (creating a view), and then modify the syntax of the query. For some SQL it might be less effort to bring up the query in the Access designer, bring up the query designer in SQL Server, and simply re-create the SQL from scratch in place of the cut + paste idea. Both approaches work fine; it depends on how "crazy" the SQL you are working with is as to which approach works better (try both and see what suits you best). Some like to cut + paste the SQL, and others like to simply look at the graphical designer in both cases (on two monitors) and work that way.
At this point, after you have created the SQL query (the view), you simply run that view and look at the record count. You then run the query that you have on the Access client, and if it returns the same number of records, then in 99.9% of cases you can be rather confident that the SQL query (the view) produces the same results as the client query. You now delete (or rename) the Access query, link to the view (giving it the same name as the Access query), and you are done.
The 2, or 20, places that used that Access client query will now be using the view. You should not have to worry about or modify one thing in Access after having done this.
Now that the view is working, you then re-deploy your Access front end to all workstations (I assume you have some means to update the Access application part, just as you did even when not using SQL Server).
At this point, users are all happy and working.
If in the next day or so you find another report that runs slow (most will not), then that Access query becomes a possible candidate for you to convert to a view.
So out of an application of say 200 queries in Access, you will in general likely only have to convert 2-10 of them to views. The VAST majority of the Access saved queries DO NOT AND SHOULD NOT be converted to views. It is a waste of time to convert an Access query that runs perfectly well and performs well into a view.
So your question suggests that you have to convert a "lot" of Access queries to views. This is NEVER the case, nor is it a requirement.
So the number of times that you need to convert an Access query to a view is going to be quite rare. Most forms are based on a table, or now a linked table to SQL Server. In these cases you do NOT want to convert to a view, and you don't need to.
So the answer here is simply: run the Access query, and then run the view. They are both working on the same data (you are using linked tables, and the Access queries will HIT those linked tables and work just fine). If the Access client using the linked tables to SQL Server returns the same number of records as your view on SQL Server, then as noted, you are 99.8% done, and you are now safe to rename the Access query and link to the view with the same name.
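If you do want an automated spot-check that goes beyond comparing record counts, one possible sketch (it assumes you push the Access query's output into a staging table on SQL Server, here called dbo.stgAccessResult, with the same columns as the view from the earlier example) is to diff the two sets with EXCEPT:
-- rows the Access result has that the view does not
SELECT * FROM dbo.stgAccessResult
EXCEPT
SELECT * FROM dbo.qryInvoiceTotals;
-- rows the view has that the Access result does not
SELECT * FROM dbo.qryInvoiceTotals
EXCEPT
SELECT * FROM dbo.stgAccessResult;
If both statements return zero rows, the two results contain the same (distinct) rows.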
However, the number of times you do this after a migration is VERY low and VERY rare.
You don't convert any existing Access saved query to a view unless running that query in Access is taking excessive time.
As noted, for "not too complex" SQL queries (ones that don't involve joins between too many tables), you will not see nor find a benefit by converting the Access query to a view. You KEEP and use the existing Access queries as they are.
You only want to spend time on the “few” queries that run slow, the rest do NOT need to be modified.
For an existing form that is bound to a table (or now a linked table), such forms are NOT to be changed to views. They are editing tables directly (or linked tables), and REALLY REALLY need to remain as such. Attempting to convert such forms from a linked table to a view is a HUGE mess, something that will not help performance but will only introduce potentially hundreds of issues and bugs into your existing application, and it is NOT required.
Because you will create so VERY FEW views, an automated approach is not required. You simply make the one query, run it, and then run the existing Access query (client side). If they both return the same number of records, then it is a cold day in hell before you need further testing. You are going to be creating one view at a time, and creating these views over a "longer" period of time. You NEVER migrate Access and then immediately migrate a large number of queries to SQL Server. That is NOT how a migration works.
You migrate, link the tables. At that point your application should work fine. You get it working (and have not yet created any views). You get the application deployed. ONLY after you have everything working just fine do you THEN start to introduce views into the mix and picture. As noted, because you are only ever going to work with introducing ONE view at a time into the application, and this will occur "over time" while users are using the existing and working application, little need exists for some automated approach and test. As I stated, with 200+ Access client queries, you should not need to convert more than about 5, maybe 10, to views. The rest are to remain as they are now, untouched and unmodified by you.

What are these sys.sp_* Stored Procedures doing?

I'm tracking down an odd and massive performance problem in my SQL server installation. On my system, a particular stored procedure takes 2 minutes to execute; on a colleague's system it takes less than 1 second. We have similar databases/data and configurations, but there's obviously something very different.
I ran the SP in question through the Profiler on both systems and noticed something odd. On my system, I see 9 entries with the following properties:
The Duration is way high relative to other rows. I have values as high as 37,698 and as low as 1,734. On the "fast" system the maximum duration (for the entire SP call) is 259.
They are executed for two databases related to the one that contains the SP I'm running. (This SP makes calls via Linked Servers to these two databases).
They are executions of one of the following system SPs:
sp_tables_info_90_rowset
sp_check_constbytable_rowset
sp_columns_90_rowset
sp_table_statistics2_rowset
sp_indexes_90_rowset
I can't find any Googleable documentation on what these are, why they would be so slow, or why they would run on one system but not the other. Does anyone know what they're all about?
Try manually updating statistics on that table.
UPDATE STATISTICS [TableName]
Then double-check that the database option AUTO_UPDATE_STATISTICS is set to ON. Even if it is, though, I've seen cases where adding large amounts of data to a table doesn't always cause the statistics to update in a timely way, and queries can be slow.
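You can check the option and, if needed, turn it on like this (the database name is a placeholder):
SELECT name, is_auto_update_stats_on FROM sys.databases WHERE name = N'MyDatabase';
ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS ON;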
I don't know the answer to your question. But to try to fix the problem you're having (which, I assume, is what you're actually interested in), the first thing I'd do is run a re-index on the tables you're querying. This frequently will fix any kind of slowness when the conditions are as you described (same database structure, different data/database, same query).
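For example, something like this rebuilds all indexes on one table (the table name is a placeholder); on older versions DBCC DBREINDEX does the same job:
ALTER INDEX ALL ON dbo.MyTable REBUILD;
-- pre-2005 equivalent:
-- DBCC DBREINDEX ('dbo.MyTable');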
These are the work tables created when you have linked server calls. They are created in tempdb automatically by the database engine for temporary operations like spooling, etc.
Those sp's mean your query is hitting linked servers by using synonyms. This should be avoided whenever possible.
I'm not familiar with those specific procedures, but you can try running:
SELECT object_definition(object_id('Procedure Name'))
To get a better idea of what's going on under the hood.
Last index rebuild? Last statistics update?
Otherwise, these stored procs are used by the SQL Server client too... no? And probably won't cause these errors

Saving / Caching Stored Procedure results for better performance? (SQL Server 2005)

I have an SP that has been worked on by 2 people now and still takes 2 minutes or more to run. Is there a way to have it pre-run and have the results stored in a cache or somewhere else, so that when my client needs to look at this data in a web browser he doesn't want to hang himself or me?
I am nowhere near a DBA, so I am kind of at the mercy of whoever I hire to figure this out for me, so having a little knowledge up front would really help me out.
If it truly takes that long to run, you could schedule the process to run using SQL Agent and have the output go to a table, then change the web application to read the table rather than execute the stored procedure. You'd have to decide how often to run the refresh, and deal with the requests that occur while it is being refreshed, but that can be handled as well by having two output tables, one live and one for the latest refresh.
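A minimal sketch of the refresh step such an Agent job could run (the procedure and table names are made up, and the table's columns are assumed to match the procedure's result set):
TRUNCATE TABLE dbo.ReportCache;        -- throw away the previous refresh
INSERT INTO dbo.ReportCache            -- capture the stored procedure's output
EXEC dbo.usp_SlowReport;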
But I would take another look at the procedure, look at the execution plan and see where it is slow, make sure it is not doing full table scans.
Preferred solutions in this order:
Analyze the query and optimize accordingly
Cache it in the application (you can use HttpRuntime.Cache, even if it is not an ASP.NET application)
Cache SPROC results in a table in the DB and add triggers to invalidate the cache (delete from the table). A call to the SPROC would first look to see if there is any data in the cache table; if not, it runs the SPROC and stores the result in the cache table, and if so, it returns the data from that table. The triggers on the "source" tables for the SPROC would just delete everything from CacheTable to "clear the cache". (Depending on what your SPROC is doing and its dependencies, you may even be able to partially update the cache table from the trigger, but all of this quickly gets difficult to maintain... sometimes you gotta do what you gotta do.) This approach allows the cache table to update itself as needed: you will always have the latest data, and the SPROC will only run when needed. (A rough sketch is shown below.)
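A rough sketch of option 3 (every object name here is a placeholder, and the wrapper is deliberately simplified):
-- cache table that holds the expensive result set
CREATE TABLE dbo.CacheTable (Col1 INT, Col2 NVARCHAR(100));
GO
-- any change to a source table clears the cache
CREATE TRIGGER trg_SourceTable_ClearCache ON dbo.SourceTable
AFTER INSERT, UPDATE, DELETE
AS
    DELETE FROM dbo.CacheTable;
GO
-- wrapper proc: refill the cache only when it is empty
CREATE PROCEDURE dbo.usp_GetReportCached
AS
BEGIN
    IF NOT EXISTS (SELECT 1 FROM dbo.CacheTable)
        INSERT INTO dbo.CacheTable (Col1, Col2)
        EXEC dbo.usp_ExpensiveReport;   -- the original 2-minute SPROC
    SELECT Col1, Col2 FROM dbo.CacheTable;
END
GO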
Try "Analyze query in database engine tuning advisor" from the Query menu.
I usually script the procedure to a new window, take out the query definition part and try different combinations of temp tables, regular tables and table variables.
You could cache the result set in the application as opposed to the database, either in memory by keeping an instance of the datatable around, or by serializing it to disk. How many rows does it return?
Is it too long to post the code here?
OK first things first, indexes:
What indexes do you have on the tables and is the execution plan using them?
Do you have indexes on all the foreign key fields?
Second, does the proc use any of the following performance killers:
a cursor
a subquery
a user-defined function
select *
a search criteria that starts with a wildcard
Third:
Can the WHERE clause be rewritten to be sargable? There is more than one way to write almost everything, and some ways perform better than others.
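For example (hypothetical table and column names), wrapping the column in a function prevents an index seek, while the equivalent range predicate allows one:
-- not sargable: the function on OrderDate forces a scan
SELECT OrderID FROM dbo.Orders WHERE YEAR(OrderDate) = 2008;
-- sargable rewrite: an index on OrderDate can be seeked
SELECT OrderID FROM dbo.Orders WHERE OrderDate >= '20080101' AND OrderDate < '20090101';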
I suggest you buy your developers some books on performance tuning.
Likely your proc can be fixed, but without seeing the code, it is hard to guess what the problems might be.

SpeedUp Database Updates

There is a SQL Server 2000 database we have to update during the weekend.
Its size is almost 10 GB.
The updates range from schema changes and primary key updates to some millions of records being updated, corrected or inserted.
The weekend is hardly enough for the job.
We set up a dedicated server for the job,
turned the database to SINGLE_USER,
and made any optimizations we could think of: drop/recreate indexes, relations, etc.
Can you propose anything to speedup the process?
SQL Server 2000 is not negotiable (not my decision). Updates are run through a custom-made program and not BULK INSERT.
EDIT:
Schema updates are done via Query Analyzer T-SQL scripts (one script per version update).
Data updates are done by a C# .NET 3.5 app.
Data comes from a bunch of text files (with many problems) and is written to the local DB.
The computer is not connected to any Network.
Although dropping excess indexes may help, you need to make sure that you keep those indexes that will enable your upgrade script to easily find those rows that it needs to update.
Otherwise, make sure you have plenty of memory in the server (although SQL Server 2000 Standard is limited to 2 GB), and if need be pre-grow your MDF and LDF files to cope with any growth.
If possible, your custom program should be processing updates as sets instead of row by row.
EDIT:
Ideally, try and identify which operation is causing the poor performance. If it's the schema changes, it could be because you're making a column larger and causing a lot of page splits to occur. However, page splits can also happen when inserting and updating for the same reason - the row won't fit on the page anymore.
If your C# application is the bottleneck, could you run the changes first into a staging table (before your maintenance window), and then perform a single update onto the actual tables? A single update of 1 million rows will be more efficient than an application making 1 million update calls. Admittedly, if you need to do this this weekend, you might not have a lot of time to set this up.
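A rough sketch of that staging-table idea (the table and column names are invented): the app loads the corrected values into a staging table ahead of the maintenance window, and the window itself then only runs one set-based statement per target table:
-- apply all staged corrections in a single pass
UPDATE t
SET    t.Amount      = s.Amount,
       t.Description = s.Description
FROM   dbo.Transactions AS t
INNER JOIN dbo.Staging_Transactions AS s
        ON s.TransactionID = t.TransactionID;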
What exactly does this "custom made program" look like? i.e. how is it talking to the data? Minimising the amount of network IO (from a db server to an app) would be a good start... typically this might mean doing a lot of work in TSQL, but even just running the app on the db server might help a bit...
If the app is re-writing large chunks of data, it might still be able to use a bulk insert to submit the new table data, either via the command line (bcp etc.) or through code (SqlBulkCopy in .NET). This will typically be quicker than individual inserts.
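If the text files can be cleaned up enough beforehand, plain T-SQL BULK INSERT into a staging table is a third option along the same lines (the path, table name, and terminators below are placeholders):
BULK INSERT dbo.Staging_Transactions
FROM 'C:\import\transactions.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', TABLOCK);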
But it really depends on this "custom made program".

SQL Server Maintenance Suggestions?

I run an online photography community, and it seems that the site slows to a crawl on database access, sometimes hitting timeouts.
I consider myself to be fairly competent at writing SQL queries and designing tables, but am by no means a DBA... hence the problem.
Some background:
1) My site and SQL Server are running on a remote host. I update the ASP.NET code from Visual Studio and the SQL via SQL Server Mgmt. Studio Express. I do not have physical access to the server.
2) All my stored procs (I think I got them all) are wrapped in transactions.
3) The main table is only 9,400 records at this time. I add 12 new records to this table nightly.
4) There is a view on this main table that brings together data from several other tables into a single view.
5) Secondary tables are smaller records, but more of them: 70,000 in one, 115,000 in another. These are comments and ratings records for the items in #3.
6) Indexes are on the most needed fields, and I set them to auto recompute statistics on the big tables.
When the site grinds to a halt, if I run code to clear the transaction log, update statistics, rebuild the main view, and rebuild the stored procedure that gets the comments, the speed returns. I have to do this manually, however.
Sadly, my users get frustrated at these issues and their participation dwindles.
So my question is... in a remote environment, what is the best way to set up and schedule a maintenance plan to keep my SQL DB running at its peak?
My gut says you are doing something wrong. It sounds a bit like those stories you hear where some system cannot stay up unless you reboot the server nightly :-)
Something is wrong with your queries; the number of rows you have is almost always irrelevant to performance, and your database is very small anyway. I'm not too familiar with SQL Server, but I imagine it has some pretty sweet query analysis tools. I also imagine it has a way of logging slow queries.
It really sounds like you have a missing index. Sure, you might think you've added the right indexes, but until you verify they are being used, it doesn't matter. Maybe you think you have the right ones, but your queries suggest otherwise.
First, figure out how to log your queries. Odds are very good you've got a killer in there doing some sequential scan that an index would fix.
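For what it's worth, in SQL Server one way to get that kind of log without extra tooling is the plan-cache statistics DMV (SQL Server 2005 and later); something along these lines lists the most expensive cached statements:
SELECT TOP (20)
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microseconds,
       qs.execution_count,
       qs.total_logical_reads,
       SUBSTRING(st.text, qs.statement_start_offset / 2 + 1, 200) AS statement_text  -- first 200 chars only
FROM   sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;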
Second, you might have a bunch of small queries that are killing it instead. For example, you might have some "User" object that hits the database every time you look up a username from a user_id. Look for spots where you are querying the database a hundred times and replace them with a cache, even if that "cache" is nothing more than a private variable that gets wiped at the end of a request.
Bottom line is, I really doubt it is something mis-configured in SQL Server. I mean, if you had to reboot your server every night because the system ground to a halt, would you blame the system or your code? Same deal here... learn the tools provided by SQL Server, I bet they are pretty slick :-)
That all said, once you accept you are doing something wrong, enjoy the process. Nothing, to me, is more fun than optimizing slow database queries. It is simply amazing that you can take a query with a 10-second runtime and turn it into one with a 50 ms runtime with a single, well-placed index.
You do not need to set up your maintenance tasks as a maintenance plan.
Simply create a stored procedure that carries out the maintenance tasks you wish to perform: index rebuilds, statistics updates, etc.
Then create a job that calls your stored procedure(s). The job can be configured to run on your desired schedule.
To create a job, use the procedure sp_add_job.
To create a schedule use the procedure sp_add_schedule.
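A minimal sketch of wiring that up (the job, schedule, database, and procedure names are placeholders, and it assumes SQL Agent is available on your host):
EXEC msdb.dbo.sp_add_job       @job_name = N'NightlyMaintenance';
EXEC msdb.dbo.sp_add_jobstep   @job_name = N'NightlyMaintenance',
                               @step_name = N'Run maintenance proc',
                               @subsystem = N'TSQL',
                               @database_name = N'MyDatabase',
                               @command = N'EXEC dbo.usp_NightlyMaintenance;';
EXEC msdb.dbo.sp_add_schedule  @schedule_name = N'EveryNightAt2am',
                               @freq_type = 4,              -- daily
                               @freq_interval = 1,          -- every 1 day
                               @active_start_time = 20000;  -- 02:00:00 (HHMMSS)
EXEC msdb.dbo.sp_attach_schedule @job_name = N'NightlyMaintenance',
                                 @schedule_name = N'EveryNightAt2am';
EXEC msdb.dbo.sp_add_jobserver   @job_name = N'NightlyMaintenance';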
I hope what I have detailed is clear and understandable but feel free to drop me a line if you need further assistance.
Cheers, John
