Classic ASP pages using a COM object are extremely slow

First of all, I'd like to say upfront that some of the details in this question may be a little vague, as it relates to commercial code that I obviously cannot divulge here. But anyway, here goes.
I am currently dealing with a .NET-based web application that also has pages of classic ASP in certain areas; this application has a back-end MSSQL database. Data access for the classic ASP areas of the application is handled by a 32-bit DLL added into Component Services on the IIS server. The application has recently been changed so that all components (IIS authentication, the .NET application pool, and the data access DLL added to Component Services) now run under a single Windows account.
The problem I have found recently is that whilst .NET data access to the database runs at a perfectly normal speed, the classic ASP data access that goes via the COM+ component is extremely slow. Unfortunately, this doesn't seem to be throwing any errors anywhere, neither in the IIS logs nor in Event Viewer.
The COM+ component is instantiated in a standard fashion in an include file that is referenced on any ASP page needing data access, e.g.:
var objDataProvider = Server.CreateObject("DataAccess.DataProvider");
The ASP pages then use methods on the COM+ component to execute queries, such as:
objDataProvider.Query(...)
objDataProvider.Exec(...)
I have enabled failed request tracing in IIS for any ASP pages taking longer than 20s to process, and can see that the majority of the slow calls are ones dealing with recordsets, as in the examples below.
if(rsri.EOF)
and
theReturn = rsData(0);
The two examples above both take over 9s to run. Profiling the SQL that runs in the background shows it completes in a matter of milliseconds, which suggests the problem lies either with the COM+ component returning the data back to the ASP or with the ASP being slow to process it.
Windows Server 2008 R2 SP1 (64 bit)
Intel Xeon CPU
2GB RAM
Has anyone experienced a similar situation before, or can anyone point me in the direction of any further diagnostics that might help?

After days of messing around with this, I've finally found the answer, and it's the simplest of issues, solved by going back to basics.
The web server hosting the application was in a DMZ and required a hosts file entry for the SQL server. Once this was added, the application was flying along again.
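For reference, the entry is just a one-line IP-to-name mapping in C:\Windows\System32\drivers\etc\hosts on the web server; the address and name below are placeholders rather than our real values:

10.0.0.25    SQLSRV01

Presumably this works because, with the name resolving locally, the component no longer waits on slow or failing DNS/NetBIOS lookups for each connection.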

You'll have to dig deeper by yourself. I have used ANTS Performance Profiler before to improve the performance of .NET applications, and it seems it can profile COM+ as well.
http://www.red-gate.com/products/dotnet-development/ants-performance-profiler/walkthrough
This or some other similar profiling tool is the only option I can see for getting to the bottom of your problem.

I'm having the same issue. I have migrated an IIS server from Windows Server 2003 with SQL Server 2005 Express to Windows Server 2012 R2 with SQL Server 2014 Express, and the ASP scripts are two times slower.
The SQL queries inside execute at the same or better speed, and I tested the server locally to rule out internet bandwidth problems. I think it may be a DNS issue.
What change did you make in the hosts file? And what SQL connection string or connection type did you use?
Thanks

Related

VB.NET and SQL Server Express application deployment

I have developed an application using VB.NET with SQL Server Express as the database back end.
The application has 5 user profiles (each user profile provides different services).
Deployment requirements:
The application is to be deployed on a LAN with 10-20 machines.
Any user profile can be accessed from any machine.
Any changes to the database entries should be reflected on all machines.
I am confused about how I should achieve this deployment. According to my research:
1. The database should be deployed on one machine. This machine will act as the database server.
My problem(s):
I am familiar with accessing databases on the local machine, but how do I access a remote database?
Is the connection string the only thing that needs to be addressed or are there any other issues too?
Do I need to install SQL Server on all machines or only on the server machine?
Do I have to deal with concurrency issues (multiple users accessing/modifying same data simultaneously) or is it handled by the database engine?
2. The application can be deployed in two ways:
i. Storing the executable on a shared network drive on the server and providing a shortcut on the desktop of each machine.
ii. Storing the executable itself on each machine.
My problem(s):
How does approach (i) work? (One instance of an executable running on multiple machines?)
In approach (ii), will changes to database entries be reflected on all machines appropriately?
In approach (ii), if there are changes to the application, is there any method to update it on all machines? (Other than redeploying it on each machine.)
Which approach is preferable?
Do I need to install the .NET Framework on all machines?
Will I have to make any other system changes (firewall, security, permissions)?
If given a choice to install the operating system on each machine, which version of Windows is preferable for such an application environment?
This is my first time deploying a multi-user database application on a network. I'll be very grateful for any suggestions, advice, references, etc.
Question 1: You will need to create SQL Server 'roles' for each of your 'profiles'. A given user will be assigned one or more of those 'roles'. Each of your tables, views, stored procedures, and triggers will need to be assigned one or more roles. This is a messy business; it is why DBAs get paid lots of money to lounge around most of the time (I'm kidding, don't vote me down).
Question 2: If you 'remote in' to a server, you'll get the server screens, which are quite a bit duller than the workstation presentation. Read up on 'ClickOnce'; this gives you the ability to detect an updated application on a host and automatically deploy the update to the user's machine. This gets rid of the rather messy business of running around to 20 machines installing upgrades every time you fix something.
As you have hands-on access to all the machines, your task is comparatively simple.
Install SQL Express on your chosen db server. You should disable the 'hide advanced options' in the installer; this will allow you to enable TCP/IP and the SQL Browser service; also you may want mixed-mode authentication - depends on your app and whether the network is domain or peer-to-peer. The connection string will need to be modified as you are aware; also the default configuration of Windows firewall on the server will block access to the db engine - you will need to open exceptions in the firewall for the browser service and SQL server itself. Open these as exceptions for the exes, not as port numbers etc. Alternatively, if you have a firewall between your server and the outside world, you may decide to just turn off the firewall on the server, at least on a temporary basis while you get it working.
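As for the connection string change mentioned above: a string for a remote SQL Express default setup might look like the line below (MYSERVER and MyDatabase are placeholders; SQLEXPRESS is the default instance name the installer creates; swap Integrated Security for User ID/Password if you chose mixed-mode):

Data Source=MYSERVER\SQLEXPRESS;Initial Catalog=MyDatabase;Integrated Security=True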
No, you don't need to install any SQL Server components on the workstations.
Concurrency issues should be handled by your application. I don't want to be rude but if you are not aware of this maybe you are not yet ready for deploying your app to production. Exactly what needs to be done about concurrency depends on both the requirements of your application and the data access technology you are using. If your application will be used mostly to enter new records and then just read them later, you may get away without too much concurrency-handling code; it's the scenario where users are simultaneously editing existing records where the problems arise - but you need to have at least basic handling in place.
Re where to locate the client exe: either of your suggestions can work. Simplest is local installation on each machine using an .msi file; you can place a master copy of the .msi on the server. You can do stuff with login scripts, group policies, etc., or indeed ClickOnce. To keep it simple at this stage I would just install from an .msi onto each machine - it sounds like you have enough complexity to get your head around already.
One copy of the exe on the server can be handled in a more sophisticated manner by Terminal Server, Citrix, etc.
Either way, assuming your app works correctly, yes: all changes will be made against the same db and visible to all workstations.
Yes, you will need the .NET Framework on all machines - however, it may very well already be there. Different versions of Windows came with different versions of the Framework built in and/or updated via Windows Update; also, of course, it depends which version you built your exe against.
Right, I hope there is something helpful in that lot. Good luck.

Establishing SQL Connection Taking 10 - 15 Seconds

We are having some strange performance issues and I was hoping somebody may be able to point us in the right direction. Our scenario is an ASP.NET MVC C# website using EF4 POCO in IIS 7 (highly specced servers, dedicated just for this application).
Obviously it's slow on application_startup, which is to be expected, but once that has loaded you can navigate the site and everything is nice and snappy - 0.5ms page loads (we are using Mini-Profiler). Now if you stop using the site for, say, 5-10 minutes (we have the app pool recycle set to 2 hours, and we are logging, so we know that it hasn't been recycled), then the first page load is ridiculously slow, 10-15 seconds, but then you can navigate around again without issue (0.5ms).
It is not the SQL queries that are slow: all queries seem to work fine after the first page hit, even ones that haven't been run yet, so it isn't caching anywhere either.
We have done a huge amount of testing and I can't figure this out. The main thing I have tried so far is to pre-generate the EF views, but this has not helped.
Looking at SQL Server Profiler: after 5 minutes, give or take 30 seconds, with no activity in the Profiler and no site interaction, a couple of 'Audit Logout' entries appear for the application, and as soon as that happens it seems to take 10-15 seconds to refresh the application. Is there an idle timeout on SQL Server?
This should work if you use the below in your connection string:
server=MyServer;database=MyDatabase;Min Pool Size=1;Max Pool Size=100
It will force your connection pool to always maintain at least one connection. I must say I don't recommend this (a persistent connection), but it will solve your problem.
Are you using an LMHOSTS file? We had the same issue. The LMHOSTS file cache expires after 10 minutes by default. After the system has been sitting idle for 10 minutes, the host would use a broadcast message before reloading the LMHOSTS file, causing the delay.
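If that is the cause, one mitigation is to tag the server's entry with #PRE, which preloads it into the name cache at startup so it doesn't expire; the address and name here are placeholders:

192.168.1.10    SQLSRV01    #PRE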
"Looking at SQL Server Profiler: after 5 minutes, give or take 30 seconds, with no activity in the Profiler and no site interaction, a couple of 'Audit Logout' entries appear for the application, and as soon as that happens it seems to take 10-15 seconds to refresh the application. Is there an idle timeout on SQL Server?"
This tells me that the issue most likely lies with your SQL Server and/or the connection to it, rather than with your application. SQL Server uses connection pooling, and it will scavenge these pools every so often and clean them up. The delay you appear to be experiencing occurs when your connection pools have been cleaned up (the audit logouts) and you are having to establish a new connection. I would talk/work with your SQL database people.
For testing, do you have access to a local/dev copy of the database not running on the same SQL server as your production app? If not, try to get one set up and see if you suffer from the same issues.
I would run "SQL Server Profiler" against SQL Server and capture a new trace while reproducing the problem by accessing the site after being idle 5-10 mins. After that look at the trace. Specifically, look for entries in which ApplicationName starts with "EntityFramework...". This will tell you what is EF doing at that moment. There could be some issue with custom caching on top of EF, or some session state that is expiring (check sessionState timeout in web.config)
Since it is the first run (per app?) that is slow, you may be experiencing the compilation of the EDMX or the LINQ into SQL.
Possible solutions:
Use precompiled views and precompiled queries (may require a lot of refactoring).
http://msdn.microsoft.com/en-us/library/bb896240.aspx
http://blogs.msdn.com/b/dmcat/archive/2010/04/21/isolating-performance-with-precompiled-pre-generated-views-in-the-entity-framework-4.aspx
http://msdn.microsoft.com/en-us/library/bb896297.aspx
http://msdn.microsoft.com/en-us/magazine/ee336024.aspx
Dry-run all your queries on app start (before the first request is received).
You can run the queries with fake input (e.g. non-existent zero keys) against a default connection string (it can point to an empty database). Just make sure you don't throw exceptions (use SingleOrDefault() instead of Single(), and handle null results and 0-length list results on .ToList()). A sketch follows.
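For illustration, a minimal warm-up sketch in Global.asax, assuming a hypothetical context class MyDbContext with a Customers set keyed by an integer Id (these names are made up, not from the question):

using System.Linq;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // ... existing area/route registration ...

        // A cheap query with a fake key forces EF to build its metadata
        // and compile the LINQ translation before the first real request.
        using (var ctx = new MyDbContext())
        {
            // SingleOrDefault rather than Single, so finding no row doesn't throw.
            var warmup = ctx.Customers.SingleOrDefault(c => c.Id == 0);
        }
    }
}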
Create a simple web page that accesses SQL Server with a trivial query like "SELECT GETDATE()" or some other cheap query. Then use an external service like Pingdom or another monitor to hit that page every 30 seconds or so. That should keep the connections warm. A sketch of such a page follows.
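A minimal code-behind sketch for such a page, assuming a connection string named "Default" in web.config (both the page and the name are hypothetical):

using System;
using System.Configuration;
using System.Data.SqlClient;

public partial class KeepAlive : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        var cs = ConfigurationManager.ConnectionStrings["Default"].ConnectionString;
        using (var conn = new SqlConnection(cs))
        using (var cmd = new SqlCommand("SELECT GETDATE()", conn))
        {
            conn.Open();
            // One cheap round trip is enough to keep a pooled connection warm.
            Response.Write(cmd.ExecuteScalar());
        }
    }
}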
Try overriding your timeout in the web.config, as in this example:
Data Source=mydatabase;Initial Catalog=Match;Persist Security Info=True;User ID=User;Password=password;Connection Timeout=120
If it works, note that this is not a solution, just a workaround.
In our case the application was hosted in an Azure App Service plan and had a similar problem. It turned out to be a problem of not having configured the virtual network. See the question/answer here: EF Core 3.1.14 Recurring Cold Start

Which server platform to choose: SQL Azure or Hosted SQL Server for new project

We're getting ready to build a new platform for our current system. Currently we install SQL Server Express locally for all our clients, and all their data is stored there. While the process works pretty well, it's still a pain to add columns/tables etc. We also want to have our data available outside of the local install. So we're moving to a central web-based SQL database and creating a web-based application. Our new application will be a Silverlight 5, WCF RIA Services, MVVM, Entity Framework application.
We've decided that either a web-hosted SQL Server database or a SQL Azure database is the way to go. However, I have no idea why I would choose one over the other. The limitations of Azure don't seem to apply to us, but our application will be run on our current shared web host. Is it better to host the application on the same server as the database? Do we even know, with shared web hosting, that the server is in the same location as the app? There's also the marketing advantage of being 'in the cloud', which our clients love when we drop that word (they have no idea about anything technical; it's just a buzzword for them). I'm not too worried about the cost, as I think the two will ultimately be roughly equivalent.
I feel like I may be completely overthinking this and either will work, however I'd like to try and get the best solution for us and don't want to choose without getting some feedback.
In case it helps, our application is mostly dashboard/informational data. Mostly financial and trending data. It's almost entirely read only. Sometimes the data can get fairly large and we would be sending upwards of 50,000 rows of data to the application.
Thanks for any help/insight you can provide for me!
The main concerns I would have with using a SQL Azure DB from an application on your current shared web host would be:
The effect of network latency: depending on location, every time you do a DB round trip from your application to the SQL Azure DB you will incur a 50-100ms delay. If your application does lots of round trips, this will mount up. Often, if an application has been designed to work with a DB on the LAN (your use of local client DBs suggests this), then it tends to be "chatty", since network delays are very small on the LAN. You may find your application slows down significantly.
Security: you will have to open up the SQL Azure firewall to the IP address(es) that your application presents when querying. Depending on your host, it may be that this IP address is shared between several tenants, which would be a vulnerability.
If neither of these is a problem, then SQL Azure will provide a much lower management overhead (e.g. no need to patch etc.) and will give you very high reliability, especially in terms of the risk of data loss.

Is it possible to have an Access back-end database available for multiple users on the same network?

I am developing a Visual Basic .NET application to be used by the staff of a small training centre nearby. The front-end (UI, menus, etc.) will all be in VB .NET, and there will be a back-end database for storing all of the required data, such as student records and meeting information.
What I would like to know is if it's possible to use a Microsoft Access database for this purpose, and have it accessible by all the staff in the centre (on the same network) at the same time. For example, would I be able to put the database in a shared network folder, and have a copy of the VB application on each PC that would all be able to read/edit/add to the database?
Advice would be appreciated as to how I should proceed. (Note: I would really prefer a method of doing this with MS Access as opposed to suggestions to switch to SQL, as Access was the requested platform)
Thanks in advance.
Yes, it can be done, and from a programming standpoint it isn't (much) different than using SQL Server. I think the biggest considerations you have to think about are:
How many simultaneous users do you expect to have using the application?
How secure does the application need to be? Is Access security enough?
How big do I expect the database to become in the next 1 to 5 years?
I think those are your biggest considerations when using Access as a data store, and if your answers fall within the specs of Access's capabilities then go for it. You can always migrate to SQL Server at a later time if you run into the limits of Access.
You did not mention the version of Access that you are using but a quick Google/Bing search should return specs for every version available.
Yes, but it's probably not advisable. Despite the disclaimer in your post, you should try to convince the powers that be to look at SQL Server Express instead - it's free.
But if Access is the database, all you need to do is have the database reside in a shared directory with full read-write permissions for all the users. Hopefully when you say "staff of a small training centre", you mean it.
Install the VB.NET program on the client computers and set up the connection string with the path to the database.
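For example, an OLE DB connection string pointing at a share might look like this (the UNC path and file name are placeholders; for an older .mdb file, use the Microsoft.Jet.OLEDB.4.0 provider instead):

Provider=Microsoft.ACE.OLEDB.12.0;Data Source=\\SERVER\Share\TrainingCentre.accdb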
Someone else with more recent Microsoft Access experience can probably give better hints on how to reduce the corruption factor. My own experience was to stay away from queries in Access-- have the Access database only for tables and do all of your queries with SQL statements in your client code. My corrupted databases reduced dramatically when I did that, but that was 10-15 years ago.
Back up the database religiously.
Yes, just make sure you change the extension of your back-end Access db to your_database_name.be_accdb, and it will start logging once users start writing to it. But I recommend SQL Server.

Running SQL Server on the Web Server

Is it good, bad, or indifferent to run SQL Server on your webserver?
I'm using Server 2008 and SQL Server 2005, but I don't think that matters to this question.
For small sites, it doesn't make a bit of difference.
As the load grows, though, this scales really badly, and quicker than you think:
Database servers are built on the premise that they "own" the server. They trade memory for speed, and they will easily use all available RAM for internal caching.
Once resources start to become scarce, profiling becomes very difficult - it is clear that IIS and SQL Server are both suffering, but less clear where the bottleneck is. IIS needs CPU, SQL Server needs RAM or CPU, etc.
No matter how many layers you put in your code, it all runs on the same CPU; therefore a single-layered application will run better in this context - less overhead - but it will not scale.
Security is really bad; usually you isolate SQL behind a firewall!
If you can afford it, it's probably better to shell out a few bucks and get a second server, maybe running PostgreSQL. One IIS server and one PostgreSQL server cost about as much as one combined IIS + SQL Server box, because of licensing costs...
Larger shops would probably not consider this a best practice... However, if you aren't dealing with hundreds of requests per second, you're fine putting them both on one box.
In fact, for small apps, you will see better performance on the back-end because data does not have to go across the wire. It's all about scale.
Keep in mind that database servers eat memory. Here's one important lesson from the school of hard knocks: if you decide to run SQL Server 2005 on the same machine as your web server (and that is the setup you mentioned in your question), make sure you go into SQL Server Management Studio and do this:
Right click on the server instance and click properties
Select 'memory' from the list on the left
Change 'maximum server memory' to something your server can sustain.
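If you prefer a script, the same cap can be set in T-SQL; the 2048 MB below is just an illustrative figure to tune for your own hardware:

-- Expose the advanced options, then cap SQL Server's memory use.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2048;
RECONFIGURE;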
If you don't do that, SQL Server will eventually take up all of your server's RAM and hang onto it indefinitely. This will cause your server to more or less sputter and die. If you are not aware of this, it can be very frustrating to troubleshoot.
I've done this quite a few times. It's not something you would do if you had the infrastructure of a large corporation and it does not scale, but it's fine for a lot of things.
It really comes down to how much work your webserver and your sql server are doing.
Without more information I doubt you are going to get any helpful answers.
If your web server is publicly accessible, this is a VERY bad idea from a security perspective.
Although it makes a lot of things more difficult from a routing, firewall, ports, authentication, etc. perspective, separation is good. When you have your database server running on the web server, if your web server is compromised, then your SQL Server is, too.
When you have them on separate boxes, you've raised the bar a little.
There's still a lot more work to be done to secure your web server AND your database server, but why make it easier than it needs to be?
I'd say it is best to run them on the same server until it becomes a problem. That way you'll save yourself some money and time upfront. Once the site becomes a success and requires some architectural changes, it should have already paid for itself.
Remember to back up :)
It will depend on the expected load of the server. For small sites, it is no problem at all (if correctly configured). For large sites, you might want to consider distributing the load over different servers: web server, file server, database server, etc.
I've seen this issue over and over again. The right answer is to put SQL Server on one machine and IIS (the web server) on the other. Your money will go into the SQL Server machine, because the right drive system and RAM must be purchased to support an efficient server, but the web server can be a much scaled-down and less expensive machine with just a mirrored drive set.
