How can I find out why some classic ASP pages randomly take a very long time to execute? - sql-server

I'm working on a rather large classic ASP / SQL Server application.
A new version was rolled out a few months ago with a lot of new features, and I must have a very nasty bug somewhere: some very basic pages randomly take a very long time to execute.
A few clues:
- It isn't the database: when I run the query profiler, it doesn't detect any long-running queries.
- When I launch the IIS Diagnostic tools, Request Viewer shows that the request is in state "processing".
- This can happen on ANY page.
- I can't reproduce it easily; it's completely random.
To give an idea of "a very long time": this morning I had a page take more than 5 minutes to execute, when it normally should be returned to the client in less than 100 ms.
The application handles rather large uploads and downloads of files (up to 2 GB in size). This is also handled with a classic ASP script, using SoftArtisan FileUp. I don't think this is the cause, though; we've had these uploads for quite a while now.
I've had the problem on two separate servers (in two separate locations, with different sets of data). One is running the application with good ol' SQL Server 2000 and the other runs SQL Server 2005. The web server is IIS 6 in both cases.
Any idea what the problem is, or how to go about solving this kind of problem?
Thanks.
Sebastien
Edit:
The problem came from memory fragmentation. Some ASP pages were used to download files from the server. File sizes could range from a few KB to more than 2 GB. These variations in size induced memory fragmentation. The ASP pages could also take quite some time to execute (the time for the user to download the file, minus what is put in cache at IIS's level), which is not really standard for server pages, which should execute quickly.
This is what I did to improve things:
- Put all the download logic in a single ASP page with session state turned off.
- That allowed me to put that ASP page in a dedicated application pool that could be recycled every so often (downloads no longer disturb the rest of the application).
- Turned on the LFH (Low Fragmentation Heap), which is not enabled by default on Windows 2003, in order to reduce memory fragmentation.
References for LFH:
http://msdn.microsoft.com/en-us/library/aa366750(v=vs.85).aspx
Link (there is a DLL there that you can use to turn on LFH, but the article is in French. You'll have to learn our beautiful language now!)
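For reference, enabling the LFH boils down to a single Win32 call, HeapSetInformation, made inside the target process (which is presumably what the DLL from the French article does inside the IIS worker process). A minimal C# P/Invoke sketch of that call, affecting only the calling process's default heap:

    using System;
    using System.Runtime.InteropServices;

    static class LowFragmentationHeap
    {
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr GetProcessHeap();

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool HeapSetInformation(
            IntPtr heapHandle,
            int heapInformationClass,      // 0 = HeapCompatibilityInformation
            ref uint heapInformation,
            UIntPtr heapInformationLength);

        // Asks Windows to use the low-fragmentation heap for this process's default heap.
        public static bool EnableOnProcessHeap()
        {
            uint lfh = 2; // 2 = enable the LFH
            return HeapSetInformation(GetProcessHeap(), 0, ref lfh, (UIntPtr)4u);
        }
    }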

I noticed the same thing on a classic ASP + AJAX application that I worked on. Using Timer, I timed the page load at 153 milliseconds, but the Firebug waterfall chart randomly says 3.5 seconds. The Timer output is written on the response, and the waterfall chart claims the time is spent with Firefox waiting for a response from the server. Because the waterfall chart also shows the response, I can compare it to the Timer output, and there's a huge discrepancy "every so often".

Can you establish whether this is a problem for all pages or a common subset of pages?
If it's a subset, examine what those pages have in common: for example, do they all use a specific COM DLL that other pages don't?
Does this problem affect multiple clients or just a few?
In other words, is there an issue with a specific browser or OS version?
Is this public or intranet?
Can you reproduce the problem from a client you own?

Is there any chance there are some full-text search queries running on SQL Server?
Because if so, and if SQL Server has no access to the internet, it may cause a 45-second delay every few hours or so when it tries to check its certificates (though this does not apply to SQL Server 2000).
For a detailed explanation of what I'm referring to, read this.

Are any other apps running on your web server? If so, is your problematic application in the same app pool as any of them? If so, try creating a dedicated app pool for it. Maybe one of the other apps is having a problem and is adversely affecting yours.

One thing to watch out for: if you have server-side debugging turned on in IIS, the web server will run ASP in single-threaded mode.
So if you try to load a page and someone else has hit that URL at the same time, you will be queued up behind them. It will seem like pages take a long time to load, but it's simply because the server is serving page requests in a single-file line and sometimes you aren't at the front of the line.
You may have turned this on for debugging and forgotten to turn it off for production.
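One way to check whether server-side ASP debugging is on is to read the AppAllowDebugging metabase property. A sketch only, assuming IIS 6 with the ADSI provider available and the default web site at W3SVC/1:

    using System;
    using System.DirectoryServices; // add a reference to System.DirectoryServices

    class CheckAspDebugging
    {
        static void Main()
        {
            // Adjust the path for your site id / virtual directory.
            using (var root = new DirectoryEntry("IIS://localhost/W3SVC/1/Root"))
            {
                Console.WriteLine("AppAllowDebugging   = {0}", root.Properties["AppAllowDebugging"].Value);
                Console.WriteLine("AppAllowClientDebug = {0}", root.Properties["AppAllowClientDebug"].Value);
            }
        }
    }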

Related

Chrome network timing, how to improve Content Download

I was checking XHR call timings in Chrome DevTools to improve slow requests, but I found out that 99% of the response time is spent on content download, even though the content size is less than 5 KB and the application is running on localhost (I'm working on my local machine, so there are no network issues).
But when replaying the call using the Replay XHR menu, the Content Download period drops dramatically from 2.13 s to 2.11 ms (as shown in the screenshots below). Data is not cached at the browser level.
Example of Call Timing
Same Example Replayed
Can someone explain why the content download timing is slow and how to improve it?
The application is an ASP.NET MVC 5 solution combined with AngularJS.
The Web Server Details:
- Windows Server 2012 R2
- IIS 8
Thank you in advance for your support!
I can't conclusively tell you the cause of this, but I can offer some variables that you can investigate, which might help you figure out what's going on.
Caching
I know you said that the data is not getting cached at the browser level, but I'd suggest checking that again, because an initial request that takes 2 s followed by a repeat request that only takes 2 ms really does sound like caching.
How to check:
Go to Network panel.
Look at the Size column for the request. If you see "from memory cache" or "from disk cache", it was served from the cache.
Slow development server or machine
My initial thought was that you're doing more work on your development machine than it can handle. Maybe the server requires more resources than your machine can handle. Maybe you have a lot of other programs running and your memory / CPU is getting maxed.
How to check:
Run your app on a more powerful server and see if the pattern persists.
Frontend app is doing too much work
I'm not sure this last one actually makes sense, but it's worth a check. Perhaps your Angular app is doing a crazy amount of JS work during the initial request, and it's maxing out your CPU. So the entire browser is stalling when you make the initial request.
How to check:
Go to Performance panel.
Start recording.
Do the action that causes your app to make the initial request.
Stop recording.
Check the CPU chart. If it's completely maxed out, then your app is indeed doing a bunch of work.
Please leave a comment and let me know if any of these helped.
I have also been investigating this issue in Chrome (currently 91.0.4472.164), as the content download times appear to be vastly different depending on the context of the download. When going directly to a resource, or when attempting to update rendered content as the result of a web call, the content download can take up to 10x as long as the same request made from other client applications, or as simply saving the data off as a variable in Chrome.
I created a quick, hacky Spring Boot web application that demonstrates the problem that I have made public on github: https://github.com/zielinskin/h2-training-simple
The steps in the readme should hopefully be sufficient to demonstrate the vast performance differences.
I believe that Chrome will need to resolve this performance issue, as it has nothing to do with the web server or UI framework being used.
The "Content Download" includes both the time taken to download the content and also the time for the server to upload the content. You can test out the following cases to see what is the cause. Usually it is a combination of all them.
Case 1: server delay
Assume the server and client run on localhost with zero network delay and a small payload:
- t=0: client receives a response with header content-length = 20
- t=5: server sends 10 bytes of data to the client
- t=5: client receives the data
Case 2: network delay
Use hard-coded dummy data to rule out server time:
- t=0: client receives a response with header content-length = 20
- t=0: server sends 10 bytes of data to the client
- t=5: client receives the data
Case 3: client is too busy
Isolate the query by trying something like curl google.com -v in a terminal to access the URL directly. You can also use Chrome DevTools or the Firefox developer tools to copy the request (e.g. "Copy as cURL").
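As a complementary check, here is a minimal C# sketch (the URL is a placeholder) that times how long the full response body takes to download from a plain .NET client; if this is consistently fast while the browser is slow, the issue is on the client side rather than the server or network:

    using System;
    using System.Diagnostics;
    using System.Net.Http;
    using System.Threading.Tasks;

    class DownloadTimer
    {
        static async Task Main()
        {
            // Placeholder: replace with the slow XHR endpoint you are investigating.
            const string url = "http://localhost/api/slow-endpoint";

            using (var client = new HttpClient())
            {
                var watch = Stopwatch.StartNew();
                byte[] body = await client.GetByteArrayAsync(url);
                watch.Stop();

                Console.WriteLine("Downloaded {0} bytes in {1} ms", body.Length, watch.ElapsedMilliseconds);
            }
        }
    }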

Laravel Session File gigantic

I am using Laravel 5 for my web application.
Since running it for over a week, the sessions are stored as files of over 9 MB each, instead of the 1 KB they used to be.
The CPU is running at 99% all the time and the server is not responding anymore. What causes this enormous file size and what do I need to do to reduce it?
Thanks!
You can play around with the session settings in config/session.php; specifically, the lottery setting might help you out.
You can also switch the session driver if your system is unable to cope with the files. Depending on what you actually store in your sessions and the size of your application, it might be beneficial to switch to a different session driver. Available options can be found here: http://laravel.com/docs/5.1/session#introduction

How to ensure that a bot/scraper does not get blocked

I coded a simple scraper whose job is to go to several different pages of a site, do some parsing, call some URLs that are otherwise called via AJAX, and store the data in a database.
The trouble is that sometimes my IP is blocked after my scraper executes. What steps can I take so that my IP does not get blocked? Are there any recommended practices? I have added a 5-second gap between requests, to almost no effect. The site is medium-big (I need to scrape several URLs) and my internet connection is slow, so the script runs for over an hour. Would being on a faster connection (like on a hosting service) help?
Basically I want to code a well-behaved bot.
Lastly, I am not POSTing or spamming.
Edit: I think I'll break my script into 4-5 parts and run them at different times of the day.
You could use rotating proxies, but that wouldn't be a very well-behaved bot. Have you looked at the site's robots.txt?
Write your bot so that it is more polite, i.e. don't sequentially fetch everything, but add delays in strategic places.
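For illustration, a minimal C# sketch of such a polite fetch loop (the URLs, user agent, and delay are hypothetical; tune them to whatever the site's robots.txt and terms allow):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class PoliteScraper
    {
        static async Task Main()
        {
            // Hypothetical list of pages to fetch.
            string[] urls =
            {
                "https://example.com/listing?page=1",
                "https://example.com/listing?page=2"
            };

            using (var client = new HttpClient())
            {
                // Identify yourself so the site owner can contact you instead of blocking you.
                client.DefaultRequestHeaders.TryAddWithoutValidation(
                    "User-Agent", "MyPoliteBot/1.0 (contact: me@example.com)");

                foreach (string url in urls)
                {
                    string html = await client.GetStringAsync(url);
                    Console.WriteLine("{0}: {1} bytes", url, html.Length);

                    // Pause between requests; lengthen or randomize this if you still get blocked.
                    await Task.Delay(TimeSpan.FromSeconds(10));
                }
            }
        }
    }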
Following the guidelines set in robots.txt is a good first step. There are tools such as import.io and morph.io, and there are also packages/plugins for servers. For example x-ray, a node.js package, has options to assist in quickly writing responsible scrapers, e.g. throttling, delays, max connections, etc.

Precise tracing of IIS and/or SQL

I am experiencing a performance bottleneck in this website: http://oceanosdecolor.es/ and I'm not able to find it. If you try, you'll see that any page (for example, the homepage) takes a long time to load.
The first time you execute a page, the site reloads to detect the client device, but that only happens the first time; after that it remembers the client device and doesn't reload again.
I log traces of the execution to a database, but I don't get useful information: according to the log, the execution of the whole homepage happens within the same 1 second, yet I can see that the homepage takes more time than that to load.
The IIS log (when trying locally) doesn't help either, as it also gives times in seconds, not milliseconds, and again it says everything happens in the same second; in any case, running locally is much faster than on the server.
So I'm asking for help with any tool that can monitor performance with more accuracy, or any technique I could use.
Thank you
I think your answer might not lie with IIS or SQL Server. According to the Developer Tools in Chrome, your actual page execution and sending out the HTML takes 400ms on first load from my location. The problem is you have a tangle of CSS files (many of which are not being found and causing extremely long delays). Also you have a lot of requests.
I would install Yahoo's YSlow for your favourite browser. This will give you a whole bunch of recommendations for what is running slow on your site from an end-user perspective.
To use the Developer Tools on Chrome: right click on your page, hit "Inspect Element" and then go to the "Network" tab and then hard-refresh your browser (shift-F5).
A few of the problems I see are: commun.css (2.5 seconds and failed), layout.css (400ms and failed), jquery-ui-1.8.10.custom.min.js (800ms and failed).
Find the reason these are failing, fix it, and I'm sure your site will load faster. Also try to use CSS image sprites wherever possible to cut down on the number of requests.

Optimizing the PDF Export of Huge Reports in Sql Reporting Services 2005

First off, I understand that it is a horrible idea to run extremely large/long-running reports. I am aware that Microsoft has a rule of thumb stating that an SSRS report should take no longer than 30 seconds to execute. However, sometimes gargantuan reports are a preferred evil due to external forces such as complying with state laws.
At my place of employment, we have an ASP.NET (2.0) app that we have migrated from Crystal Reports to SSRS. Due to the large user base and complex reporting UI requirements, we have a set of screens that accepts user-entered parameters and creates schedules to be run overnight. Since the application supports multiple reporting frameworks, we do not use the scheduling/snapshot facilities of SSRS. All of the reports in the system are generated by a scheduled console app, which takes the user-entered parameters and generates each report with the reporting solution it was created with. In the case of SSRS reports, the console app generates the reports and exports them as PDFs via the SSRS web service API.
So far SSRS has been much easier to deal with than Crystal, with the exception of a certain 25,000-page report that we have recently converted from Crystal Reports to SSRS. The SSRS server is a 64-bit Windows Server 2003 machine with 32 GB of RAM running SSRS 2005. All of our smaller reports work fantastically, but we are having trouble with our larger reports such as this one. Unfortunately, we can't seem to generate the aforementioned report through the web service API. The following error occurs roughly 30-35 minutes into the generation/export:
Exception Message: The underlying connection was closed: An unexpected error occurred on a receive.
The web service call is something I'm sure you all have seen before:
    // Renders the whole report in one call and returns it as a single byte array:
    data = rs.Render(this.ReportPath, this.ExportFormat, null, deviceInfo,
        selectedParameters, null, null, out encoding, out mimeType, out usedParameters,
        out warnings, out streamIds);
The odd thing is that this report will run/render/export if it is run directly on the reporting server using Report Manager. The proc that produces the data for the report runs for about 5 minutes. The report renders in SSRS's native format in the browser/viewer after about 12 minutes. Exporting to PDF through the browser/viewer in Report Manager takes an additional 55 minutes. This works reliably and it produces a whopping 1.03 GB PDF.
Here are some of the more obvious things I've tried to get the report working via the web service API:
- set the HttpRuntime ExecutionTimeout value to 3 hours on the report server
- disabled HTTP keep-alives on the report server
- increased the script timeout on the report server
- set the report to never time out on the server
- set the report timeout to several hours on the client call (see the sketch below)
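For reference, a minimal sketch of that last client-side tweak, assuming rs is the ReportExecutionService proxy generated from the SSRS web service WSDL (the proxy inherits from SoapHttpClientProtocol, whose Timeout property is in milliseconds):

    using System.Threading;
    using System.Web.Services.Protocols;

    static class ReportClientSetup
    {
        // rs is the generated ReportExecutionService proxy.
        public static void RelaxClientTimeout(SoapHttpClientProtocol rs)
        {
            // Timeout.Infinite (-1) removes the client-side limit entirely;
            // alternatively use an explicit value such as three hours:
            // rs.Timeout = 3 * 60 * 60 * 1000;
            rs.Timeout = Timeout.Infinite;
        }
    }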
From the tweaks I have tried, I am fairly comfortable saying that any timeout issues have been eliminated.
Based on my research of the error message, I believe that the web service API does not send chunked responses by default. This means that it tries to send all 1.3 GB over the wire in one response. At a certain point, IIS throws in the towel. Unfortunately, the API abstracts away the web service configuration, so I can't seem to find a way to enable response chunking.
Does anyone know of any way to reduce/optimize the PDF export phase and/or the size of the PDF without lowering the total page count?
Is there a way to turn on response chunking for SSRS?
Does anyone else have any other theories as to why this runs on the server but not through the API?
EDIT: After reading kcrumley's post I began to look at the average page size by taking file size / page count. Interestingly enough, on smaller reports the math works out so that each page is roughly 5 KB, but as the report gets larger this "average" increases. An 8,000-page report, for example, averages over 40 KB/page, and the 25,000-page report above works out to about 41 KB/page (1.03 GB / 25,000 pages). Very odd. I will also add that the number of records per page is fixed, except for the last page in each grouping, so it's not a case where some pages have more records than others.
We narrowed down the large PDF exports from SSRS and found 2 main culprits:
1) Unless images are JPG or PNG colour type 3, they are expanded to BMPs. See here.
2) Unless you configure SSRS to behave otherwise (not recommended), SSRS will embed fonts or font subsets into the PDF, unless they are one of the 5 "standard" PDF fonts.
Although none of the standard fonts (other than Symbol, I guess) are installed on most Windows OSes out of the box, we've found that if you use Times New Roman, Courier New, or Arial then forward and reverse font substitution will take place.
The easiest way to convert your RDLs is to view them as XML and search and replace the FontFamily tags (a rough sketch of doing this programmatically follows the tips below).
If you have to use a non-standard font, you can still minimize the damage:
- Use as few fonts as you can. Search through the RDL XML to make sure there aren't any redundant fonts.
- Use TTF fonts if you use different sizes of the font.
- Try not to mix normal, bold and italic variants of the font, or it will be embedded multiple times.
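A minimal sketch of the search-and-replace mentioned above, using LINQ to XML (the file name and target font are hypothetical; back up the RDL first):

    using System;
    using System.Linq;
    using System.Xml.Linq;

    class RdlFontFixer
    {
        static void Main()
        {
            // Hypothetical path and font; adjust for your own report.
            const string rdlPath = "MyHugeReport.rdl";
            const string targetFont = "Times New Roman"; // one of the fonts that avoids embedding

            XDocument rdl = XDocument.Load(rdlPath);

            // RDL elements are namespace-qualified, so match on the local name.
            foreach (XElement font in rdl.Descendants()
                                         .Where(e => e.Name.LocalName == "FontFamily")
                                         .ToList())
            {
                Console.WriteLine("Replacing '{0}' with '{1}'", font.Value, targetFont);
                font.Value = targetFont;
            }

            rdl.Save(rdlPath);
        }
    }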
Does anyone know of any way to reduce/optimize the PDF export phase and/or the size of the PDF without lowering the total page count?
I have a few ideas and questions:
1. Is this a graphics-heavy report? If not, do you have tables that start out as text but are converted into a graphic by the SSRS PDF renderer (check if you can select the text in the PDF)? 41K per page might be more than it should be, or it might not, depending on how information-dense your report is. But we've had cases where we had minor issues with a report's layout, like having a table bleed into the page's margins, that resulted in the SSRS PDF renderer "throwing up its hands" and rendering the table as an image instead of as text. Obviously, the fewer graphics in your report, the smaller your file size will be.
2. Is there a way that you could easily break the report into pieces? E.g., if it's a 10-location report, where Location 1 is followed by Location 2, etc., on your final report, could you run the Location 1 portion independent of the Location 2 portion, etc.? If so, you could join the 10 sub-reports into one final PDF using PDFSharp after you've received them all. This leads to some difficulties with page numbering, but nothing insurmountable.
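A minimal sketch of that merge step with PDFsharp (file names are hypothetical; it assumes the sub-reports have already been rendered and saved to disk):

    using PdfSharp.Pdf;
    using PdfSharp.Pdf.IO;

    class MergeSubReports
    {
        static void Main()
        {
            // Hypothetical per-location sub-reports rendered separately.
            string[] parts = { "Location01.pdf", "Location02.pdf", "Location03.pdf" };

            var output = new PdfDocument();
            foreach (string part in parts)
            {
                // Import mode is required to copy pages between documents.
                PdfDocument input = PdfReader.Open(part, PdfDocumentOpenMode.Import);
                foreach (PdfPage page in input.Pages)
                    output.AddPage(page);
            }
            output.Save("FullReport.pdf");
        }
    }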
3. Does anyone else have any other theories as to why this runs on the server but not through the API?
My guess would be the sheer size of the report. I don't remember everything about what's an IIS setting and what's SSRS-specific, but there might be some overall IIS settings (maybe in Metabase.xml) that would have to be updated to even allow that much data to pass through.
You could isolate the question of whether the time is the problem by taking one of your working reports and building in a long wait time in your stored procedures with WAITFOR (assuming SQL Server for your DBMS).
Not solutions, per se, but ideas. Hope it helps.
Obviously, it's a huge report; in fact, it's closer to a 1.3 GB database than a report.
Have you thought of finding a way to split it into multiple pieces and then combining them? (Use one of the several different ways to combine PDFs listed on this site.)
