Unable to Upload Large File in ASP.NET Web Application (DotNetNuke), Web.Config Is Set for Size and Execution Time

I am an MS Azure VM user and am trying to get help with an issue where, even though my Web.config sets the correct file size/request limit (128MB) and execution time (9000), uploading a file (just 32MB) via the ASP.NET FileUpload control fails with a 403 error (file not found). I have tried everything and am stumped. I am an MCSD and MCSE, so I know my way around, and I am wondering if there is an Azure VM issue or configuration item I am missing. Any assistance finding a solution would be great. This is a critical issue that is preventing a software sale from going through, and I really need to find a fix before they decide to move away from our solution. We are a small startup, so paying MS $$$ for support is something we are trying to avoid if at all possible... Thank you in advance for your assistance...
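For reference, the relevant Web.config entries are set along these lines (values matching the 128MB/9000 figures above; note that maxRequestLength is in KB and maxAllowedContentLength is in bytes):

    <system.web>
      <httpRuntime maxRequestLength="131072" executionTimeout="9000" />
    </system.web>
    <system.webServer>
      <security>
        <requestFiltering>
          <requestLimits maxAllowedContentLength="134217728" />
        </requestFiltering>
      </security>
    </system.webServer>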

I suspect you are running into an issue with the size of the ASP.NET temporary folder. See http://blogs.msdn.com/b/kwill/archive/2011/07/18/how-to-increase-the-size-of-the-windows-azure-web-role-asp-net-temporary-folder.aspx for a complete solution showing how to configure an Azure web role to upload large files via the FileUpload control.
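The gist of that post, if memory serves (the names below are illustrative; follow the post for the exact steps), is to declare a larger local storage resource in ServiceDefinition.csdef and point ASP.NET's temporary folder at it:

    <WebRole name="MyWebRole">
      <LocalResources>
        <!-- Larger scratch space for ASP.NET's temp files, including buffered uploads -->
        <LocalStorage name="AspNetTemp" sizeInMB="2000" cleanOnRoleRecycle="true" />
      </LocalResources>
    </WebRole>

At role startup you then point the <compilation tempDirectory="..."/> setting at the path returned by RoleEnvironment.GetLocalResource("AspNetTemp").RootPath, so large uploads no longer exhaust the small default temp allocation.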
Also, in terms of paying for support, it is only $29 per month. If you spend even 30 minutes trying unsuccessfully to troubleshoot something yourself, then the price of support will pay for itself. Just something to think about...

Related

Host server bloating with fake session files created by the hundreds every minute on a PHP (7.3) website

I'm experiencing an unusual problem: my PHP (7.3) website is creating a huge number of unwanted session files on the server every minute (around 50 to 100 files). I noticed that all of them have a fixed size of 125K or 0K in cPanel's file manager, and the inode count is spiralling out of control into the thousands within hours and past a hundred thousand in a day, whereas my website really has small traffic of less than 3K visits a day, with the Google crawler on top of that. I'm denying all bad bots in .htaccess.
I'm able to control the situation with the help of a cron command that runs every six hours and cleans all session files older than 12 hours out of /tmp. However, this isn't an ideal solution, as the fake session files keep getting created in great numbers, eating my server resources: RAM, processor and, most importantly, storage, which keeps bloating and hurting overall site performance.
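The cleanup cron is essentially this one-liner (assuming PHP's default sess_ file prefix; 720 minutes = 12 hours):

    0 */6 * * * find /tmp -maxdepth 1 -type f -name 'sess_*' -mmin +720 -delete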
I opened many of these files to examine them, but found they were not associated with any valid user; I add the user's id, name and email to the session upon successful authentication. Even assuming a session is created for every visitor (without an account/login), it shouldn't go beyond 3K in a day, but the session count goes as high as 125,000+ in a single day. I couldn't figure out the glitch.
I've gone through the relevant posts and made checks such as adding the IP and User-Agent to sessions to track suspicious server monitoring, bot crawling or overwhelming proxy activity, but with no luck! I can also confirm from the timestamps that no human or crawler activity takes place when they are created. I can see files being created every single minute, without any break, throughout the day!
I haven't yet found any clue to the root cause of this weird behavior and would highly appreciate any help troubleshooting it! Unfortunately the server team was unable to help much beyond adding the clean-up cron. Below is the content of some example session files:
0K Sized> favourites|a:0:{}LAST_ACTIVITY|i:1608871384
125K Sized> favourites|a:0:{}LAST_ACTIVITY|i:1608871395;empcontact|s:0:"";encryptedToken|s:40:"b881239480a324f621948029c0c02dc45ab4262a";
Valid Ex.File1> favourites|a:0:{}LAST_ACTIVITY|i:1608870991;applicant_email|s:26:"raju.mallxxxxx@gmail.com";applicant_phone|s:11:"09701300000";applicant|1;applicant_name|s:4:Raju;
Valid Ex.File2> favourites|a:0:{}LAST_ACTIVITY|i:1608919741;applicant_email|s:26:"raju.mallxxxxx@gmail.com";applicant_phone|s:11:"09701300000";IP|s:13:"13.126.144.95";UA|s:92:"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:82.0) Gecko/20100101 Firefox/82.0 X-Middleton/1";applicant|N;applicant_name|N;
We found that the issue started after the hosting server's PHP version changed from 5.6 to 7.3. Notably, the unwanted session files were not created on PHP 7.0, and it's the same code base tested against all three versions. Posting this as it may help others facing a similar issue after a PHP version change.

"Respawning" Files in Cpanel

So, straight to the point: I'm trying to clean my host entirely (databases too), and after I delete the last two directories, wp-content and wp-includes (700MB of files), they get restored instantly. This may be a simple question, but to me it's very odd and I don't get it. Besides the file manager I used FileZilla too, and the same thing happens (my hosting company, useless as it is, failed to give me a reply after 48 hours).
I have recorded a short video of my problem to help you better understand my issue.
https://www.youtube.com/watch?v=wqL35R0-vvw&feature=youtu.be
Hope you'll be able to help me. Thank you!
I'm working on this website for an NGO after it was hacked, and for now I want to wipe every single file from the server and rebuild it, but the files which contain infected pages (PHP scripts) won't get deleted.
Chances are very good that some of those files are owned by the webserver user, especially if you were compromised via a WordPress vulnerability. Because they're owned by the webserver and not by your user, you're unable to delete them.
If you have root/sudo access, you can use it on the command line to remove them. If you don't, you'll need your host's help.
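For example, something along these lines (www-data is an assumption; the webserver account name varies by host, so check the owner first):

    ls -l wp-content wp-includes         # confirm the stubborn files are owned by the webserver user (e.g. www-data)
    sudo rm -rf wp-content wp-includes   # then remove them with elevated privileges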

App Engine backup never finishes; only clue is a failure in the MapReduce worker_callback

Over the last few weeks we have repeatedly failed to complete a backup of the datastore using the Datastore Admin tool. We thought the issues had to do with quota errors we were running into, so we switched our application from a free to a paid app, but we still have problems.
Each time we attempt to back up to the Blobstore, the process never finishes. We see the backup in our Pending Backups list, but it never actually completes. We only have a total of 43MB of data right now, so we don't see it as a data-transfer problem. Looking at our default task queue, we have two pending tasks: one is a call to /_ah/mapreduce/controller_callback and the other is a call to /_ah/mapreduce/worker_callback.
The worker_callback racks up its retry count, and the only error clue we have is on the Previous Run tab, which shows the last HTTP response code to be 500. There is no error message and nothing shows up in our error logs; it just keeps retrying over and over again.
We've been able to narrow the backup problem down to a specific entity kind in a particular namespace, but we can't figure out why that entity kind fails while the others do not. The major difference is that this entity kind has a large number of embedded entities, but if App Engine can read/put those entities, we can't understand why it has trouble backing them up. The namespace in which the error occurs stores the most data for that entity kind compared to the other namespaces we have set up.
We think that if we could see what error occurs in the worker_callback, we might be able to figure out why the backup is failing, or what is wrong with our data that prevents the backup. Is there something we need to set up or enable through settings/configuration files to get more detailed information on the backup? Or is there some other avenue we should explore to investigate and fix this problem?
I should mention that we are using the Java SDK along with Objectify V3 to work with the datastore, and that we are backing the data up to the Blobstore.
Thank you.
Well, with the App Engine team's help we figured out what the problem was and worked around it. I want to give the details in case anyone else runs into this problem.
In issue 8363 the App Engine team indicated that their logs showed the MapReduce failing because of the large number of properties our entity kind had. The specific entity kind causing the failure had a large number of variable properties, which generated errors when the MapReduce tried to write out a schema. They indicated that the fix on their end was to have the backup ignore entities like this so that the backup could complete successfully.
What we did to work around the issue and make the backup work was to change how we told Objectify to store our data. The large number of properties came from our use of the @Embedded annotation on a HashMap member field. Since @Embedded breaks classes down into individual components, it was generating a large number of properties. We switched the member field to @Serialized and then ran a conversion process to migrate the data to the new serialized property. This made backup/restore work again.
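For illustration, the change looked roughly like this (Objectify V3 annotations; the entity and field names here are invented, not our actual model):

    import java.io.Serializable;
    import java.util.HashMap;

    import javax.persistence.Id;

    import com.googlecode.objectify.annotation.Serialized;

    public class Profile {
        @Id Long id;

        // Before: @Embedded HashMap<String, Preference> preferences;
        // Objectify flattened every map entry into its own datastore property,
        // which is what blew up the schema the backup MapReduce tried to write.

        // After: the whole map is stored as a single opaque blob property.
        // The value type must implement Serializable.
        @Serialized HashMap<String, Preference> preferences = new HashMap<String, Preference>();

        public static class Preference implements Serializable {
            String value;
        }
    }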
You can read more about the differences between @Embedded and @Serialized on Objectify's website.
snielson, would you mind opening an issue on our public issue tracker here? Remember to add your application ID so we can debug this specific scenario further.
Thanks!

Silverlight deployment to browser

I have an application for a huge business, which needs many pages, controls, etc. The .xap file easily reaches 50MB. I notice that every time I load the page, the .xap file gets downloaded again. However, my users may connect over a 3G network, so it will be very slow if we download the app every time they open the page. I was wondering whether there is some way to do the deployment the way WPF does, downloading only when the version has changed...
Any other suggestions to improve the loading speed are welcome.
Thanks a lot.
First and foremost, get your web server's caching headers sorted. Typically you open the ClientBin folder in IIS Manager and go to the HTTP Response Headers section. Set the expiry to something like one day (or, if you update during normal working hours, 15 minutes). Note that just because the content has expired doesn't mean it will be re-downloaded; it means it will be revalidated before being used. The browser informs the server of the version it currently holds, allowing the server to simply respond with "go ahead and use that, it hasn't changed since the last time you checked".
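That revalidation exchange looks roughly like this (the path and date are illustrative):

    GET /ClientBin/App.xap HTTP/1.1
    If-Modified-Since: Tue, 01 May 2012 09:00:00 GMT

    HTTP/1.1 304 Not Modified

The 304 response carries no body, so an unchanged 50MB xap costs one round trip instead of a full re-download.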
For such a large system you should seriously consider dividing the app up into multiple DLL projects, then using the Application Library Caching feature found in the main app project's properties. You need to create the appropriate .extmap.xml file for each of your DLLs; many of the SDK and Toolkit DLLs have them already. This results in separate .zip files for those DLLs being placed in the ClientBin folder rather than being incorporated into one large xap. That lets you separate slow-moving/never-changing code into one set of zips and more frequently changing business code into another. When you update the app, you only update the changed zips, reducing the download burden of a new version. (Note this only works with in-browser apps.)
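An .extmap.xml for one of your own DLLs looks roughly like this (the assembly name and key token below are placeholders; the assembly must be strong-named, and the file sits next to the DLL, named MyCompany.Common.extmap.xml):

    <?xml version="1.0"?>
    <manifest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <assembly>
        <name>MyCompany.Common</name>
        <version>1.0.0.0</version>
        <publickeytoken>0123456789abcdef</publickeytoken>
        <relpath>MyCompany.Common.dll</relpath>
        <extension downloadUri="MyCompany.Common.zip" />
      </assembly>
    </manifest>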
In the Silverlight project options, check "Reduce XAP size by using application library caching".

FogBugz database schema management

This is a very simple question, and maybe the man himself can provide insight on this :)
Does anyone know the pseudocode behind how Fog Creek does database schema management?
I'm running into an issue and I'm trying to figure out whether I'm handling it right. I have a module that runs each time someone spins up their site and examines their database to make sure it has the right changes in place; if changes are missing, the script applies them.
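Concretely, the usual shape of such a module is a stored version number plus ordered upgrade steps. Here's a minimal sketch (in Java/JDBC just for concreteness; the schema_version table and DDL are invented, and exact syntax varies by database):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class SchemaUpgrader {

        // One statement per version step: UPGRADES[0] takes the schema from v0 to v1, etc.
        private static final String[] UPGRADES = {
            "CREATE TABLE widget (id INT PRIMARY KEY, name VARCHAR(100))",  // v0 -> v1
            "ALTER TABLE widget ADD COLUMN created_at TIMESTAMP"            // v1 -> v2
        };

        public static void upgrade(Connection conn) throws SQLException {
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS schema_version (version INT NOT NULL)");

                // Read the version this database is currently at.
                int current = 0;
                boolean found = false;
                try (ResultSet rs = st.executeQuery("SELECT version FROM schema_version")) {
                    if (rs.next()) {
                        current = rs.getInt(1);
                        found = true;
                    }
                }
                if (!found) {
                    st.execute("INSERT INTO schema_version (version) VALUES (0)");
                }

                // Apply every step this database hasn't seen yet, recording each one.
                for (int v = current; v < UPGRADES.length; v++) {
                    st.execute(UPGRADES[v]);
                    st.execute("UPDATE schema_version SET version = " + (v + 1));
                }
            }
        }
    }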
My issue is that I was trying to tie it to the Session_Start portion of Global.asax, but that seems rather flaky at times, and I'm trying to come up with a better approach.
For reference, I'm trying to run one web application that can respond to any number of hosts, where each host maps via a metabase to find out which database it belongs to and then makes the necessary connections.
You might have more luck asking this on http://fogbugz.stackexchange.com/
