I have recently upgraded my website to DotNetNuke (DNN) 9.4.1, but I am now having a performance issue: the website runs slowly. I have searched for this, applied the performance configuration in the server settings, and set up caching at the page level.
I have also minified the JS and CSS files and updated the relevant setting value in the HostSettings table.
Thanks a lot in advance.
Check the DNN Scheduler to see if any active jobs are taking longer than they should. For example, if the Site Crawler job is constantly running, check the files under the Portals folder to make sure everything in there actually belongs there. The crawler rebuilds the search index, and if you have a lot of files it can take hours to complete. If the files do belong there, disable the crawler job and run it during your slowest time of day (1:00 AM?). I ran into this problem on a server that had hundreds of thousands of documents in the Portals folder. I ended up solving it by running the crawler between 1:00 AM and 5:00 AM for a few days until it had indexed all of the files. Once the files are indexed it only has to index changed and new files, so it should only be a burden the first time it runs.
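If you'd rather check from the database than from the Host > Schedule page, a query along these lines can surface long-running jobs. This is only a sketch: it assumes the standard Schedule/ScheduleHistory tables, no object qualifier, and placeholder server/database names.

REM List the 20 slowest recent scheduler runs (run against the DNN database)
sqlcmd -S YourSqlServer -d YourDnnDb -Q "SELECT TOP 20 s.TypeFullName, h.StartDate, h.EndDate, DATEDIFF(second, h.StartDate, h.EndDate) AS Seconds, h.Succeeded FROM ScheduleHistory h JOIN Schedule s ON s.ScheduleID = h.ScheduleID ORDER BY Seconds DESC"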
Another possible cause is exceptions. If your site is throwing a large number of exceptions it will slow down. Handling the exceptions and then logging them (to the DNN EventLog table in the database and to the Log4Net files) can be brutal if your site is constantly throwing them. If your site is also running in DEBUG mode, the performance hit is multiplied by at least 30 times, because .NET collects all of the additional information about each exception while running in debug mode. That will be brutal for your site's performance.
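To rule DEBUG mode out, check that the compilation element in web.config has debug switched off; this is the standard ASP.NET setting:

<!-- web.config: make sure debug compilation is off in production -->
<system.web>
  <compilation debug="false" />
</system.web>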
Check the server logs to see how often IIS is recycling the application pool for your DNN site. If it is happening often, that is another sign of a large number of exceptions being thrown, assuming you are using the default IIS application pool settings: by default, IIS will recycle your application pool if too many errors occur within a short period of time. If you also have overlapped recycling enabled (IIS brings up a new instance of your site and runs it side by side before terminating the existing instance), then recycles triggered by a stream of exceptions can become a bottleneck and cripple performance. In this situation I usually stop IIS from recycling the application pool when too many exceptions are thrown within a short period. That may not be the best option for you, but if you stay on top of the exceptions being thrown on the site, you can disable that and keep letting IIS run instances side by side after an app recycle. Overlapped recycling is nice to have when you recycle during active periods: existing traffic completes against the old instance, all new traffic goes to the new instance, and once all traffic is hitting the new instance IIS terminates the old one.
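On IIS 7 and later, both settings can be flipped from an elevated command prompt with appcmd; a sketch, assuming a pool named "DnnPool" (substitute your own pool name):

REM Stop IIS from shutting the pool down after repeated failures
%windir%\system32\inetsrv\appcmd set apppool "DnnPool" /failure.rapidFailProtection:false

REM Keep overlapped recycling on: the old instance serves until the new one is ready
%windir%\system32\inetsrv\appcmd set apppool "DnnPool" /recycling.disallowOverlappingRotation:false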
If none of the above helps, run SQL Profiler against your database to see if there is any extreme database activity going on. Also check for any database locks.
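If you don't have Profiler to hand, the blocking DMVs give a quick read on locks; a sketch, with placeholder server and database names:

REM Show requests that are currently blocked and which session is blocking them
sqlcmd -S YourSqlServer -d YourDnnDb -Q "SELECT session_id, blocking_session_id, wait_type, wait_time, command FROM sys.dm_exec_requests WHERE blocking_session_id <> 0"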
There are a lot of possible causes that can slow down DNN. The best way to find out what is going on is to run a profiler on the server (Red Gate ANTS Performance Profiler or Telerik (Progress) JustTrace).
I have successfully deployed the MobileFirst Platform to Bluemix a number of times. I am able to create the container group for the operations console (MFP Server) and then push an app to this server. From there, I am able to install and run the app on my physical devices (a few Apple devices and a couple of Android devices). At this point everything works fine.
However, after some period of inactivity (about one day), the MFP runtime is no longer available on the server. I can still log into the operations console, but the runtime is missing. FYI, I am using a Cloudant DB service.
One thing I have noticed is that the container's "Created" time is shorter than the time that has actually passed since I created it. For example, I created a container 6 days ago, but the Bluemix console shows that it was created 4 days ago. Is it possible the container is getting recreated, thus losing connectivity to the DB or otherwise corrupting the MFP runtime?
I have a RESTful Angular app hosted on AWS, and I'm looking for a clean and quick deployment solution to put the new site live without taking down the previous one. I don't have much DevOps experience, so any advice would be great. The site is fully RESTful, so it's just static pages.
I was looking at setting up a Dokku-with-AWS-plugin solution, but I'm pretty sure it's overkill and may not even detect my app, because it's just static pages (no Node, Rails, etc.).
The best way to do this is to reconfigure the web server on the fly to point to the new application.
Install the new version of the app to a new location, update the web server config files to point to the new location, and reload the server.
In-flight requests will be satisfied by the old application, and all new requests will hit the new application, with no downtime between them save for the trivial delay while the web server refreshes (don't restart it, just signal it to reload its configuration files).
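With nginx, for example, the whole cutover is a config edit plus a graceful reload; a sketch, with placeholder paths and site names:

# Point the web root at the new release, then reload without dropping connections
sudo sed -i 's|/var/www/app-v1|/var/www/app-v2|' /etc/nginx/sites-available/mysite
sudo nginx -t && sudo nginx -s reload   # validate the config, then graceful reload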
Similarly, you can do this solely at the filesystem level by installing the new app in a directory parallel to the old one. Then:
mv appdir appdir.bak
mv appdir.new appdir
This is not zero downtime, but the downtime is very, very short: just the time it takes to rename the two inodes. Make sure that both the new and old directories are on the same filesystem, so that each mv is an instantaneous rename. The advantage is that you can trivially "undo" the operation the same way.
There IS a window where you have no app at all: for a fraction of a second there is no appdir, and you will serve up 404s for those few microseconds. So do it when the system is quiet. But it's trivial to script and do.
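A common refinement that closes even that window is to serve the app through a symlink and swap it atomically; a sketch, assuming GNU coreutils and release directories you've laid out yourself:

# appdir is a symlink to /srv/releases/v1; swap it to v2 atomically
ln -s /srv/releases/v2 appdir.tmp   # create the new link beside the old one
mv -T appdir.tmp appdir             # rename(2) is atomic: no 404 window at all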
We ended up going with TeamCity for our build/tests and deploying via Shipit.
https://github.com/shipitjs/grunt-shipit
https://www.jetbrains.com/teamcity/
Try using a git repo for live deployment: https://danbarber.me/using-git-for-deployment/
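The usual shape of that approach is a bare repo on the server with a post-receive hook that checks the pushed code out into the web root; a minimal sketch with made-up paths and a "main" branch:

# On the server: create a bare repo and a hook that deploys on every push
git init --bare /srv/site.git
cat > /srv/site.git/hooks/post-receive <<'EOF'
#!/bin/sh
# Check the pushed branch out into the live web root
GIT_WORK_TREE=/var/www/site git checkout -f main
EOF
chmod +x /srv/site.git/hooks/post-receive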
A simple solution is to use an ELB. This lets you deploy a new instance, deploy the code, test it, update the ELB to switch traffic to the new instance, and then remove the old instance.
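With the classic ELB CLI, that switch looks roughly like this (the load balancer name and instance IDs are placeholders):

# Put the new instance behind the load balancer, then pull the old one out
aws elb register-instances-with-load-balancer --load-balancer-name my-lb --instances i-0newinstance00000000
aws elb deregister-instances-from-load-balancer --load-balancer-name my-lb --instances i-0oldinstance00000000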
An easy solution is to always run two instances, a production and a staging. They should be identical and interchangeable (because they are going to switch roles). Assign an Elastic IP to your production instance. When it's time to update, copy the code onto staging, make sure it's working, and then attach the Elastic IP to staging. It is now production, and production is now staging. This is not an ideal solution, but it is very easy, and the same principles apply to better solutions.
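The Elastic IP switch itself is a single call; a sketch with placeholder IDs (in a VPC you re-associate by allocation ID):

# Move the public IP from the old production box to the freshly updated one
aws ec2 associate-address --instance-id i-0stagingbox0000000 --allocation-id eipalloc-0123456789abcdef0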
A better solution involves an Elastic Load Balancer. Make sure you have two instances attached. When it is time to update, detach one instance, perform your update, make sure it is working, and reattach it. There will be a brief window where a client could get either your new website or your old one. Then detach the other old node, perform the update, and reattach it.
The fact of the matter is that even if you just overwrite the files on the live server, there will only be a 10ms window or so where a client could get the new version of one file (e.g. the HTML) and the old version of another (e.g. the CSS). After that it will be consistent again.
We have a strange problem. We have a ClickOnce-deployed application at a customer site that is experiencing slowness. It happens every time they launch the application, regardless of whether new updates have been applied, so it has nothing to do with first-time loading slowness. The target framework is .NET 4.5 and the application itself is a WPF application.
If we execute the .exe directly from where the ClickOnce install puts the files, there is no delay whatsoever.
As far as I can see, there is nothing special in our code that is specific to ClickOnce installation.
Any ideas?
When a ClickOnce application starts up, it checks for updates. Where do you have the updates stored? That may be where the slowness is coming in.
On the Publish tab, under Updates, you can specify how often your app checks for updates. It can check for updates after the application starts, which speeds up start time; however, updates will then not be installed until the next time your app runs.
My WPF application currently only shows a screen with some controls; it doesn't connect to a DB or have any other functionality. It's a simple UI screen.
When I was testing on some computers (WinXP SP2), I found that it took more than 15 seconds to start up. They were all in our domain.
I grabbed a similar computer with only Windows installed, and the application took 2 seconds to start up.
Then I added the computer to our domain and tested it with a domain user: it also took 15 seconds to start up. I tested again with the previous (local) user and it was still fast. I created another local user, but it took the same 15 seconds as the domain user.
I've added other local users but they were also slow.
To summarize: the application starts fast (2 s) for only one user, the first one I tested. For all other users (domain or local) it is slow (15 s).
I've been checking Improving WPF applications startup time, but my problem seems to need a different approach. Can anyone figure out what might be happening?
I found another solution to this problem in this documentation from Microsoft.
Adding the following configuration to the app.config file will also solve the problem:
<configuration>
  <runtime>
    <generatePublisherEvidence enabled="false"/>
  </runtime>
</configuration>
This way, you don't need to change computer configurations. It's just configuration of the application.
UPDATE:
Seems that .NET 4.0 fixed this issue, as documented here on MSDN.
Is the system connected to a network but unable to reach the internet because the proxy is not configured? If so, go to Internet Settings (i.e. Internet Explorer properties), open Advanced, and look in the tree view under Security for the checkbox "Check for publisher's certificate revocation" (I'm using German Windows, so the exact English label may differ slightly). Uncheck it and test again.
If this fixes the problem, you have a signed assembly that is not from Microsoft, for which the .NET Framework checks for certificate revocation and times out after 15 seconds. If you disable the check or configure the internet connection properly, you won't have to wait.
Does it open a file or interact with the network in some way? If not, I would suggest that whether you're logged into a domain or running as a local user is probably a red herring.
Are you building in debug or release mode? It's worth trying release mode if you haven't already, because running in debug does a load of extra error checking.
Have you checked if there are any domain policies that can affect this scenario?
I still had this problem (.NET 4.5). In my case, the computer was not connected to the internet, but there were other devices (cameras etc.) connected via GigE. The startup of every .NET application was delayed by about 20 seconds.
The solution was quite easy: just connect the computer to the internet once and start any .NET application (the first startup took about 7 seconds); after that, every startup was fast, even when the computer was no longer connected to the internet. In addition, I had to disable the TCP/IPv6 protocol (it caused a 3-5 second delay).
Another possible solution is to open the Properties for "Internet Protocol Version 4 (TCP/IPv4)", select Advanced, go to the "WINS" tab, and set "Disable NetBIOS over TCP/IP".
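If you prefer to make those changes from an elevated console, something like the following should work; a sketch only, and note that the WMI call hits every IP-enabled adapter while the adapter name in the second command is a placeholder:

REM Disable NetBIOS over TCP/IP on all IP-enabled adapters (2 = disable)
wmic nicconfig where "IPEnabled=true" call SetTcpipNetbios 2

REM Unbind IPv6 from one adapter (PowerShell, Windows 8 / Server 2012 and later)
powershell -Command "Disable-NetAdapterBinding -Name 'Ethernet' -ComponentID ms_tcpip6"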
When my classic ASP application makes a bad call to SQL Server data, I get this message across my entire site: "Service Unavailable". The site has stopped. My site is on a remote host, and I don't know what to do. What can I tell their "support team" so they can fix it?
If you check Administrative Tools > Event Viewer > Application log, you will probably see an error message.
This should give you more information as to why the application pool or IIS died.
If you paste this into your question we should be able to narrow things down a bit.
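If you only have console access to the box, the same entries can be pulled with wevtutil; a sketch:

REM Show the 20 most recent error-level entries from the Application log
wevtutil qe Application /c:20 /rd:true /f:text /q:"*[System[(Level=2)]]"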
Whenever there are a number of consecutive errors in your ASP.NET pages, the application pool may shut down. There's a tolerance level, typically 5 errors in 10 minutes or so; beyond it, IIS stops the application pool. I've run into a lot of problems due to this behavior.
What you can do is either fix all your websites (which will take time), increase the tolerance level, or just disable the automatic shutdown. Here's how:
Open IIS.
Right-click your application pool under the 'Application Pools' node in the left sidebar and select Properties.
Click the 'Health' tab.
Uncheck 'Enable Rapid Fail Protection',
or change the tolerance level (for IIS 7 and later, see the appcmd sketch below).
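On IIS 7 and later, the same knobs live on the application pool's failure settings and can be set with appcmd; for instance, to raise the tolerance rather than disable the protection (the pool name is a placeholder):

REM Allow up to 20 failures in a 10-minute window before the pool is stopped
%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /failure.rapidFailProtectionMaxCrashes:20 /failure.rapidFailProtectionInterval:00:10:00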
Hope that helped.
One reason you can get this is if the application pool has stopped.
Application pools can stop if they error. Usually, after 5 errors in 5 minutes, IIS shuts down the app pool. This is part of Rapid-Fail Protection; it can be disabled for an app pool, otherwise the pool has to be restarted every time it happens.
These settings can be changed by the IIS administrator. It looks like you can set up a script to restart an app pool, so you should be able to set up a new web application (in a different app pool) to restart your stopped app pool. Your hoster might not like that, though.
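The restart script itself can be a single appcmd line that your watchdog application shells out to; the pool name here is a placeholder:

REM Start a stopped application pool
%windir%\system32\inetsrv\appcmd start apppool /apppool.name:"MyAppPool"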
The best result for you would be to catch all the exceptions before they escape into IIS.
It could be a SQL exception in your Application_Start (or similar) method in Global.asax. If the application (the ASP.NET worker process) can't start, it can't run, so the worker process shuts down.