I have two locations for "local" development work on a Drupal site (office and home) and use GitHub as a central repository and the Backup and Migrate module to back up/restore the database when I alternate between my two locations. After a day's work I back up the database and push my code to GitHub. At home I can pull the code from GitHub and restore the database. This all works very well.
But... I use WampServer in both locations, and at home it takes about 9 seconds to restore the database and the waiting time after every click on the site is less than a second. At the other location it takes 5 minutes to restore that same database, and it takes 4-5 seconds before every page shows.
At work I just tried to re-install WampServer, with no success... the site is still very slow. Both computers are new, with 16 GB RAM.
Can anyone give me a hint on what to do to speed up my slow WampServer?
This is all very strange, as the setup is identical on both computers.
I should mention that I have increased the realpath_cache_size on the slow computer without any change.
SOLVED
Following one of the suggestions at wamp-is-running-very-slow
I was able to solve this problem.
The thing that did the trick was to set the following line to "2":
innodb_flush_log_at_trx_commit = 2... in the file my.ini (the MySQL settings).
And also to raise these buffers to 512M and 128M respectively:
innodb_buffer_pool_size = 512M
innodb_log_file_size = 128M
... in that same file. None of the other suggestions made any difference.
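For reference, the relevant lines in my.ini ended up as follows (a sketch of just the changed settings; they belong under the [mysqld] section, and the rest of the file is unchanged):

    [mysqld]
    ; Write the InnoDB log at each commit but flush it to disk only about once per second
    innodb_flush_log_at_trx_commit = 2
    ; Larger buffer pool and redo log, as described above
    innodb_buffer_pool_size = 512M
    innodb_log_file_size = 128M

Note that on older MySQL versions, changing innodb_log_file_size may require a clean shutdown and removal of the old ib_logfile* files before MySQL will start again.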
I hope this can help somebody.
Related
I have recently upgraded my website to DNN (DotNetNuke) version 9.4.1, but I am now getting a performance issue: the website runs slowly. I have searched for this and applied the performance configuration in the server settings and also set up the cache configuration at the page level.
I have minified the files (JS and CSS) and have updated the setting value in the host settings table.
Thanks a lot in advance.
Check the DNN Scheduler to see if there are any active jobs that are taking longer than they should. For example, if the Site Crawler scheduler is constantly running, check the files in the Portals folder to make sure all of the files located there should actually be there. The crawler rebuilds the index, and if you have a lot of files it could take hours to complete. If the files should be there, disable the crawler scheduler and then run it during your slowest time of the day (1:00 AM?). I ran into this problem on a server that had hundreds of thousands of documents in the Portals folder. I ended up solving it by running the crawler between 1:00 AM and 5:00 AM for a few days until it had indexed all of the files. Once the files are indexed, it only has to index changed and new files, so it should only be a burden the first time it runs.
Another possible cause is exceptions. If your site is throwing a large number of exceptions, it will slow down. The handling of the exceptions and then the logging of them (to the DNN EventLog table in the database and the Log4Net files) can be brutal if your site is constantly throwing exceptions. If your site is also running in DEBUG mode, the performance hit is multiplied by at least 30 times, due to .NET collecting all of the additional information about each exception while running in debug mode. That will be brutal for your site's performance.
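If you are not sure whether the site is running in debug mode, check the compilation element in web.config (a sketch; the targetFramework value here is an assumption and will match your own install):

    <system.web>
      <!-- debug="false" avoids the extra exception bookkeeping described above -->
      <compilation debug="false" targetFramework="4.7.2" />
    </system.web>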
Check the server logs to see how often IIS is recycling the application pool for your DNN site. If it's occurring often, that is also a sign of a large number of exceptions being thrown, assuming you are using the default IIS application pool settings. By default, IIS will recycle your application pool if too many exceptions are thrown within a short period of time. If you also have the option set to bring up a new instance of your site and run it side by side before IIS terminates the existing instance, then while your site is throwing exceptions this can cause a bottleneck and will cripple performance. For this situation, I usually stop IIS from recycling the application pool when too many exceptions are thrown within a short period of time. That may not be the best option for you, but if you are on top of the exceptions being thrown on the site, you can disable that and let IIS run instances side by side after an app recycle. (This is nice to have when you recycle during active periods, so that all existing traffic completes on the old instance and all new traffic is sent to the new instance. Once all traffic is hitting the new instance of your site, IIS terminates the older one.)
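If you do decide to disable that behavior, IIS's Rapid-Fail Protection can be switched off from the command line (a sketch; "MyDnnAppPool" is a placeholder for your application pool's name):

    %windir%\system32\inetsrv\appcmd set apppool "MyDnnAppPool" /failure.rapidFailProtection:false

The same setting is also available in IIS Manager under the application pool's Advanced Settings.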
If none of the above helps, run SQL Profiler on your database to see if there is any extreme database activity going on. Also check for any DB locks.
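A couple of starting points for that check (these are standard SQL Server views, not DNN-specific):

    -- Quick look at current sessions; the BlkBy column shows who is blocking whom
    EXEC sp_who2;

    -- Current locks in the DNN database, with the session holding each one
    SELECT l.request_session_id,
           l.resource_type,
           l.request_mode,
           l.request_status
    FROM sys.dm_tran_locks AS l
    WHERE l.resource_database_id = DB_ID();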
There are a lot of possible causes that can slow down DNN. The best way to find out what is going on is to run a profiler on the server (Red Gate ANTS Profiler or Telerik (Progress) JustTrace).
I am trying to install Piwik on my machine using XAMPP, as it requires PHP, Apache and MySQL. When I installed XAMPP and launched Apache, it worked fine, but when I try to access the MySQL admin through XAMPP or access a sample PHP page (copied to the xampp/htdocs folder), Apache gets redirected to IIS, which is then not able to serve the page and shows a 404.3 error (it's looking for the file in wwwroot, which also doesn't work after pasting the page into wwwroot).
My objective is to get Piwik up and running on my machine. Another option is a Unix server, which is very new to me (I have been working on Windows). I know one should have a web server loaded with MySQL and PHP to run Piwik (which is hardly half an hour's job once you have all of these), but because of the different resources available, I am struggling to get this done.
Any help is highly appreciated.
After two days of R&D, I figured it out (well, sort of). So for anybody who is new to Piwik and to Linux/Apache/PHP/MySQL, here is what I did to achieve the objective. For those of you who are familiar with all or a few of these, you might have a better way or answer, and I would really ask you to improve this answer, but this is to help someone who is new and does not know all these things at once.
Here is what I did:
--Get Apache.
--Get PHP (one of the requirements for Piwik).
--Get MySQL (again, a requirement).
--All three should be running (get WAMP, which carries all three of the above); make sure Apache is running on port 80.
--Install MySQL, but make sure only WAMP's MySQL is running.
--Set a password for root in WAMP -> MySQL -> MySQL console: set password for 'root'@'localhost' = password('yourPasswordHere'); (the exact statement is sketched after this list).
--Paste the piwik folder into the www directory.
--Open localhost through WAMP and log in to Adminer and phpMyAdmin (bottom right).
--Open http://localhost:8080/piwik; you will see a directory listing. Click the piwik folder, and you should see the welcome screen of the Piwik installation.
--Make sure PHP 7 is the selected version in WAMP (WAMP ships two PHP versions, so you can choose) to avoid an error in the system check step of the installation.
--Follow the steps, which are very straightforward.
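For reference, here is the password step from the list above as an actual statement, run in the WAMP MySQL console ('yourPasswordHere' is obviously a placeholder):

    -- Works on the MySQL 5.x versions WAMP ships with
    SET PASSWORD FOR 'root'@'localhost' = PASSWORD('yourPasswordHere');

    -- On MySQL 5.7.6+ / 8.0 the equivalent is:
    -- ALTER USER 'root'@'localhost' IDENTIFIED BY 'yourPasswordHere';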
This is something that worked for me. The actual answer might be simpler or better. I hope it will help someone who is probably banging his/her head against some walls without getting anything concrete.
Happy to help!!!
Unfortunately an Ubuntu machine I manage has stopped working, and after much work it seems like I'll have to reinstall the system. All the data from the old system is intact and backed up.
Among this data is a PostgreSQL installation with some databases (that was running isolated on this machine). My goal is to move this data as is, and run it on the fresh install.
Since the old system is not running, I can't do a pg_dump.
According to this article it should be possible to move the data folder, but there are two restrictions mentioned. What I do not fully understand is whether these will be a problem for me.
I can't seem to find much information on this online, since everything refers to the preferred pg_dump method.
Any help would be highly appreciated.
As per the suggestions in the comments to the initial answer, the Postgres directories were copied to a new host and the database started without any problems.
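For anyone attempting the same thing, here is a minimal sketch of the copy, assuming the new host runs the same PostgreSQL major version and uses the default Ubuntu data directory layout (the 9.5 in the paths is a placeholder for your version, and the old-disk mount point is hypothetical):

    # Stop PostgreSQL on the new host before touching its data directory
    sudo systemctl stop postgresql

    # Copy the old cluster's data directory from the backup/old disk
    sudo rsync -a /mnt/old-disk/var/lib/postgresql/9.5/main/ /var/lib/postgresql/9.5/main/

    # The files must be owned by the postgres system user
    sudo chown -R postgres:postgres /var/lib/postgresql/9.5/main

    sudo systemctl start postgresql

The usual restrictions with this approach are that the PostgreSQL major version and the platform architecture must match between the old and new installations.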
I've got a WordPress site that I have been using for a year now, hosted with HostGator. I have a few tests I would like to run on the site, but I would like to test them offline using WAMP first before making them LIVE.
The problem is that previously I was always making changes to the LIVE site, usually at hours when I get little to no traffic. However, that has changed now, and I get traffic most hours throughout a 24-hour day.
So my problem is:
How do I download my existing website to my laptop (WAMP) and make those changes with the new theme? (total newbie, sorry!)
I use Windows 7, so I'm not sure what I need to do to get the site working like a live site offline.
Once I have implemented the new changes, what is the best way to upload the updated site back to the HostGator server without having any downtime or errors for site visitors?
Is there anything else I need to install or do in order for this to work? I hope you can give me as much information as possible, or any links to guides or articles that explain how to do this.
Thanks so much for any help you can offer!!!
If you're using HostGator, the process is simple:
Install XAMPP or WAMP on your computer;
Go to your cPanel, back up and download your website;
Extract the backup on your computer, especially the homedir and the SQL dump;
Go to your local environment and access http://localhost/phpmyadmin;
Create a new database; the name doesn't matter, but for the example let's call it "database";
Inside that database, import the one taken from the backup;
Create a new folder inside your htdocs with the name of your website, e.g. "example.com";
Extract the content of the homedir there;
Edit wp-config.php with the following data (see the sketch after this list):
Host: 'localhost'
Username: 'root'
Password: (blank)
Access http://localhost/example.com
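For reference, the relevant part of wp-config.php would look roughly like this (a sketch; "database" and "example.com" are just the example names used above, and the WP_HOME/WP_SITEURL overrides are an optional extra so links resolve locally instead of at the live URL):

    <?php
    // Local database settings (WAMP/XAMPP defaults: user "root", empty password)
    define( 'DB_NAME', 'database' );
    define( 'DB_USER', 'root' );
    define( 'DB_PASSWORD', '' );
    define( 'DB_HOST', 'localhost' );

    // Optional: point WordPress at the local URL instead of the live one
    define( 'WP_HOME', 'http://localhost/example.com' );
    define( 'WP_SITEURL', 'http://localhost/example.com' );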
You can check a good tutorial about the subject here.
About putting the site live, I recommend using a Git repository; however, it's understandable that this might be a little complicated and perhaps too much work for what you're trying to achieve.
Try moving your files directly from your local to your live environment using FileZilla or WinSCP; the drag and drop should replace the live files, and the downtime should be minimal.
Instead of WAMP, you can always use VirtualBox to install CentOS or Ubuntu/Debian.
You can go one step further and install either CentminMod to automate creating a LAMP stack, or a full panel like ISPConfig or Virtualmin.
That takes care of creating the environment.
Create a new account on the LAMP, using the same domain name.
You can FTP with Windows to get the files, but networking Windows and Linux is a pain. The better option is to use the command line (CLI) in the Linux VM to FTP the files from HostGator to the VM. This guide will help with that process: http://www.tldp.org/HOWTO/FTP-3.html
Then your only concern is the MySQL database. And for this, you have several options.
For me, the easiest is to buy (or try!) SQLyog on Windows, and then copy the database from the HostGator source to the localhost destination. Some mild networking is needed for Windows to see the Linux VM, but nothing as complex as file sharing (the FTP issue). SQLyog is far quicker than backing up the database and then restoring it, especially since you can run into memory issues doing it that way. It all depends on the size of the database.
The cheap/free backup-and-restore method is to use phpMyAdmin.
WordPress also has plugins, of varying cost, but you still have the possible backup-and-restore memory issue there as well.
When done, just copy everything the other way, again using SQLyog and CLI FTP. You'll still have some downtime, but it will hopefully be minimal.
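If phpMyAdmin runs into those memory limits, the command line is a rough equivalent for the backup-and-restore (a sketch; the database name, user, and file name are placeholders):

    # On the source (e.g. over SSH at the host): dump the database to a file
    mysqldump -u dbuser -p wordpress_db > wordpress_db.sql

    # On the destination: create the database if needed, then load the dump
    mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS wordpress_db"
    mysql -u root -p wordpress_db < wordpress_db.sql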
As a newbie, this probably seems like rocket science, but at least it gives you a good place to start. Welcome to the world of locally dev'ing sites!
I have a site that I deployed to Heroku. It's a low-traffic site, so if nobody goes to it for a couple of hours and then someone visits, it takes about 5-10 seconds to load. Any other requests to other pages on the site load quickly. If I leave the site entirely and check back a few minutes later, it also comes back up quickly.
It's only when it's left idle for a couple of hours that the spin-up time is noticeable. Does anyone else have this issue? If so, how did you fix it?
Also, while on the topic, does the same thing happen with Google App Engine? I'm currently just trying out these app hosting platforms, so I don't really have any preference for technologies/languages.
The quickest way to "fix" this problem is to make sure your site is always up. Set up a Pingdom account (http://www.pingdom.com/), which will ping your site every few minutes just to keep it alive.
I have a special route, myapp.com/keep_alive, which does nothing except hit the Rails stack without caching.
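Roughly, that route and controller look like this (a sketch; the controller and route names are just the ones I use):

    # config/routes.rb
    get "keep_alive", to: "status#keep_alive"

    # app/controllers/status_controller.rb
    class StatusController < ApplicationController
      def keep_alive
        expires_now  # sends no-cache headers so the ping always hits the full stack
        head :ok     # empty 200 response; nothing to render
      end
    end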
Hopefully this helps!
Do you happen to be hosting it on the 'free plan', i.e. with only 1 dyno?
If so, what you are experiencing might be dyno idling. You can increase the number of dynos so that your app is 'always-on'.
From what I understand, Heroku makes this behaviour public.
For free site hosting, one Heroku 'dyno' is dedicated to your site; if the dyno is inactive for a period of time, the resource is redirected elsewhere, and when you try to access the site after this time, the system has to go and request a dyno back.
You can prevent this by paying for extra dynos, which will stick with your site, or you can visit the site on a regular basis yourself with an automated script.
The best thing you can do to decrease this time is to minimize the size of your slug. This includes steps like deleting any PSD or AI image assets, removing PDFs, and minimizing your gem set. For more information, see: http://devcenter.heroku.com/articles/slug-size. As a reference, my applications can usually spin up in around one second.
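Heroku also honours a .slugignore file in the project root, which keeps files out of the slug without deleting them from the repository (a sketch; the patterns are just examples matching the asset types mentioned above):

    *.psd
    *.ai
    *.pdf
    spec/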
If you don't want to pay for Pingdom, you can try the open-source alternative, Pinger:
https://github.com/austinthecoder/pinger