High CPU usage: top shows a lot of /usr/sbin/apache2 -k start processes - apache2

I have high CPU usage on a DigitalOcean droplet (1 GB Memory / 25 GB Disk / AMS3) running WordPress 5.5.1 on Ubuntu 20.04.
It's a very simple WP blog with about 50 users/day, so there is no traffic volume or complicated installation that should load the CPU.
I have checked with the top/atop/htop commands via the terminal; all of them show many processes like:
www-data /usr/sbin/apache2 -k start
I have tried:
sudo /etc/init.d/apache2 restart
Powering off
Disabling and re-enabling all the plugins
All of these gave only temporary results: CPU usage dropped to about 1%, but within 1-2 hours it climbed back up to 100%.
I also tried sudo kill -9 <pid>, but that is only temporary as well.
Using top, I checked the processes by PID and tried to find them in error.log under /var/log/apache2; it only shows some debug/info entries indicating that certain URLs were opened.
I also tried changing things in /etc/apache2/sites-enabled, but it did not work.
I don't know whether I should look for mistakes in the Apache2 installation or simply increase the capacity of my droplet. I have a lot of memory left, so I'm not sure.
Any ideas?

Related

SageMaker Studio CPU Usage

I'm working in SageMaker Studio, and I have a single instance running one computationally intensive task.
It appears that the kernel running my task is maxed out, but the actual instance is only using a small amount of its resources. Is there some sort of throttling occurring? Can I configure this so that more of the instance is utilized?
Your ml.c5.xlarge instance comes with 4 vCPUs. However, Python only uses a single CPU by default. (Source: Can I apply multithreading for computationally intensive task in python?)
As a result, the overall CPU utilization of your ml.c5.xlarge instance is low. To utilize all the vCPUs, you can try multiprocessing.
The examples below are performed using a 2 vCPU + 4 GiB instance.
In the first picture, multiprocessing is not set up. The instance CPU utilization peaks at around 50%.
(screenshot: single processing)
In the second picture, I created 50 processes to be run simultaneously. The instance CPU utilization rises to 100% immediately.
(screenshot: multiprocessing)
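The screenshots themselves are not reproduced here. As a rough shell analogue of the same experiment - this is not the Python multiprocessing code from the screenshots, just a quick way to confirm from a terminal that one CPU-bound process per vCPU drives the instance metric to 100%:
for i in $(seq "$(nproc)"); do yes > /dev/null & done    # one busy loop per vCPU
top                                                      # instance CPU should approach 100% while they run
kill $(jobs -p)                                          # stop the busy loops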
There might be something off with the stats you're seeing, or they may cover different time spans, or the kernel may only have been assigned a portion of the total instance resources.
I suggest opening a terminal and running top to see what's actually going on and which UI stat it matches (note that you're opening the instance's terminal, not the Jupyter UI instance terminal).

Sustes in Solr running and using lots of CPU power

I am trying to figure out what this process is:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2398 solr 20 0 709920 16412 1364 S 771.5 0.0 19:39.02 sustes
I thought maybe it was from a script doing an optimization of the database - but I've disabled that and it's still there, even after a reboot. It seems excessively high!
Solr doesn't usually launch any external processes, and the Internet seems to indicate that your server has been compromised and someone is running a cryptominer binary on it.
It's time to cut off all access to the server, recreate it and firewall it off from the world, then find out how they got in and make sure it doesn't happen again.
This shows that some external process is running - possibly cryptojacking, which is becoming bigger than ransomware as a money maker.
Please check the following for any unwanted log entries and any connections to external URLs (a few helpful commands are sketched below):
1. /var/log/syslog
2. Depending on the machine, check Windows services/processes (Windows) or cron jobs (Linux).
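A few Linux commands that help with this kind of check (2398 is the PID from the question; adjust to the PID you see):
ls -l /proc/2398/exe                 # which binary the process was actually started from
sudo ss -tnp | grep 2398             # its network connections - unknown mining pools are a red flag
sudo crontab -l -u solr              # cron entries for the solr user
ls /etc/cron.d /var/spool/cron       # other common places a miner persists itself
grep -i sustes /var/log/syslog       # any traces in syslog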

Performance changes significantly every time I rerun program

I have Raspberry Pi-like hardware running a basic Linux operating system with only one program running, called "Main_Prog".
Every time I run a performance test against Main_Prog I get less than 1% performance fluctuation. This is perfectly acceptable.
When I kill Main_Prog using the kill command and restart it, the performance changes by up to 8%. Further performance tests then vary by less than 1% around this new level.
Example
So, for example, Main_Prog at first ran at 100 calls/sec and varied between 99-101 calls/sec.
I then ran a "kill" command against Main_Prog and restarted it using "./Main_Prog &". Running the performance test again, Main_Prog is now doing 105 calls/sec, fluctuating between 104-106 calls/sec. It will continue to run at 104-106 calls/sec until I kill Main_Prog and start it again.
Any idea how to prevent fluctuation or what is happening? Remember, it is VERY consistent. No other programs running on operating system.
Your temporary fluctuation might be related to the page cache. I would not bother (the change is insignificant). See also http://www.linuxatemyram.com/
You might prefill the page cache, e.g. by running something like wc Main_Prog before running ./Main_Prog.
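For example (any tool that reads the file warms the cache, so cat works just as well as wc):
wc -c Main_Prog               # read the executable once so it lands in the page cache
# or: cat Main_Prog > /dev/null
./Main_Prog &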
And you probably still do have some other executable programs & processes on your Raspberry Pi (check with top or ps auxw). I guess that /sbin/init is still running at pid 1. And probably your shell is running too.
It is quite unusual to have a Linux system with only one process. To get that, you should replace /sbin/init with your program, and I really don't recommend that, especially if you don't know Linux very well.
Since there are several processes running on your box, and because the kernel scheduler preempts tasks at arbitrary moments, its behavior is not entirely reproducible, and that explains the observed fluctuation.
Read also about real-time scheduling, setpriority(2), sched_setscheduler(2), pthread_setschedparam(3), readahead(2), mlock(2), madvise(2), posix_fadvise(2).
If you are mostly interested in benchmarking, the sensible way is to repeat the same benchmark several times (e.g. 4 to 15 times) and either take the minimum, or the maximum, or the average.
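A minimal sketch of that, assuming the benchmark is a command like ./perf_test (a hypothetical name) that prints a single calls/sec number per run:
for i in $(seq 10); do ./perf_test; done | sort -n | tail -1                       # best of 10 runs
for i in $(seq 10); do ./perf_test; done | awk '{ s += $1 } END { print s/NR }'    # average of 10 runs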

MongoDB available connections

I installed MongoDB on Windows, Mac and Linux.
I run MongoDB with all default arguments and enter the command db.serverStatus().connections in the mongo shell to check the available connections.
Here is my observation: Windows 7 shows 19999, Mac only 203, and Linux 818. What makes the number of available connections different, and is it possible to increase it?
For UNIX-like systems (i.e. Linux and OS X), the connection limit is governed by ulimits. MongoDB will use 80% of the available file descriptors for connections, which is why you see 203 on Mac (~80% of 256) and 819 on Linux (~80% of 1024).
The MongoDB documentation includes recommended settings for production systems. Typically you wouldn't need to change this on development environments, but you will see a warning on startup if the connection limits are considered low.
In MongoDB 2.4 and earlier, there is a hard-coded maximum of 20,000 connections per server irrespective of the ulimits. This maximum has been removed as at MongoDB 2.6.
There is also a maxConns MongoDB configuration directive that can be used to limit the connections to something lower than what would be allowed by ulimits.
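To see which limit a running mongod actually ended up with (and hence where the ~80% figure comes from), something like this works on Linux:
ulimit -n                                          # the shell's open-files limit
grep 'open files' /proc/"$(pidof mongod)"/limits   # the limit the running mongod actually has
# e.g. a soft limit of 1024 gives roughly 0.8 * 1024 ≈ 819 available connections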
@fmchan Turn off SELinux and check again.
I set high NOFILE and NPROC limits in systemd and in the /etc/security/limits.conf file, but it didn't help.
Now, the only things that work for me are:
1. setenforce 0 && systemctl restart mongod.service, or
2. writing an SELinux policy to allow mongod_exec_t to setrlimit and rlimitinh (a rough sketch follows the link below).
Here's a similar issue https://bugzilla.redhat.com/show_bug.cgi?id=1415045
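A rough sketch of step 2, assuming the denials show up in the audit log and audit2allow is installed (the module name my-mongod is arbitrary):
sudo ausearch -c 'mongod' --raw | audit2allow -M my-mongod   # build a policy module from the logged denials
sudo semodule -i my-mongod.pp                                # load the module
sudo systemctl restart mongod.service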
I was facing the same issue and apparently ulimit -n 6000 didn't help.
On macOS you can check the open-files setting using the command below; it showed 256 in my case, and 80% of 256 is about 204, hence the maximum connections reported by mongo was 204.
launchctl limit maxfiles
I followed https://gist.github.com/tombigel/d503800a282fcadbee14b537735d202c to resolve my error; it explains in detail how to set the values.
Follow those steps, restart the laptop, and you should see the new setting.
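For a quick, non-persistent bump the following has worked on some setups (the values are just examples; on recent macOS versions you may still need the plist approach from the gist):
launchctl limit maxfiles                     # show the current soft and hard limits
sudo launchctl limit maxfiles 64000 200000   # raise them for the current boot
ulimit -n 64000                              # raise the per-shell limit as well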

Auto generate w3wp.exe process dump file when CPU threshold is reached even when PID changes

I'm trying to troubleshoot an issue with one of our websites which causes the CPU to spike intermittently. The site sits on a farm of web servers and it intermittently happens across all servers at different times. The process which causes the spike is w3wp.exe. I have checked all the obvious things and now want to analyse multiple sets of dump files for the w3wp.exe which causes the spike.
I'm trying to automatically generate a dump file of the w3wp.exe process when it reaches a specified CPU threshold for a specified time.
I can use ProcDump.exe to do this and it works a treat IF it's fired before the PID (Process ID) changes.
For example:
procdump -ma -c 80 -s 10 -n 2 5844   (where 5844 is the PID)
-ma Write a dump file with all process memory. The default dump format includes thread and handle information.
-c CPU threshold at which to create a dump of the process.
-s Consecutive seconds CPU threshold must be hit before dump written (default is 10).
-n Number of dumps to write before exiting.
The above command monitors w3wp.exe until CPU usage stays above 80% for 10 consecutive seconds, and takes a full dump for two such occurrences.
The problem:
I have multiple instances of w3wp.exe running, so I cannot use the process name; I need to specify the PID. The PID changes each time the App Pool is recycled, which means it changes before I can capture multiple dump files, and I then need to start procdump again on each web server.
My question:
How can I keep automatically generating dump files even after the PID changes?
Use Debug Diagnostic Tool 2.0 from Microsoft: https://www.microsoft.com/en-us/download/details.aspx?id=49924
It handles multiple w3wp.exe processes. If you need a generic solution, you will have to write a script - such as https://gallery.technet.microsoft.com/scriptcenter/Getting-SysInternals-027bef71
