FastCgiIpcDir problems in error log
Hi,
I have noticed in my Apache error logs the following error (error 1):
[Wed Feb 08 14:00:06 2012] [alert] [client 41.185.88.175] (2)No such file or directory: FastCGI: failed to connect to (dynamic) server "/var/www/bin/php-splashpage-user/php-fastcgi": something is seriously wrong, any chance the socket/named_pipe directory was removed?, see the FastCgiIpcDir directive
Directly afterwards followed by this error (error 2):
[Wed Feb 08 14:00:06 2012] [error] [client 41.185.88.175] FastCGI: incomplete headers (0 bytes) received from server "/var/www/bin/php-splashpage-user/php-fastcgi"
How do I fix error 1?
I read that this error can be caused by the host system periodically cleaning out the "/tmp" directory (the default dir for FastCgiIpcDir if it is not defined), thereby obliterating communication with the currently active FastCGI services. So I decided to give it a go: I set FastCgiIpcDir in the fastcgi.conf file in the hope of success, but there is simply no change.
This is the contents of my fastcgi.conf file:
<IfModule mod_fastcgi.c>
FastCgiIpcDir /var/lib/apache2/fastcgi_test
FastCgiConfig -idle-timeout 60 -maxClassProcesses 1
FastCgiWrapper On
AddHandler php5-fcgi .php
Action php5-fcgi /cgi-bin/php-fastcgi
<Location "/cgi-bin/php-fastcgi">
Order Deny,Allow
Deny from All
Allow from env=REDIRECT_STATUS
Options ExecCGI
SetHandler fastcgi-script
</Location>
</IfModule>
Permissions and ownerships of /var/lib/apache2/fastcgi_test:
drwxr-xr-x 3 www-data www-data 4.0K 2012-02-08 09:20 fastcgi_test
My PHP wrapper script php-fastcgi has the following lines:
#!/bin/sh
PHP_FCGI_CHILDREN=120
export PHP_FCGI_CHILDREN
PHP_FCGI_MAX_REQUESTS=1000
export PHP_FCGI_MAX_REQUESTS
umask 0022
exec /usr/bin/php-cgi -d apc.shm_size=50
I am running PHP 5.3.1, Apache/2.2.14, Ubuntu 10.04.
Here are a few things I've picked up so far:
Error 1 only appears at the beginning of an hour, say 6 seconds after the new hour starts (see the quick check sketched after this list).
From working with mod_fastcgi I have learnt that increasing the number of child processes helps relieve some of the "error 2" errors (which cause HTTP 500 errors at random intervals). Currently I am not quite sure what the effect of error 1 is, but since error 2 follows directly after it, it's safe to say it's not a good thing.
There is very little, if any, complete information on errors reported by FastCGI with tried and tested solutions. Sadly I may just be adding to the pile of FastCGI errors posted on the web with no conclusion.
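To test the hourly-cleanup theory, here is a quick check that can be run right after restarting Apache (a sketch: tmpreaper/tmpwatch are only guesses at what the cleanup job might be, and dynamic/ is the subdirectory mod_fastcgi creates for its dynamic-server sockets):
service apache2 restart
ls -l /var/lib/apache2/fastcgi_test/dynamic/   # are the sockets in the new FastCgiIpcDir...
ls -l /tmp | grep -i fcgi                      # ...or still being created under /tmp?
ls /etc/cron.hourly/                           # is anything scheduled to sweep /tmp hourly?
grep -ri "tmpreaper\|tmpwatch" /etc/cron* 2>/dev/null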
Your help, advice or tips in resolving error 1 would be greatly appreciated.
I don't know how to make it work with a wrapper and suexec, but you should try this:
http://blog.kmp.or.at/2013/06/apache-2-2-on-debian-wheezy-w-php-fpm-fastcgi-apc-and-a-kind-of-suexec/
The solution in the link does not use suexec or the wrapper at all, and at least it works that way.
The steps required for that:
0) install php5-fpm, and apache2-mpm-worker if not already installed
1) comment out this line:
#FastCgiWrapper On
2) make an alias:
Alias /cgi-bin/php-fastcgi **/var/something**
3) add a FastCgiExternalServer directive:
FastCgiExternalServer **/var/something** -socket php5-fpm-site1user.sock
(The paths shown in bold must be identical.)
4) set up the conf in php5-fpm/pool.d/site1user.conf
[site1user]
user = site1user
group = site1user
listen = /var/run/php5-fpm-site1user.sock
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
chdir = /
5) restart fpm (a quick sanity check is sketched after these steps):
service php5-fpm restart
6) for a deeper understanding, check my other answer here:
https://serverfault.com/questions/524708/php5-fpm-apache2-on-wheezy-connect-failed-with-fastcgi/536277#536277
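After going through the steps, a quick sanity check (a sketch; the socket path and service names are the ones assumed above) is to confirm that the pool actually created its socket and that Apache accepts the new configuration:
ls -l /var/run/php5-fpm-site1user.sock       # created by the [site1user] pool on restart
apache2ctl configtest && service apache2 reload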
Related
I'm developing a tool that modifies LUKS partitions and disks.
Everything is working very well. Until now...
To handle disks properly as a non-root user, I added some polkit rules to change passwords, open partitions, change crypttab and many other things.
But I'm seeing problems when I change crypttab and need to run dracut to apply some dracut modules (dracut --force). Especially the last one.
My user is part of the admin group, and I added a rule to the sudoers file so that no sudo password is asked for when my application is executed.
So, I decided to use this code:
gchar *dracut[] = {"/usr/bin/sudo", "/usr/bin/dracut", "--force", NULL};

if ((child = fork()) > 0) {
    /* parent: wait for "sudo dracut --force" to finish */
    waitpid(child, NULL, 0);
} else if (!child) {
    /* child: replace the process image; execvp only returns on failure */
    execvp("/usr/bin/sudo", dracut);
    _exit(127);
}
It is not working because SELinux is preventing this command from running:
SELinux is preventing /usr/bin/sudo from getattr access on the chr_file /dev/hpet.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that sudo should be allowed getattr access on the hpet chr_file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'sudo' --raw | audit2allow -M my-sudo
# semodule -X 300 -i my-sudo.pp
Additional Information:
Source Context system_u:system_r:xdm_t:s0-s0:c0.c1023
Target Context system_u:object_r:clock_device_t:s0
Target Objects /dev/hpet [ chr_file ]
Source sudo
Source Path /usr/bin/sudo
Port <Unknown>
Host <Unknown>
Source RPM Packages sudo-1.8.25p1-4.el8.x86_64
Target RPM Packages
Policy RPM selinux-policy-3.14.1-61.el8.noarch
Selinux Enabled True
Policy Type targeted
Enforcing Mode Enforcing
Host Name jcfaracco#hostname
Platform Linux jcfaracco#hostname 4.18.0-80.el8.x86_64 #1
SMP Wed Mar 13 12:02:46 UTC 2019 x86_64 x86_64
Alert Count 9
First Seen 2019-06-14 19:32:42 -03
Last Seen 2019-06-14 19:42:46 -03
Local ID 772b2c41-2302-4ee0-8886-52789eb63e22
Raw Audit Messages
type=AVC msg=audit(1560552166.658:199): avc: denied { getattr } for pid=2291 comm="sudo" path="/dev/hpet" dev="devtmpfs" ino=10776 scontext=system_u:system_r:xdm_t:s0-s0:c0.c1023 tcontext=system_u:object_r:clock_device_t:s0 tclass=chr_file permissive=0
type=SYSCALL msg=audit(1560552166.658:199): arch=x86_64 syscall=stat success=no exit=EACCES a0=7ffd4a6dffb0 a1=7ffd4a6def20 a2=7ffd4a6def20 a3=7fe845a73181 items=0 ppid=1756 pid=2291 auid=4294967295 uid=982 gid=980 euid=0 suid=0 fsuid=0 egid=980 sgid=980 fsgid=980 tty=tty1 ses=4294967295 comm=sudo exe=/usr/bin/sudo subj=system_u:system_r:xdm_t:s0-s0:c0.c1023 key=(null)ARCH=x86_64 SYSCALL=stat AUID=unset UID=gnome-initial-setup GID=gnome-initial-setup EUID=root SUID=root FSUID=root EGID=gnome-initial-setup SGID=gnome-initial-setup FSGID=gnome-initial-setup
Hash: sudo,xdm_t,clock_device_t,chr_file,getattr
Do you know how to fix this issue? Any other idea for calling dracut from C code is welcome too, in case there is a smarter way to do this.
NagiosQL-generated files cause problems during the preflight check, but everything seems to be okay.
/etc/nagios/nagios.cfg
....
## Hosts
cfg_dir=/etc/nagiosql/hosts/
cfg_file=/etc/nagiosql/hosttemplates.cfg
cfg_file=/etc/nagiosql/hostgroups.cfg
cfg_file=/etc/nagiosql/hostextinfo.cfg
cfg_file=/etc/nagiosql/hostescalations.cfg
cfg_file=/etc/nagiosql/hostdependencies.cfg
## Services
cfg_dir=/etc/nagiosql/services/
cfg_file=/etc/nagiosql/servicetemplates.cfg
cfg_file=/etc/nagiosql/servicegroups.cfg
cfg_file=/etc/nagiosql/serviceextinfo.cfg
cfg_file=/etc/nagiosql/serviceescalations.cfg
cfg_file=/etc/nagiosql/servicedependencies.cfg
...
nagios -v /etc/nagios/nagios.cfg
....
Running pre-flight check on configuration data...
Checking services...
Error: There are no services defined!
Checked 0 services.
Checking hosts...
Error: There are no hosts defined!
Checked 0 hosts.
The content seems okay to me
[root@xxx services]# cd /etc/nagiosql/services/
[root@xxx services]# ls -alh
total 20K
drwsr-sr-x 2 apache nagios 4.0K Aug 7 10:46 .
drwsr-sr-x 5 apache nagios 4.0K Aug 7 12:17 ..
-rw-r--r-- 1 apache nagios 2.3K Aug 7 10:46 localhost.cfg
-rw-r--r-- 1 apache nagios 2.2K Aug 7 10:46 www.google.com.cfg
-rw-r--r-- 1 apache nagios 1.1K Aug 7 10:46 www.yahoo.com.cfg
[root@xxx hosts]# ls -alh
total 16K
drwsr-sr-x 2 apache nagios 4.0K Aug 11 07:12 .
drwsr-sr-x 5 apache nagios 4.0K Aug 7 12:17 ..
-rw-r--r-- 1 apache nagios 800 Aug 11 07:12 GIT.cfg
-rw-r--r-- 1 apache nagios 948 Aug 11 07:12 psm01.cfg
Content also seems to be fine (generated by nagiosql):
[root@xxx hosts]# vi GIT.cfg
###############################################################################
#
# Host configuration file
#
# Created by: Nagios QL Version 3.2.0
# Date: 2015-08-11 07:12:54
# Version: Nagios 3.x config file
#
# --- DO NOT EDIT THIS FILE BY HAND ---
# Nagios QL will overwite all manual settings during the next update
#
###############################################################################
define host {
host_name GIT
alias GIT Server
address 172.25.10.80
register 0
}
###############################################################################
#
# Host configuration file
#
# END OF FILE
#
###############################################################################
Can somebody tell me what the solution to this problem is? I have already wasted two hours on it...
Try removing the final slash from the directory names in your cfg_dir definitions and see if that doesn't get it to recognize the cfg files in those directories.
For example,
Change:
cfg_dir=/etc/nagiosql/hosts/
...
cfg_dir=/etc/nagiosql/services/
To:
cfg_dir=/etc/nagiosql/hosts
...
cfg_dir=/etc/nagiosql/services
EDIT:
Okay, I think directory permissions may be causing the cfg_dir evaluations to fail. According to the ls -alh output you listed, your /etc/nagiosql/hosts/, /etc/nagiosql/services/, and /etc/nagiosql/ directories do not grant write permissions to the nagios group. Nagios will need to get a directory listing for those directories and will need group write permissions to do it.
To remedy:
chmod g+w /etc/nagiosql/hosts/
chmod g+w /etc/nagiosql/services/
Restart nagios service.
Also, you don't need to remove the slashes from the directory paths in the nagios cfg_dir configurations. Nagios will strip the trailing slash (/) for you, according to the code:
https://github.com/NagiosEnterprises/nagioscore/blob/eb8e83d5d05e572eb8c0d4d4764885c5427b4b69/xdata/xodtemplate.c#L327
/* process all files in a config directory */
else if(!strcmp(var, "xodtemplate_config_dir") || !strcmp(var, "cfg_dir")) {

    if(config_base_dir != NULL && val[0] != '/') {
        asprintf(&cfgfile, "%s/%s", config_base_dir, val);
    } else
        cfgfile = strdup(val);

    /* strip trailing / if necessary */
    if(cfgfile != NULL && cfgfile[strlen(cfgfile) - 1] == '/')
        cfgfile[strlen(cfgfile) - 1] = '\x0';

    /* process the config directory... */
    result = xodtemplate_process_config_dir(cfgfile, options);
    my_free(cfgfile);

    /* if there was an error processing the config file, break out of loop */
    if(result == ERROR)
        break;
}
EDIT #2: In the host definition you posted, your register value is set to 0. Try setting it to 1 instead. register 0 is used for templates that will be inherited from, but will not actually show up in the Nagios UI.
Change:
define host {
host_name GIT
alias GIT Server
address 172.25.10.80
register 0
}
To:
define host {
host_name GIT
alias GIT Server
address 172.25.10.80
register 1
}
Also please set register 1 for your service definitions as well.
Try adding executable permissions to your directories. Some programs and languages require +x permissions in order to actually open the directory.
If that doesn't work, temporarily set everything to 0777 permissions to see if the issue is permissions related at all.
You also have config problems even if you get that part working. Your host and service configs don't have a use directive in them, which points to a template that would have most of the default values. The register directive is implied as 1 unless you specifically set it to 0 for a template. See the object definitions docs if you need a reference: https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/3/en/objectdefinitions.html
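For illustration only (NagiosQL normally manages these files itself, so treat this as a sketch rather than something to paste in as-is): a host definition that inherits its defaults from the stock generic-host template and is registered as a real host, followed by a re-run of the preflight check.
cat > /etc/nagiosql/hosts/GIT.cfg <<'EOF'
define host {
    use         generic-host    ; template assumed to exist in your templates.cfg
    host_name   GIT
    alias       GIT Server
    address     172.25.10.80
    register    1
}
EOF
nagios -v /etc/nagios/nagios.cfg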
The problem incident:
Our production system started denying services with the error message "Too many open files in system". Most services were affected, including the inability to start a new ssh session or even to log in on a virtual console from the physical terminal. Luckily, one root ssh session was open, so we could interact with the system (moral: always keep one root session open!). As a side effect, some services (named, dbus-daemon, rsyslogd, avahi-daemon) saturated the CPU (100% load). The system also serves a large directory via NFS to a very busy client, which was backing up 50000 small files at the moment. Restarting all kinds of services and programs normalized their CPU behavior, but did not solve the "Too many open files in system" problem.
The suspected cause
Most likely, some program is leaking file handles. The probable culprit is my Tcl program, which also saturated the CPU (not normal). However, killing it did not help, and, most disturbingly, lsof did not reveal a large number of open files.
Some evidence
We had to reboot, so whatever information was collected is all we have.
root@xeon:~# cat /proc/sys/fs/file-max
205900
root@xeon:~# lsof
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
init 1 root cwd DIR 8,6 4096 2 /
init 1 root rtd DIR 8,6 4096 2 /
init 1 root txt REG 8,6 124704 7979050 /sbin/init
init 1 root mem REG 8,6 42580 5357606 /lib/i386-linux-gnu/libnss_files-2.13.so
init 1 root mem REG 8,6 243400 5357572 /lib/i386-linux-gnu/libdbus-1.so.3.5.4
...
A pretty normal list, definitely not 200K files, more like two hundred.
This is probably where the problem started:
less /var/log/syslog
Mar 27 06:54:01 xeon CRON[16084]: (CRON) error (grandchild #16090 failed with exit status 1)
Mar 27 06:54:21 xeon kernel: [8848865.426732] VFS: file-max limit 205900 reached
Mar 27 06:54:29 xeon postfix/master[1435]: warning: master_wakeup_timer_event: service pickup(public/pickup): Too many open files in system
Mar 27 06:54:29 xeon kernel: [8848873.611491] VFS: file-max limit 205900 reached
Mar 27 06:54:32 xeon kernel: [8848876.293525] VFS: file-max limit 205900 reached
netstat did not show noticeable anomalies either.
The man pages for ps and top do not indicate an ability to show open file count. Probably the problem will repeat itself after a few months (that was our uptime).
Any ideas on what else can be done to identify the open files?
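For next time, here is one way to see where the handles are going without lsof (a sketch that walks /proc; the paths are standard, but treat the exact commands as illustrative):
# kernel-wide view: allocated handles, free handles, and the file-max limit
cat /proc/sys/fs/file-nr

# rough per-process open-file counts, largest first
for p in /proc/[0-9]*; do
    printf '%s %s\n' "$(ls "$p/fd" 2>/dev/null | wc -l)" "$p"
done | sort -rn | head -20
If the handles are leaked inside the kernel (as the NFSv4 bug mentioned below suggests), file-nr will sit near the limit while the per-process counts add up to far less, which matches what lsof showed here.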
UPDATE
This question has changed its meaning since qehgt identified the likely cause.
Apart from the bug in the NFS v4 code, I suspect there is a design limitation in Linux: kernel-leaked file handles cannot be identified. Consequently, the original question transforms into:
"Who is responsible for file handles in the Linux kernel?" and "Where do I post that question?". The first answer was helpful, but I am willing to accept a better one.
Probably the root cause is a bug in the NFSv4 implementation: https://stackoverflow.com/a/5205459/280758
They have similar symptoms.
I'm remotely debugging my project in PhpStorm. The IDE shows 'Connected' for a moment and then immediately goes back to 'Waiting for incoming connection...'.
Below is the Xdebug log from this session:
I: Connecting to configured address/port: X.x.x.x:9000.
I: Connected to client. :-)
> <init xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" fileuri="file:///xxx/info.php" language="PHP" protocol_version="1.0" appid="4365" idekey="10594"><engine version="2.2.2"><![CDATA[Xdebug]]></engine><author><![CDATA[Derick Rethans]]></author><url><![CDATA[http://xdebug.org]]></url><copyright><![CDATA[Copyright (c) 2002-2013 by Derick Rethans]]></copyright></init>
<- feature_set -i 0 -n show_hidden -v 1
> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="feature_set" transaction_id="0" feature="show_hidden" success="1"></response>
<- feature_set -i 1 -n max_depth -v 1
> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="feature_set" transaction_id="1" feature="max_depth" success="1"></response>
<- feature_set -i 2 -n max_children -v 100
> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="feature_set" transaction_id="2" feature="max_children" success="1"></response>
<- status -i 3
> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="status" transaction_id="3" status="starting" reason="ok"></response>
<- step_into -i 4
> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="step_into" transaction_id="4" status="stopping" reason="ok"></response>
<- breakpoint_set -i 5 -t line -f file://xxx/info.php -n 3
> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="breakpoint_set" transaction_id="5"><error code="5"><message><![CDATA[command is not available]]></message></error></response>
According to the Xdebug documentation, the status "stopping" means:
'State after completion of code execution. This typically happens at the end of code execution, allowing the IDE to further interact with the debugger engine (for example, to collect performance data, or use other extended commands).'
So my debugger stops before reaching the first breakpoint (set on the first line).
Could it be a question of server configuration?
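For reference, these are the server-side settings I would double-check (a sketch; the CLI and the web server may load different php.ini files, and the values shown are just the typical ones for remote debugging with Xdebug 2.x):
php -i | grep -E "xdebug\.(remote_enable|remote_host|remote_port|remote_mode|extended_info)"
# typically expected for remote debugging:
#   xdebug.remote_enable = 1
#   xdebug.remote_port   = 9000
#   xdebug.extended_info = 1   (required for breakpoints)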
You should go to php.ini and delete a line like this:
extension=php_xdebug-...
How was this line created?
You put the Xdebug file into the PHP extensions path, something like this:
.../php5.X.XX/ext/
Now you may turn this PHP extension on with any *AMP UI tool (WAMP, XAMPP, etc.).
To prevent this painful misfortune, you must put the Xdebug file into
.../php5.X.XX/zend_ext/
This hides Xdebug from any *AMP tool.
Then correct your zend_extension parameter too:
zend_extension = .../php5.X.XX/ext/php_xdebug-...
to
zend_extension = .../php5.X.XX/zend_ext/php_xdebug-...
That is the common default path for it.
Please remember: with PhpStorm, Eclipse, Zend, etc., you may need to correct two php.ini files.
The first one is for your web server, commonly under the Apache folder:
...\Apache2.X.XX\bin\
The second one is for direct PHP script debugging. It lies in the PHP installation folder:
...\php\php5.X.XX\
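A quick way to see which php.ini each side actually loads (a sketch; for the web-server side, a phpinfo() page served through Apache reports the same "Loaded Configuration File" entry):
php --ini                                     # ini files used for command-line runs
php -i | grep "Loaded Configuration File"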
In my case, the cause of the "breakpoint_set" / "command is not available" problem was a disabled xdebug.extended_info option (it is enabled by default, but I had disabled it for profiling).
Breakpoints do not work when xdebug.extended_info is disabled.
Breakpoints started working again after I re-enabled xdebug.extended_info.
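A minimal sketch of the fix, assuming a Debian/Ubuntu-style layout where Apache's php.ini lives under /etc/php5/apache2/ (adjust the path to your installation):
grep -n "xdebug.extended_info" /etc/php5/apache2/php.ini
# make sure it reads "xdebug.extended_info = 1", or remove the line (1 is the default)
service apache2 restart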
I had the same problem under Windows with PhpStorm and spent a long time googling. Eventually, my solution was this,
in php.ini:
xdebug.remote_mode = "jit"
From the PhpStorm tutorial, JIT means "Just-In-Time" mode:
https://www.jetbrains.com/help/phpstorm/2016.2/configuring-xdebug.html#d43035e303
UPD
No, actually this option did not help me. But I did resolve my issue in the end:
I use PhpStorm on Windows 7, and I had configured the path mapping this way:
d:\serverroot\vhost\www => d:\serverroot\vhost\www
but in my old config I spotted this mapping:
d:\serverroot\vhost\www => d:/serverroot/vhost/www
Finally:
On Windows machines, in the path mappings in the server configuration, replace the \ with /.
I think the only reason this could happen is that your info.php has a syntax error. In that case, there is no code to execute and the script goes directly to "stopping" as soon as "step_into" is issued.
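A quick way to rule that out is PHP's built-in lint check, run against the same file the request hits:
php -l info.php    # prints "No syntax errors detected" or the parse error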
Zend OPcache can cause this issue as well; if you have it enabled, try disabling it.
This error can be emitted when the Xdebug extension is compiled into a non-debug build of the PHP runtime. The process will not fail (as it shouldn't), but the Xdebug extension will stop doing anything for the duration of that process.
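One way to check which kind of build you are running (a sketch; a phpinfo() page through the web server reports the same fields for the Apache side):
php -v                          # shows "with Xdebug v..." if the extension is loaded
php -i | grep "Debug Build"     # "Debug Build => no" means a non-debug (release) build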
Summary: Unable to run even the simplest "Hello World" FastCGI script; every request terminates in a timeout. There seems to be no communication at all between the server and the FastCGI scripts (using dynamic FastCGI scripts).
The environment
Ubuntu Precise (12.04)
Package apache2.2-bin
Package apache2-mpm-prefork
Package libapache2-mod-fastcgi
Package libfcgi-perl
Package python-flup
Multiple sites configured as virtual hosts on 127.0.0.1
There exists a /var/lib/apache2/fastcgi directory, owned by www-data, readable by all (owner, group and others)
There exists a /var/lib/apache2/fastcgi/dynamic directory, owned by www-data, which is restricted to the owner (readable, writable and accessible by www-data only)
There exists an inode/socket file in the /var/lib/apache2/fastcgi/ directory
The FastCGI relevant configurations:
The directory /etc/apache2/mods-enabled/ holds a reference to fastcgi.conf and fastcgi.load (mod_fastcgi is enabled).
The file fastcgi.conf contains the following (left untouched, I did not edit it):
<IfModule mod_fastcgi.c>
AddHandler fastcgi-script .fcgi
#FastCgiWrapper /usr/lib/apache2/suexec
FastCgiIpcDir /var/lib/apache2/fastcgi
</IfModule>
The relevant configuration file in /etc/apache2/sites-enabled/ contains the following (there is nothing more anywhere else about FastCGI specific configuration):
<DirectoryMatch /fcgi-bin>
Options +ExecCGI
<FilesMatch "^[^\.]+$">
SetHandler fastcgi-script
</FilesMatch>
</DirectoryMatch>
The test materials on the test virtual host:
There exists a fcgi-bin/test-perl.fcgi whose content is (the file is executable by all, and readable by owner and group):
#!/usr/bin/perl
use CGI::Fast qw(:standard);
$COUNTER = 0;
while (new CGI::Fast) {
    print header;
    print start_html("Fast CGI Rocks");
    print
        h1("Fast CGI Rocks"),
        "Invocation number ", b($COUNTER++),
        " PID ", b($$), ".",
        hr;
    print end_html;
}
There exists a fcgi-bin/test-python.fcgi whose content is (the file is executable by all, and readable by owner and group):
#!/usr/bin/python
def myapp(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello World!\n']

try:
    from flup.server.fcgi import WSGIServer
    WSGIServer(myapp).run()
except:
    import sys, traceback
    traceback.print_exc(file=open("errlog.txt", "a"))
The issue
Although both fcgi-bin/test-perl.fcgi and fcgi-bin/test-python.fcgi run normally when executed from the command line, neither seems to work when invoked over HTTP, e.g. as http://test.loc/fcgi-bin/test-perl.fcgi or http://test.loc/fcgi-bin/test-python.fcgi.
Nothing at all happens, and after some delay I get an Error 500, and the Apache error log contains multiple entries looking like:
[<date>] [error] [client <IP>] FastCGI: comm with (dynamic) server "/<…>/fcgi-bin/<script>.fcgi" aborted: (first read) idle timeout (30 sec), referer: <referrer>
[<date>] [error] [client <IP>] FastCGI: incomplete headers (0 bytes) received from server "<…>/fcgi-bin/<script>.fcgi", referer: <referrer>
I've spent hours and hours searching the web trying to understand why it does not work, and finally decided to give up and ask for some help here.
Any pointers and check list welcome. Feel free to ask for any missing details you may feel to be relevant or worth checking.
Enjoy a nice day.
-- edit --
Issue update
In my own reply to my own question, I mentioned a weird case where things suddenly looked fine for no apparent reason. I later discovered this was only partly true.
In the same virtual host, so with the exact same server configuration, some scripts, which are exactly the same (and have the exact same access rights), fail depending on their location.
As a reminder, here is what's in the site configuration:
<DirectoryMatch /fcgi-bin>
Options +ExecCGI
<FilesMatch "^[^\.]+$">
SetHandler fastcgi-script
</FilesMatch>
</DirectoryMatch>
With the above, only scripts in /fcgi-bin are handled as FastCGI scripts. But I also have some elsewhere (still for testing): one in /cgi-bin and one in / (i.e. in the public_html directory). For this purpose, .htaccess contains this entry:
Options +ExecCGI
AddHandler fastcgi-script .fcgi
So the two other FastCGI scripts should work the same as the one in /fcgi-bin, but they don't; for the time being, they invariably terminate with a connection timeout, just like the one in /fcgi-bin did at first.
This makes me feel something may be wrong with the mod_fastcgi module (a known bug? something else?). So far, this module seems to act rather randomly.
-- edit 2 --
The above, in the first edit, was an error of mine: the group was wrong on the other scripts; it had to be www-data, but it was not. So if something is wrong, stick to the answer I gave, that is, look at the FastCgiConfig directive and see whether it solves anything, or at least whether it honours the timeout options.
I will answer my own question, as it seems to be working now. However, the epilogue still looks weird.
Although the default configuration should be OK, I still wanted to review the "Module mod_fastcgi" documentation again. As I only wanted dynamic FastCGI, I focused on the FastCgiConfig directive alone, on purpose not going into the FastCgiServer and FastCgiExternalServer directives.
As there was no FastCgiServer at all in the default fastcgi.conf file, I started to set up my own configuration. For a first test, I wanted to use the -appConnTimeout option, at least to ask the server not to wait so long before returning an Error 500.
So I just added this to the site configuration (I did not touch fastcgi.conf), in the same file where the virtual hosts are configured:
FastCgiConfig -appConnTimeout 2
This was to tell the server to wait no more than 2 seconds, instead of the 30 seconds it had been waiting. I tried to invoke a FastCGI script to see if at least this configuration was working. I expected to get an error within a 2-second delay, but instead, the script ran without error.
What's weird is that I then tried to remove this option, to check whether that addition alone was what had been missing to make FastCGI scripts work. But after I commented out the option, it was still working, and the same after a full reboot.
I can't tell more; it looks weird, but this is the only thing I did, I did not edit anything else. I can only suggest that people who encounter a similar issue try the above.
Sorry that I can't explain what it did exactly. I really would like to know. It is just working now, but I don't know why.
#############
fastcgi.conf
FastCgiWrapper Off
peng.rl's answer solved my problem.
My Ceph radosgw couldn't get Apache's input at all. After setting FastCgiWrapper Off, I can capture data in Wireshark.