C language FastCGI with Nginx - c

I am attempting to run a fastcgi app written in C language behind the Nginx web server. The web browser never finishes loading and the response never completes. I am not sure how to approach it and debug. Any insight would be appreciated.
The hello world application was taken from fastcgi.com and simplified to look like this:
#include "fcgi_stdio.h"
#include <stdlib.h>
int main(void)
{
while(FCGI_Accept >= 0)
{
printf("Content-type: text/html\r\nStatus: 200 OK\r\n\r\n");
}
return 0;
}
The resulting executable is started with either one of:
cgi-fcgi -connect 127.0.0.1:9000 a.out
or
spawn-fcgi -a120.0.0.1 -p9000 -n ./a.out
Nginx configuration is:
server {
listen 80;
server_name _;
location / {
# host and port to fastcgi server
root /home/user/www;
index index.html;
fastcgi_pass 127.0.0.1:9000;
}
}

You need to call FCGI_Accept in the while loop:
while(FCGI_Accept() >= 0)
You have FCGI_Accept >= 0 in your code. Without the parentheses, that compares the address of the FCGI_Accept function to 0. Since the address of an existing function is never 0, the comparison is always true, but the function is never actually invoked, so no FastCGI request is ever accepted.
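For reference, a corrected version of the hello-world above (a minimal sketch; the response body line is added here only so the browser has something to display):
#include "fcgi_stdio.h" /* wraps stdio.h so printf writes to the FastCGI stream */
#include <stdlib.h>

int main(void)
{
    /* FCGI_Accept() blocks until the next request arrives and
       returns a negative value when the process should shut down. */
    while (FCGI_Accept() >= 0)
    {
        printf("Content-type: text/html\r\n"
               "Status: 200 OK\r\n"
               "\r\n"
               "Hello from FastCGI\r\n");
    }
    return 0;
}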

Here's a great example of nginx, ubuntu, c++ and fastcgi.
http://chriswu.me/blog/writing-hello-world-in-fcgi-with-c-plus-plus/
If you want to run his code, I've put it into a git repo with instructions. You can check it out and run it for yourself. I've only tested it on Ubuntu.
https://github.com/homer6/fastcgi

After your application handles FastCGI requests correctly, you need to take care of starting the application. nginx will never spawn FCGI processes itself, so you need a daemon taking care of that.
I recommend using uwsgi for managing FCGI processes. It is capable of spawning worker processes that are ready for input, and of restarting them when they die. It is highly configurable and easy to install and use.
http://uwsgi-docs.readthedocs.org/en/latest/
Here is my config:
[uwsgi]
fastcgi-socket = /var/run/apc.sock
protocol = fastcgi
worker-exec = /home/app/src/apc.bin
spooler = /home/app/spooler/
processes = 15
enable-threads = true
master = true
chdir = /home/app/
chmod-socket = 777
This integrates nicely as a systemd service, but it can also run without one.
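As a rough sketch (assuming the ini above is saved as /etc/uwsgi/apc.ini and the uwsgi binary lives at /usr/bin/uwsgi; adjust both paths to your setup), the matching unit file could look like:
[Unit]
Description=uWSGI FastCGI application server
After=network.target

[Service]
# adjust the binary path and ini location to your installation
ExecStart=/usr/bin/uwsgi --ini /etc/uwsgi/apc.ini
Restart=on-failure
KillSignal=SIGQUIT

[Install]
WantedBy=multi-user.target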

Try with:
$ cgi-fcgi -start -connect localhost:9000 ./hello
It works for me.
I'm using archlinux and following the instructions at:
https://wiki.archlinux.org/index.php/Nginx

You can try this
https://github.com/Taymindis/ngx-c-handler
It is built on top of FastCGI, handles multiple requests, and provides some core features as well, such as mapping handler functions to nginx locations.
To start up nginx with a C/C++ service:
https://github.com/Taymindis/ngx-c-handler/wiki/How-to-build-a-cpp-service-as-c-service-interface

Related

Why does my program work fine by running it directly but not as a service? Linux C

Good day guys,
I am trying to build and run a program on Linux (Raspberry Pi) as a service.
It is a sample application that uses the Cerence SDK C API that implements a wake-up-word (WUW) plus command utterance recognition.
I can execute it by ./name.exe or using the Makefile commands.
The problem is that when I execute the program by console it works fine, without any problem.
When I try to execute it as a service (using systemd, crontab, or rc.local), an error occurs.
This is the function that gives me error:
printf("Selecting audio configuration %s\n", audioScenarioName);
rc = nuance_audio_IAudioManager_activateScenario(audioMgr, audioScenarioName);
if (NUANCE_COMMON_OK != rc) {
printf("Audio scenario activation failed: %d\n", rc); <-- returns 1 (error, impossible to activate scenario)
return rc;
}
activateScenario is a function that simply selects the correct mic (audioScenarioName), as described by a JSON file, using the audio manager (audioMgr).
Unfortunately this function just returns 1 if something goes wrong and the program closes; there is nothing else to go on.
This is the JSON:
"type": "AudioInput",
"name": "mic_input",
"adapter_type": "CUSTOM_AUDIO",
"adapter_params": {
"device_name": "default"
},
"audio_format": { "uses": "16khz_1ch" }
The service should be running with root permissions (the default).
I also tried by setting the whole folder as chmod -R 777 as a test, but same problem.
This is my service:
[Unit]
Description=My Service
[Service]
Type=simple
ExecStart=+/home/pi/.../nameexec
Restart=on-failure
RestartSec=5
KillMode=process
[Install]
WantedBy=multi-user.target
I've also added the absolute path of the lib directory it needs to the ld.so.conf file.
The only files I put in that directory are the .so libraries, not the .h headers.
I am now trying to understand what might be different about starting the same executable but in different ways.
Could it be a permissions issue? Or is it not detecting the microphone? Any library out of place?
I really don't know why it works with the classic command and not as a service.
Can someone please help me with this?
Thank you in advance!
I succeeded!
The problem was the microphone being used.
On the Raspbian Desktop version, I had set the mic from the bottom right of the taskbar and changed the default input/output devices.
But those settings are apparently not system-wide and are not used by services running in the background (even though "User=" is set to "pi").
So I had to change the alsa.conf file:
sudo nano /usr/share/alsa/alsa.conf
Then find and edit these lines:
defaults.ctl.card cardnumber
defaults.pcm.card cardnumber
You can find the card number by running arecord -l.
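For example, if arecord -l lists the USB microphone as card 1 (your card number will likely differ), the two lines would become:
defaults.ctl.card 1
defaults.pcm.card 1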

Running C code using FastCGI & NGINX

I am trying to run C code using FastCGI and NGINX. Right now, after following all the steps in this website: http://chriswu.me/blog/writing-hello-world-in-fcgi-with-c-plus-plus/
I am at the step where I am about to run spawn-fcgi. However, the system that I must use is a 32-bit system where commands such as sudo apt-get install are not supported. I tried copying the spawn-fcgi binary over from my 64-bit system and running it like this: ./spawn-fcgi -p 8000 -n hello_world, but it gives an error saying it cannot execute the binary file. In fact, file spawn-fcgi reports that it is a 64-bit LSB executable, and since I am running it on a 32-bit system, that explains the "cannot execute binary file" error.
What I'm wondering is whether there is any way I could run a C program using FastCGI without calling spawn-fcgi or cgi-fcgi, or whether there is any way to get these binaries in 32-bit. I tried searching online for 32-bit downloads of FastCGI, but it seems fastcgi.com is down, as I am unable to access the website.
Please let me know if I've left out any crucial information and I'll be glad to provide it. Thanks!
By using the API provided by the <fcgiapp.h> header, you can open the listening socket yourself, which is what spawning via external tools normally does for you.
You can get a TCP socket file descriptor like this:
int sockfd = FCGX_OpenSocket("127.0.0.1:9000", 100);
...or using Unix sockets:
int sockfd = FCGX_OpenSocket("/var/run/fcgi.sock", 100);
With the socket you can then:
FCGX_Request req;
FCGX_InitRequest(&req, sockfd, 0);
while (FCGX_Accept_r(&req) >= 0) {
    FCGX_FPrintF(req.out, "Content-Type: text/html\n\n");
    FCGX_FPrintF(req.out, "hello world");
    FCGX_Finish_r(&req);
}
Once you compile, you can execute the binary directly without using spawn-fcgi or cgi-fcgi.
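Putting the pieces together, a minimal self-contained sketch might look like this (assuming the libfcgi development package is available; link with -lfcgi). Note that FCGX_Init() must be called once before FCGX_InitRequest():
#include <fcgiapp.h>

int main(void)
{
    /* Initialize the library before any other FCGX_* call. */
    FCGX_Init();

    /* Listen on the address nginx's fastcgi_pass points at;
       the second argument is the listen() backlog. */
    int sockfd = FCGX_OpenSocket("127.0.0.1:9000", 100);

    FCGX_Request req;
    FCGX_InitRequest(&req, sockfd, 0);

    while (FCGX_Accept_r(&req) >= 0) {
        FCGX_FPrintF(req.out, "Content-Type: text/html\r\n\r\n");
        FCGX_FPrintF(req.out, "hello world");
        FCGX_Finish_r(&req);
    }
    return 0;
}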

How to debug cgi program written in C and running in Apache2?

I have a complex CGI executable written in C. I configured it in Apache2 and now it is running successfully. How can I debug this program at the source level, e.g. set breakpoints and inspect variables? Any tools like gdb or Eclipse? Any tutorial on how to set up the debugging environment?
Thanks in advance!!
The CGI interface basically consists of passing the HTTP request to the executable's standard input (plus environment variables) and reading the response from its standard output. Therefore you can write test requests to files and manually execute your CGI without having to use Apache. The debugging can then be done with GDB:
gdb ./my_cgi
>> break some_func
>> run < my_req.txt
with my_req.txt containing the full request:
GET /some/func HTTP/1.0
Host: myhost
If you absolutely need the CGI to be run by Apache it may become tricky to attach GDB to the right process. You can for example configure Apache to have only one worker process, attach to it with gdb -p and use set follow-fork-mode child to make sure it switches to the CGI process when a request arrives.
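As a rough sketch of that approach (the worker PID below is hypothetical):
gdb -p 12345                      # PID of the single Apache worker
(gdb) set follow-fork-mode child  # follow into the forked CGI process
(gdb) continue
# now issue the HTTP request that triggers your CGI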
I did this: in the CGI's main I added code that looks for an existing file, like /var/tmp/flag. While the file exists, the program spins in a loop, which leaves enough time to attach to the CGI process via gdb. After that I delete /var/tmp/flag, and from then on I can debug my CGI code.
#include <fstream>
#include <unistd.h>

bool file_exists(const char *filename)
{
    std::ifstream ifile(filename);
    return ifile.good();
}

int cgiMain()
{
    while (file_exists("/var/tmp/flag"))
        sleep(1);
    ...
    your code
Unless FastCGI or SCGI is used, the CGI process is short-lived and you need to delay its exit to have enough time to attach the debugger while the process is still running. For casual debugging the easiest option is to simply use sleep() in an endless loop at the breakpoint location and exit the loop with the debugger once it is attached to the program.
Here's a small example CGI program:
#include <stdio.h>
#include <unistd.h>

void wait_for_gdb_to_attach() {
    int is_waiting = 1;
    while (is_waiting) {
        sleep(1); // sleep for 1 second
    }
}

int main(void) {
    wait_for_gdb_to_attach();
    printf("Content-Type: text/plain;charset=us-ascii\n\n");
    printf("Hello!");
    return 0;
}
Suppose it is compiled into cgi-debugging-example, this is how you would attach the debugger once the application enters the endless loop:
sudo cgdb cgi-debugging-example $(pgrep cgi-debugging)
Next you need to exit the infinite loop and wait_for_gdb_to_attach() function to reach the "breakpoint" in your application. The trick here is to step out of sleep functions until you reach wait_for_gdb_to_attach() and set the value of the variable is_waiting with the debugger so that while (is_waiting) exits:
(gdb) finish
Run till exit from 0x8a0920 __nanosleep_nocancel () at syscall-template.S:81
0x8a07d4 in __sleep (seconds=0) at sleep.c:137
(gdb) finish
Run till exit from 0x8a07d4 in __sleep (seconds=0) at sleep.c:137
wait_for_gdb_to_attach () at cgi-debugging-example.c:6
Value returned is $1 = 0
(gdb) set is_waiting = 0 # <<<<<< to exit while
(gdb) finish
Run till exit from wait_for_gdb_to_attach () cgi-debugging-example.c:6
main () at cgi-debugging-example.c:13
Once you are out of wait_for_gdb_to_attach(), you can continue debugging the program or let it run to completion.
Full example with detailed instructions here.
I'm not sure how to use gdb or other frontends in eclipse, but I just debugged my CGI program with gdb. I'd like to share something that other answers didn't mention, that CGIs usually need to read request meta-variables defined in RFC 3875#4.1 with getenv(3). Popular request variables in my mind are:
SCRIPT_NAME
QUERY_STRING
CONTENT_LENGTH
CONTENT_TYPE
REMOTE_ADDR
These variables are provided by HTTP servers such as Apache. When debugging with gdb, we need to set these values on our own with set environment. In my case only a few variables are needed (and the source code is very old, so it still uses SCRIPT_URL instead of SCRIPT_NAME), so here's my example:
gdb cgi_name
set environment SCRIPT_URL /path/to/sub/cgis
set environment QUERY_STRING p1=v1&p2=v2
break foo.c:42
run
For me, neither of the two solutions presented above for debugging the CGI in gdb without a web server worked on its own.
Maybe the second solution works for a GET request.
I needed a combination of both: first setting the environment variables from RFC 3875 (not sure if all of them are really needed),
then passing only the POST parameters (not the complete request) via stdin from a file.
gdb cgi_name
set environment REQUEST_METHOD=POST
set environment CONTENT_LENGTH=1337
set environment CONTENT_TYPE=application/json
set environment SCRIPT_NAME=my.cgi
set environment REMOTE_ADDR=127.0.0.1
run < ./params.txt
With params.txt:
{"user":"admin","pass":"admin"}

How to solve "ptrace operation not permitted" when trying to attach GDB to a process?

I'm trying to attach a program with gdb but it returns:
Attaching to process 29139
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
gdb-debugger returns "Failed to attach to process, please check privileges and try again."
strace returns "attach: ptrace(PTRACE_ATTACH, ...): Operation not permitted"
I changed "kernel.yama.ptrace_scope" 1 to 0 and /proc/sys/kernel/yama/ptrace_scope 1 to 0 and tried set environment LD_PRELOAD=./ptrace.so with this:
#include <stdio.h>

int ptrace(int i, int j, int k, int l) {
    printf(" ptrace(%i, %i, %i, %i), returning -1\n", i, j, k, l);
    return 0;
}
But it still returns the same error. How can I attach it to debuggers?
If you are using Docker, you will probably need these options:
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined
If you are using Podman, you will probably need its --cap-add option too:
podman run --cap-add=SYS_PTRACE
This is due to kernel hardening in Linux; you can disable this behavior by echo 0 > /proc/sys/kernel/yama/ptrace_scope or by modifying it in /etc/sysctl.d/10-ptrace.conf
See also this article about it in Fedora 22 (with links to the documentation) and this comment thread about Ubuntu.
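As a rough sketch of both ways just mentioned (temporary and persistent):
# temporary, until reboot
echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope

# persistent: make /etc/sysctl.d/10-ptrace.conf contain
kernel.yama.ptrace_scope = 0
# then reload the sysctl settings
sudo sysctl --system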
I would like to add that I needed --security-opt apparmor=unconfined along with the options that @wisbucky mentioned. This was on Ubuntu 18.04 (both Docker client and host). Therefore, the full invocation for enabling gdb debugging within a container is:
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --security-opt apparmor=unconfined
Just want to emphasize a related answer. Let's say that you're root and you've done:
strace -p 700
and get:
strace: attach: ptrace(PTRACE_SEIZE, 700): Operation not permitted
Check:
grep TracerPid /proc/700/status
If you see something like TracerPid: 12, i.e. not 0, that's the PID of the program that is already using the ptrace system call. Both gdb and strace use it, and there can only be one active at a time.
Not really addressing the above use-case but I had this problem:
Problem: It happened that I started my program with sudo, so when launching gdb it was giving me ptrace: Operation not permitted.
Solution: sudo gdb ...
As most of us land here for Docker issues I'll add the Kubernetes answer as it might come in handy for someone...
You must add the SYS_PTRACE capability in your pod's security context
at spec.containers.securityContext:
securityContext:
  capabilities:
    add: [ "SYS_PTRACE" ]
There are 2 securityContext keys in 2 different places. If it tells you that the key is not recognized, then you misplaced it; try the other one.
You probably need to run as the root user too, instead of the image default. So in the other security context (spec.securityContext) add:
securityContext:
  runAsUser: 0
  runAsGroup: 0
  fsGroup: 101
FYI: 0 is root. The fsGroup value is unknown to me; for what I'm doing I don't care, but you might.
Now you can do:
strace -s 100000 -e write=1 -e trace=write -p 16
You won't get the permission denied error anymore!
BEWARE: This is a Pandora's box. Having this in production is NOT recommended.
I was running my code with higher privileges to deal with raw Ethernet sockets, using the setcap command on a Debian distribution. I tried the above solution, echo 0 > /proc/sys/kernel/yama/ptrace_scope,
and also modifying /etc/sysctl.d/10-ptrace.conf, but that did not work for me.
What did work was setting capabilities on gdb in its installed location (/usr/bin/gdb): /sbin/setcap CAP_SYS_PTRACE=+eip /usr/bin/gdb.
Be sure to run this command with root privileges.
Jesup's answer is correct; it is due to Linux kernel hardening. In my case, I am using Docker Community for Mac, and in order to change the flag I must enter the LinuxKit shell using Justin Cormack's nsenter image (ref: https://www.bretfisher.com/docker-for-mac-commands-for-getting-into-local-docker-vm/ ).
docker run -it --rm --privileged --pid=host justincormack/nsenter1
/ # cat /etc/issue
Welcome to LinuxKit
## .
## ## ## ==
## ## ## ## ## ===
/"""""""""""""""""\___/ ===
{ / ===-
\______ O __/
\ \ __/
\____\_______/
/ # cat /proc/sys/kernel/yama/ptrace_scope
1
/ # echo 0 > /proc/sys/kernel/yama/ptrace_scope
/ # exit
Maybe someone has already attached to this process with gdb. Check with:
ps -ef | grep gdb
gdb can't attach to the same process twice.
I'm answering this old question because it is unaccepted and the other answers don't get to the point. The real answer may already be written in /etc/sysctl.d/10-ptrace.conf, as is the case for me under Ubuntu. This file says:
For applications launching crash handlers that need PTRACE, exceptions can
be registered by the debugee by declaring in the segfault handler
specifically which process will be using PTRACE on the debugee:
prctl(PR_SET_PTRACER, debugger_pid, 0, 0, 0);
So just do what it describes: keep /proc/sys/kernel/yama/ptrace_scope at 1 and add prctl(PR_SET_PTRACER, debugger_pid, 0, 0, 0); in the debuggee. Then the debuggee will allow that particular debugger to attach to it. This works without sudo and without a reboot.
Usually the debuggee also needs to wait (e.g. call waitpid) instead of exiting after the crash, so the debugger can find the debuggee's PID.
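A minimal sketch of that idea in the debuggee (here the debugger's PID is taken from a hypothetical DEBUGGER_PID environment variable purely for illustration; a real crash handler would already know it):
#include <stdio.h>
#include <stdlib.h>
#include <sys/prctl.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical: read the debugger's PID from the environment. */
    const char *pid_str = getenv("DEBUGGER_PID");
    if (pid_str != NULL) {
        pid_t debugger_pid = (pid_t)atoi(pid_str);
        /* Allow exactly this process to ptrace us, despite ptrace_scope=1. */
        if (prctl(PR_SET_PTRACER, debugger_pid, 0, 0, 0) != 0)
            perror("prctl(PR_SET_PTRACER)");
    }

    /* Keep the process alive so the debugger has time to attach. */
    printf("pid %d waiting for debugger...\n", (int)getpid());
    pause();
    return 0;
}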
If permissions are a problem, you probably will want to use gdbserver. (I almost always use gdbserver when I gdb, docker or no, for numerous reasons.) You will need gdbserver (Deb) or gdb-gdbserver (RH) installed in the docker image. Run the program in docker with
$ sudo gdbserver :34567 myprogram arguments
(pick a port number, 1025-65535). Then, in gdb on the host, say
(gdb) target remote 172.17.0.4:34567
where 172.17.0.4 is the IP address of the docker image as reported by /sbin/ip addr list run in the docker image. This will attach at a point before main runs. You can tb main and c to stop at main, or wherever you like. Run gdb under cgdb, emacs, vim, or even in some IDE, or plain. You can run gdb in your source or build tree, so it knows where everything is. (If it can't find your sources, use the dir command.) This is usually much better than running it in the docker image.
gdbserver relies on ptrace, so you will also need to do the other things suggested above. --privileged --pid=host sufficed for me.
If you deploy to other OSes or embedded targets, you can run gdbserver or a gdb stub there, and run gdb the same way, connecting across a real network or even via a serial port (/dev/ttyS0).
I don't know what you are doing with LD_PRELOAD or your ptrace function.
Why don't you try attaching gdb to a very simple program? Make a program that simply repeatedly prints Hello or something and use gdb --pid [hello program PID] to attach to it.
If that does not work then you really do have a problem.
Another issue is the user ID. Is the program that you are tracing setting itself to another UID? If it is then you cannot ptrace it unless you are using the same user ID or are root.
I faced the same problem and tried a lot of solutions; finally I found one, but I really don't know what the problem was. First I modified the ptrace_scope setting and logged into Ubuntu as root, but the problem still appeared. The strangest thing is that gdb showed me a message that says:
Could not attach to process. If your uid matches the uid of the target process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try again as the root user.
For more details, see /etc/sysctl.d/10-ptrace.conf
warning: process 3767 is already traced by process 3755 ptrace: Operation not permitted.
With the ps command in the terminal, process 3755 was not listed.
I found process 3755 in /proc/$pid, but I don't understand what it was!
Finally, I deleted the target file (foo.c) that I was trying to attach to via gdb and via a tracer C program using the PTRACE_ATTACH call, and in another folder I created another C program and compiled it.
The problem was solved and I was able to attach to the other process either with gdb or with the PTRACE_ATTACH call.
(gdb) attach 4416
Attaching to process 4416
I then sent a lot of signals to process 4416 and tested it with both gdb and ptrace; both ran correctly.
I really don't know what the problem was, but I don't think it is a bug in Ubuntu, even though a lot of sites refer to it as one, such as https://askubuntu.com/questions/143561/why-wont-strace-gdb-attach-to-a-process-even-though-im-root
Extra information:
If you want to make changes to the interfaces, such as adding an OVS bridge, you must use --privileged instead of --cap-add NET_ADMIN.
sudo docker run -itd --name=testliz --privileged --cap-add=SYS_PTRACE --security-opt seccomp=unconfined ubuntu
If you are using FreeBSD, edit /etc/sysctl.conf, change the line
security.bsd.unprivileged_proc_debug=0
to
security.bsd.unprivileged_proc_debug=1
Then reboot.

How to FastCGI in C?

I have a website where each webpage is compiled into a binary (I have 100 webpages, therefore I have 100 binaries). Apache's .htaccess contains the line "SetHandler cgi-script" which instructs apache to use CGI when a binary (webpage) is requested.
How can I modify this website to use FastCGI instead of CGI?
Do I just have to include this header and use this while loop (from FastCGI.com) in each of the 100 binaries, and modify .htaccess to "SetHandler fastcgi-script"?
#include "fcgi_stdio.h" // instead of stdio.h
while(FCGI_Accept() >= 0)
So how will FastCGI work exactly? Will Apache dispatch webpages using one persistent process for the entire website, or will there be one persistent process for each of the 100 binaries?
A FastCGI script is a network server that listens for connections in a loop. The web server forwards requests to the FCGI server, which sends back some dynamically generated content, all over a socket connection. Thus an FCGI script is faster than CGI because it is not re-spawned for each request.
I don't understand why you need 100 binaries for 100 pages. A single script is enough to generate content for 100 pages, based on some request parameter. The FCGI server should also scale pretty well for multiple connections as it is usually made to poll on the socket file descriptor. (Look at the code of the implementation to make sure of this).
To generate 100 pages you don't necessarily need 100 if statements. Consider this pseudo-code:
hash_table page_generators; // map page types to function objects (or function pointers)

page_generators["login_page"] = handle_login_page_fn;
page_generators["contact_page"] = handle_contact_page_fn;
// ... and so on

// request handler
page_type = request.get("page_type");
fn = page_generators[page_type];
if (fn == NULL)
    return "<html><body>Invalid request</body></html>";
else
    return fn(request);
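In plain C the same idea might look roughly like the sketch below (illustrative only: the page names and handler functions are made up, and the Content-Type header plus the FCGI_Accept() loop shown earlier would wrap the dispatch):

#include <stdio.h>
#include <string.h>

/* A handler writes the page for the given query string. */
typedef void (*page_handler)(const char *query);

/* Illustrative handlers; real ones would generate the actual pages. */
static void handle_login_page(const char *query)
{
    (void)query; /* unused in this sketch */
    printf("<html><body>login page</body></html>");
}

static void handle_contact_page(const char *query)
{
    (void)query;
    printf("<html><body>contact page</body></html>");
}

/* Dispatch table mapping page names to handler functions. */
static const struct {
    const char  *name;
    page_handler handler;
} page_generators[] = {
    { "login_page",   handle_login_page },
    { "contact_page", handle_contact_page },
    /* ... and so on */
};

static void dispatch(const char *page_type, const char *query)
{
    for (size_t i = 0; i < sizeof page_generators / sizeof page_generators[0]; i++) {
        if (strcmp(page_generators[i].name, page_type) == 0) {
            page_generators[i].handler(query);
            return;
        }
    }
    printf("<html><body>Invalid request</body></html>");
}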
