I'm having trouble getting an FCGI application working in C using nginx. I'm using spawn-fcgi to create the socket and run my application (which I have named paste).
I'm thinking it must be a problem with my application, but I'm fairly certain I copied all the relevant parts from the example source located here.
This is the error nginx is giving me:
[error] 53300#0: *4 upstream prematurely closed connection while
reading response header from upstream, client: 127.0.0.1, server:
localhost, request: "GET /test HTTP/1.1", upstream:
"fastcgi://unix:/tmp/cfcgi.sock:", host: "localhost"
Here's the source for the application:
#include <fcgi_stdio.h>
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char **argv) {
    while (FCGI_Accept() >= 0) {
        printf("Content-type: text/html\r\n\r\n");
        printf("Hostname: %s", getenv("SERVER_HOSTNAME"));
    }
    return EXIT_SUCCESS;
}
The relevant config changes in nginx:
location /test {
    include fastcgi_params;
    fastcgi_pass unix:/tmp/cfcgi.sock;
}
And the spawn-fcgi command:
spawn-fcgi -s /tmp/cfcgi.sock -M 0777 -P cfcgi.pid -- paste
The variables that nginx passes to a FastCGI server are listed in /etc/nginx/fastcgi_params, which you include in your configuration file, and there is no such variable as SERVER_HOSTNAME. The closest one is SERVER_NAME.
getenv() returns NULL when the requested variable is not found, which is what happens in your case. That null pointer is then passed to printf for the %s conversion, which is undefined behavior and crashes your application, hence the "upstream prematurely closed connection" error.
So, to fix the problem you can either add a SERVER_HOSTNAME parameter to your fastcgi_params file (don't forget to reload nginx after that), or replace SERVER_HOSTNAME with SERVER_NAME in your application.
I've set up a TCP server with Docker for a CTF competition I'm going to be hosting. The problem is, when I nc (netcat) into the running TCP server on my localhost machine, the client does not receive the output of the executable.
I've never set up a TCP server before, so this is new to me.
Bash Script
#!/bin/sh -e
exec tcpserver -v -P -R -H -l 0 0.0.0.0 1337 ./buff
Short C Script
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#define FLAG ""
int main() {
    int safe = 0xFACE0FF;
    char name[32] = "";

    fprintf(stderr, "So you wanna take my flag? ");
    read(0, name, 0x32);

    if (safe == 0xdec0de) {
        fprintf(stderr, "Here's my flag: %s", FLAG);
    } else {
        puts("Goodluck dude!");
    }
}
I want the client to receive output from, and send input to, the above C script.
The bash script above successfully creates a TCP server listening for incoming connections on port 1337; however, when a client connects to it, they can only send input and never see any output.
Check the man page for read(). You ask it for up to 0x32 bytes, which is 50 in decimal and probably not what you intended. That's why it seems like the program produces no output: it is still sitting in read() waiting for input before it continues.
You might want to consider getline() or something similar instead.
Also, make sure the size in the variable declaration and the size you pass as the buffer length use the same base: name holds 32 bytes, but 0x32 is 50, so the read can also overflow the buffer.
I have a Knative setup with two worker nodes. After successfully testing the helloworld-go sample program, I tried to write a simple C program that just prints the environment variables "REQUEST_METHOD" and "QUERY_STRING". However, the pod status is "CrashLoopBackOff", and kubectl describe shows: "Readiness probe failed: Get http://192.168.203.181:8022/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers)". I suspect something is wrong with my code. If possible, please suggest what is wrong with it.
Dockerfile I used is this:
FROM gcc:4.9
COPY . /usr/src/test-c
WORKDIR /usr/src/test-c
RUN gcc -o test-c main.c
CMD ["./test-c"]
My C code for the same is:
#include <stdio.h>
#include <stdlib.h>
int main() {
    char* len_ = getenv("REQUEST_METHOD");
    printf("METHOD = %s, %s\n", len_, getenv("QUERY_STRING"));
}
My yaml file is:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-c
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: docker.io/avinashkrc/helloworld-c:latest
          env:
            - name: TARGET
              value: "Go Sample"
I was able to solve the problem by using Pistache for the REST API (I guess any framework should work).
I'm trying to submit a simple HTTP GET request in WebAssembly. For this purpose, I wrote this program (copied from Emscripten site with slight modifications):
#include <stdio.h>
#include <string.h>
#ifdef __EMSCRIPTEN__
#include <emscripten/fetch.h>
#include <emscripten.h>
#endif
void downloadSucceeded(emscripten_fetch_t *fetch) {
    printf("Finished downloading %llu bytes from URL %s.\n", fetch->numBytes, fetch->url);
    // The data is now available at fetch->data[0] through fetch->data[fetch->numBytes-1].
    emscripten_fetch_close(fetch); // Free data associated with the fetch.
}

void downloadFailed(emscripten_fetch_t *fetch) {
    printf("Downloading %s failed, HTTP failure status code: %d.\n", fetch->url, fetch->status);
    emscripten_fetch_close(fetch); // Also free data on failure.
}

unsigned int EMSCRIPTEN_KEEPALIVE GetRequest() {
    emscripten_fetch_attr_t attr;
    emscripten_fetch_attr_init(&attr);
    strcpy(attr.requestMethod, "GET");
    attr.attributes = EMSCRIPTEN_FETCH_LOAD_TO_MEMORY;
    attr.onsuccess = downloadSucceeded;
    attr.onerror = downloadFailed;
    emscripten_fetch(&attr, "http://google.com");
    return 1;
}
When I compile it using $EMSCRIPTEN/emcc main.c -O1 -s MODULARIZE=1 -s WASM=1 -o main.js --emrun -s FETCH=1 I get the error
ERROR:root:FETCH not yet compatible with wasm (shared.make_fetch_worker is asm.js-specific)
Is there a way to run HTTP requests from WebAssembly? If yes, how can I do it?
Update 1: The following code attempts to send a GET request, but fails due to CORS issues.
#include <stdio.h>
#include <string.h>
#ifdef __EMSCRIPTEN__
#include <emscripten/fetch.h>
#include <emscripten.h>
#endif
unsigned int EMSCRIPTEN_KEEPALIVE GetRequest() {
    EM_ASM({
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "http://google.com");
        xhr.send();
    });
    return 1;
}
No, you cannot execute HTTP requests from WebAssembly (or access the DOM, or any other browser APIs). WebAssembly by itself doesn't have any access to its host environment, hence it has no built-in IO capabilities.
You can however export functions from WebAssembly, and import functions from the host environment. This will allow you to make HTTP requests indirectly via the host.
I recently ran across this issue; emscripten fixed it in https://github.com/kripken/emscripten/pull/7010.
You should now be able to use FETCH=1 and WASM=1 together.
Unfortunately, there is no way to make a CORS request to Google.com from any website other than Google.com.
From MDN:
For security reasons, browsers restrict cross-origin HTTP requests initiated from within scripts. For example, XMLHttpRequest and the Fetch API follow the same-origin policy. This means that a web application using those APIs can only request HTTP resources from the same origin the application was loaded from, unless the response from the other origin includes the right CORS headers.
Google does not include those headers.
Because JavaScript/WebAssembly runs on the client's machine (not yours) you could do nasty things if this wasn't in place, like make POST requests to www.mybankingwebsite.com/makeTransaction with the client's cookies.
If you want to point the code you have in Update 1 to your own site, or run it on Node.js, it should work fine.
I am trying to create a simple FastCGI app written in C:
#include <fcgiapp.h>
#include <stdio.h>

int main()
{
    int sockfd = FCGX_OpenSocket("127.0.0.1:9000", 1024);
    FCGX_Request request;

    FCGX_Init();
    FCGX_InitRequest(&request, sockfd, 0);

    while (FCGX_Accept_r(&request) == 0)
    {
        printf("Accepted\n");
        FCGX_FPrintF(request.out, "Content-type: text/html\r\n\r\n<h1>Hello World!</h1>");
        FCGX_Finish_r(&request);
    }
}
It works fine: when I call it from the browser, it displays a page with the "Hello World!" message.
The problem is that the code inside the "while" runs twice, i.e. I see the following output in the terminal:
[root@localhost example]$ ./hello
Accepted
Accepted
Why does it print "Accepted" twice per request?
If I put real code there, e.g. a DB query, it will be executed twice as well.
What program are you using to make the web request? Some web clients may also request an e.g. favicon.ico file, which may account for the second request to the fcgi process:
127.0.0.1 - - [29/Aug/2015:16:02:11 +0000] "GET / HTTP/1.1" 200 32 "-" "..."
127.0.0.1 - - [29/Aug/2015:16:02:11 +0000] "GET /favicon.ico HTTP/1.1" 200 32 "http://127.0.0.1/" "..."
This could be determined by inspecting the webserver logs, or adding debugging to the C program to show the request parameters. Using just telnet to request GET / HTTP/1.0, I do not see a double hit for a forward-everything-to-fcgi webserver configuration using the nginx webserver.
I am attempting to run a FastCGI app written in C behind the nginx web server. The web browser never finishes loading and the response never completes. I am not sure how to approach or debug this. Any insight would be appreciated.
The hello world application was taken from fastcgi.com and simplified to look like this:
#include "fcgi_stdio.h"
#include <stdlib.h>
int main(void)
{
while(FCGI_Accept >= 0)
{
printf("Content-type: text/html\r\nStatus: 200 OK\r\n\r\n");
}
return 0;
}
The resulting executable is started with either of:
cgi-fcgi -connect 127.0.0.1:9000 a.out
or
spawn-fcgi -a120.0.0.1 -p9000 -n ./a.out
Nginx configuration is:
server {
    listen 80;
    server_name _;

    location / {
        # host and port to fastcgi server
        root /home/user/www;
        index index.html;
        fastcgi_pass 127.0.0.1:9000;
    }
}
You need to call FCGI_Accept in the while loop:
while(FCGI_Accept() >= 0)
You have FCGI_Accept >= 0 in your code. I think that results in the address of the FCGI_Accept function being compared to 0. Since the function exists, the comparison is never false, but the function is not being invoked.
Here's a great example of nginx, ubuntu, c++ and fastcgi.
http://chriswu.me/blog/writing-hello-world-in-fcgi-with-c-plus-plus/
If you want to run his code, I've put it into a git repo with instructions. You can check it out and run it for yourself. I've only tested it on Ubuntu.
https://github.com/homer6/fastcgi
After your application handles FastCGI requests correctly, you need to take care of starting it. nginx will never spawn FCGI processes itself, so you need a daemon to do that.
I recommend using uwsgi for managing fcgi processes. It is capable of spawning worker-processes that are ready for input, and restarting them when they die. It is highly configurable and easy to install and use.
http://uwsgi-docs.readthedocs.org/en/latest/
Here is my config:
[uwsgi]
fastcgi-socket = /var/run/apc.sock
protocol = fastcgi
worker-exec = /home/app/src/apc.bin
spooler = /home/app/spooler/
processes = 15
enable-threads = true
master = true
chdir = /home/app/
chmod-socket = 777
This integrates nicely as a systemd service, but can also run without one.
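For the systemd route, a minimal unit sketch (the binary and ini paths are illustrative; Type=notify and KillSignal=SIGQUIT follow uwsgi's own systemd documentation):

```ini
[Unit]
Description=uWSGI FastCGI application server
After=network.target

[Service]
ExecStart=/usr/bin/uwsgi --ini /etc/uwsgi/app.ini
Restart=always
KillSignal=SIGQUIT
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target
```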
Try with:
$ cgi-fcgi -start -connect localhost:9000 ./hello
It works for me.
I'm using Arch Linux and following the instructions at:
https://wiki.archlinux.org/index.php/Nginx
You can try this:
https://github.com/Taymindis/ngx-c-handler
It is built on top of FastCGI, handles multiple requests, and provides some core features as well, such as mapping handler functions to nginx locations.
To start up nginx with a C/C++ service:
https://github.com/Taymindis/ngx-c-handler/wiki/How-to-build-a-cpp-service-as-c-service-interface