Perl CGI with C Wrapper

Basis of the problem: I have a university assignment requiring me to write a Perl/CGI-based website for a phonebook. That part is fine and I'm happy with it; however, I'm having issues wrapping the CGI files. I've done this once before with no issues, but I've been unable to replicate that success this time, despite doing the same thing.
Basic Perl file to show user ID's:
#!/usr/bin/perl -w
use English;
print "Content-type: text/html";
print "\n";
print "\n";
print "\n";
print "<html>\n";
print "<head><title>IDS.CGI</title></head>\n";
print "<body>\n";
print "<p>\nMy User ID is $UID\n</p>";
print "<p>\nMy effective User ID is $EUID\n</p>";
print "<p>\nMy Group ID is $GID\n</p>";
print "<p>\nMy effective Group ID is $EGID\n</p>";
print "\n</body>\n";
print "</html>\n";
Wrapper.C:
#include <stdio.h>
#include <unistd.h>

#define REAL_PATH "ids.pl"

int
main()
{
    execl( REAL_PATH, REAL_PATH, 0 );
    printf( "You should never see this message!\n" );
}
This is throwing an Internal Server Error (500). I've done my best to debug it, including checking the spacing of the headers, etc. It runs fine in the terminal but not in a web browser. The server's httpd error log shows a "Premature end of headers" error, but I cannot see how the headers end prematurely.
Any help anyone can offer would be greatly appreciated.

Like any system call, you should always check for an error from execl(). Normally you'd look at the return value, but here that isn't necessary: if execl() returns at all, it failed, because a successful exec replaces the program.
execl( REAL_PATH, REAL_PATH, 0 );
perror("exec of '"REAL_PATH"' failed");
This uses perror() to turn errno into a human-readable error string and print it to stderr.
I'd also avoid using #define for string constants; they're awkward to work with. Instead, use static const char REAL_PATH[] = "ids.pl", as suggested in this answer.
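Putting those suggestions together, here is a rough sketch of what the wrapper could look like (a sketch only: it assumes ids.pl sits in the directory the wrapper runs from, terminates the execl() argument list with (char *)NULL rather than a bare 0, and spells the script name out in the perror() message, since an array constant can't be concatenated into a string literal the way the macro was):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* String constant instead of a #define, as suggested above. */
static const char REAL_PATH[] = "ids.pl";

int main(void)
{
    /* A successful execl() never returns; reaching the next line means it failed. */
    execl(REAL_PATH, REAL_PATH, (char *)NULL);
    perror("exec of 'ids.pl' failed");
    return EXIT_FAILURE;
}
Returning EXIT_FAILURE also gives the web server a clear non-zero exit status if the exec fails.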
And I don't understand why you need a C wrapper. Some sort of weird restriction on your web server running interpreted code?

It turned out I wasn't compiling the C wrapper on the actual server, so I was getting machine code for a different architecture that wasn't compatible. Unfortunately, the server originally refused to compile it, and I forgot that when it finally did, it had to be compiled on that machine. Doh!

Related

DebugPrint pushes empty line (Serial connection)

Upon deciding to write a simple "Hello World!" program in EDK2, I stumbled upon the following problem:
As I am using a serial connection for debugging, the output of debug functions like DebugPrint gets redirected to my serial terminal (PuTTY in this case), well, sort of.
After compiling and executing the following program inside a UEFI shell, I simply get an empty line as a result. But after executing the same binary again, the line gets printed successfully in all its beauty.
This is the source code of the program I ran:
#include <Uefi.h>
#include <Library/DebugLib.h>

EFI_STATUS
efi_main(EFI_HANDLE ImageHandle,
         EFI_SYSTEM_TABLE *SystemTable)
{
    DebugPrint(DEBUG_INFO, "Hello World!\n");
    return EFI_SUCCESS;
}
Serial output: (screenshot not reproduced here; it showed only an empty line)
Note: I linked my program against IoLib, SerialPortLib and DebugLib
What could be causing this issue?
After a lot of fiddling around, I realised that I had manually specified the entry point as my main function (efi_main), when it should instead point to _ModuleEntryPoint when using the UefiDriverEntryPoint library from EDK2.
This solved my problem instantly :)

Detecting if stdout is a console with MS Visual compilation, with console provided by mingw64

I maintain a command line utility that generates binary data. The data can be redirected to stdout when requested.
This is valid when stdout is redirected to a pipe or a file, but less so when stdout is a console, as it would fill the console with garbage.
To protect users against such a mistake, the program must detect whether stdout is a console or not, and bail out when it is.
Now, this is nothing new, and a quick look around the Internet turns up multiple solutions. The main drawback is that there is no "universal" method, and Visual Studio requires its own flavor.
The console detector I'm using for Visual has a flaw: it doesn't detect that stdout is a console when the console is provided by mingw, which I believe means that it is mintty.
Here is the relevant code section:
#if defined(WIN32) || defined(_WIN32)
#  include <io.h>      /* _isatty */
#  include <windows.h> /* DeviceIoControl, HANDLE, FSCTL_SET_SPARSE */
#  include <stdio.h>   /* FILE */
static __inline int IS_CONSOLE(FILE* stdStream) {
    DWORD dummy;
    return _isatty(_fileno(stdStream)) &&
           GetConsoleMode((HANDLE)_get_osfhandle(_fileno(stdStream)), &dummy);
}
#endif
Note that the console detector works fine with the built-in Windows console (conhost.exe). It also works fine when the binary is compiled by mingw64. So the issue is mostly "compiled with Visual + console is mintty".
I've been looking around for potential backup solutions and found multiple variants of console detectors for Visual, using different logic. But none of them identifies mintty as a console; they all fail.
I'm wondering whether this is a problem with mintty, though I would expect that if it were, more applications would be affected. Yet searching for this issue on the Internet turns up relatively few complaints, and no solution.
Is it a known issue?
Is there a known solution?
mintty is a terminal emulator and does not present a console to a running application. When I need to run a true console program I've got to use winpty. For example winpty powershell will allow powershell to run correctly within mintty.
It is a known issue, one that several applications such as git work around. This is what I also found:
https://github.com/fusesource/jansi-native/issues/11
https://github.com/fusesource/jansi-native/commit/461068c67a38647d2890e96250636fc0117074f5
So apparently you should also check whether you are connected to a pipe with the following kind of name:
/*
* Check if this could be a MSYS2 pty pipe ('msys-XXXX-ptyN-XX')
* or a cygwin pty pipe ('cygwin-XXXX-ptyN-XX')
*/
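For reference, here is a rough sketch of that pipe-name check, modelled on the jansi-native fix linked above (an illustration under assumptions, not the exact code from that commit; GetFileInformationByHandleEx requires Windows Vista or later):
#include <windows.h>
#include <io.h>
#include <stdio.h>
#include <wchar.h>

/* Returns non-zero if the stream's handle is a pipe whose name matches the
 * MSYS2/Cygwin pty pattern quoted above. */
static int is_msys_pty(FILE* stdStream)
{
    union {
        FILE_NAME_INFO info;
        char pad[sizeof(FILE_NAME_INFO) + MAX_PATH * sizeof(WCHAR)];
    } u;
    HANDLE h = (HANDLE)_get_osfhandle(_fileno(stdStream));

    if (h == INVALID_HANDLE_VALUE)
        return 0;
    /* Ask for the name behind the handle; leave room for a terminator. */
    if (!GetFileInformationByHandleEx(h, FileNameInfo, &u, sizeof(u) - sizeof(WCHAR)))
        return 0;
    /* The returned name is not NUL-terminated; terminate it before searching. */
    u.info.FileName[u.info.FileNameLength / sizeof(WCHAR)] = L'\0';
    return (wcsstr(u.info.FileName, L"msys-") || wcsstr(u.info.FileName, L"cygwin-"))
        && wcsstr(u.info.FileName, L"-pty");
}
IS_CONSOLE() could then report a console whenever either GetConsoleMode() succeeds or is_msys_pty() returns non-zero.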

Nodejs exec for a C compiled binary displays stderr on stdout?

I have a C compiled binary which, if an error is encountered during execution, dumps the error to stderr. This binary is wrapped by NodeJS, where it is invoked via child_process exec. But upon error, even though the C code writes the information to stderr, I still seem to receive it in NodeJS on stdout, and not on stderr. So, essentially, running console.log(stdout); dumps out the error information, but console.log(stderr); dumps nothing. Does anyone have any idea about this, and do I need to redirect this information through a different channel so that I get the appropriate information on stdout and stderr in the NodeJS script?
I created a test version of the code and it seems to display the information correctly on stderr and stdout:
#include <stdio.h>

int main() {
    fprintf(stderr, "Whoops, this is stderr");
    fprintf(stdout, "Whoops, this is stdout");
    return 0;
}
and corresponding NodeJS Code:
#!/usr/bin/env node
var exec = require('child_process').exec,
    path = require('path');

var bin = path.join(__dirname, 'a.out');

var proc = exec(bin, function (error, stdout, stderr) {
    console.log('stdout:', stdout);
    console.log('stderr:', stderr);
});

proc.stdout.on('data', function (dat) { console.log(dat) });
and this is the output I get:
Whoops, this is stdout
stdout: Whoops, this is stdout
stderr: Whoops, this is stderr
I'm not sure why it happens in my code. Maybe it's because I am dumping a lot of information to stdout and stderr simultaneously, or some buggy module I have included is causing it. The actual code is too big to post here, but it seems I have to investigate where it might be going wrong.
I seem to have figured out the problem. The legacy C code that dumps out the information never actually used the FILE * passed to it. That was why all the information appeared on stdout and not on stderr. I fixed the API to take a FILE * argument and write the information to the correct FILE pointer, and now it works.
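For illustration, here is a minimal sketch of the kind of bug and fix described above (the function names are hypothetical, not the actual legacy API):
#include <stdio.h>

/* Before the fix: the stream argument is silently ignored. */
static void log_error_old(FILE *out, const char *msg)
{
    (void)out;                    /* bug: parameter never used */
    printf("error: %s\n", msg);   /* always goes to stdout */
}

/* After the fix: the message goes to whichever stream the caller supplies. */
static void log_error_new(FILE *out, const char *msg)
{
    fprintf(out, "error: %s\n", msg);
}

int main(void)
{
    log_error_old(stderr, "ends up on stdout despite passing stderr");
    log_error_new(stderr, "correctly ends up on stderr");
    return 0;
}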

Simple C Wrapper Around OpenSSH (with Cygwin on Windows)

I am packaging a program on Windows that expects to be able to externally call OpenSSH. So, I need to package ssh.exe with it and I need to force ssh.exe to always be called with a custom command line parameter (specifically -F to specify a config file it should use). There is no way to force the calling program to do this, and there are no simple ways to do this otherwise in Windows (that I can think of anyway - symlinks or cmd scripts won't work) so I was just going to write a simple wrapper in C to do it.
This is the code I put together:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>   /* memmove */
#include <unistd.h>

int main(int argc, char *argv[])
{
    int ret;
    char **newv = malloc((argc + 2) * sizeof(*newv));
    memmove(newv, argv, sizeof(*newv) * argc);
    newv[argc] = "-F ssh.conf";
    newv[argc+1] = 0;
    ret = execv("ssh.orig.exe", newv);
    printf("execv failed with return value of %i", ret);
    return ret;
}
I then compile this code using GCC 4.6.3 in Cygwin and it runs without error; however, there is a strange behavior with regard to input and output. When you type at the console (confirming the authenticity of the host, entering a password, etc.), only part of the input appears on the console. For example, if I type the word 'yes' and press Enter, only the 'e' appears on the console and SSH displays an error about needing to type 'yes' or 'no'. Doing this from the Windows command prompt results in the rest of the input going back to the command prompt, so when you type 'yes' and press Enter, you get the ''yes' is not recognized as an internal or external command...' message, as if the input were being typed at the prompt itself. Eventually SSH times out.
So, I'm obviously missing something here, and I'm assuming it has something to do with the way execv works (at least the POSIX Cygwin version of it).
Is there something I'm missing here or are there any alternatives? I was wondering if maybe I need to fork it and redirect the I/O to the fork (although fork() doesn't seem to work - but there are other issues there on Windows). I tried using _execv from process.h but I was having issues getting the code right for that (also could have been related to trying to use gcc).
It's also possible that there may be a non-programming way to do this that I haven't thought of, but all of the possibilities I've tried don't seem to work.
Thoughts?
I ended up finding a solution to this problem. I'm sure there were other ways to do this, but this seems to fix the issue and works well. I've replaced the execv line with the following code:
ret = spawnv(P_WAIT, "ssh.orig.exe", newv);
You have to use P_WAIT, otherwise the parent process completes and exits and you still have the same problem as before. P_WAIT makes the parent process wait, while input and output are still passed through to the child process.
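For completeness, here is a sketch of the full wrapper with that change applied (a sketch only: it assumes Cygwin's <process.h> declares spawnv() and P_WAIT, and it also splits -F and the file name into separate argv entries rather than passing "-F ssh.conf" as a single argument, which is generally safer):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <process.h>

int main(int argc, char *argv[])
{
    int ret;
    char **newv = malloc((argc + 3) * sizeof(*newv));
    if (!newv) {
        perror("malloc");
        return 1;
    }

    /* Copy the original arguments, then append the forced -F option. */
    memmove(newv, argv, sizeof(*newv) * argc);
    newv[argc] = "-F";
    newv[argc + 1] = "ssh.conf";
    newv[argc + 2] = NULL;

    /* P_WAIT keeps the wrapper alive so console I/O stays wired to the child. */
    ret = spawnv(P_WAIT, "ssh.orig.exe", newv);
    if (ret == -1)
        perror("spawnv of ssh.orig.exe failed");
    return ret;
}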

getlogin() c function returns NULL and error "No such file or directory"

I have a question regarding the getlogin() function. I tried to get the login name of my account from a C program using this function, but the function returns NULL. Using perror shows that the error is "No such file or directory".
I don't understand what the problem is. Is there a way to get the user's login name in a program?
Here is a sample code:
#include <stdio.h>
#include <unistd.h>

int main()
{
    char *name;
    name = getlogin();
    perror("getlogin() error");
    //printf("This is the login info: %s\n", name);
    return 0;
}
And this is the output: getlogin() error: No such file or directory
Please let me know how to get this right.
Thanks.
getlogin is an unsafe and deprecated way of determining the logged-in user. It's probably trying to open a record of logged-in users, perhaps utmp or something. The correct way to determine the user you're running as (which might not be the same as the logged-in user, but is almost always better to use anyway) is getpwuid(getuid()).
Here is a good link I found explaining that it may not work: getlogin
Here is a quote from it:
Unfortunately, it is often rather easy to fool getlogin(). Sometimes it does not work at all, because some program messed up the utmp file
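A minimal sketch of the getpwuid(getuid()) approach mentioned above:
#include <stdio.h>
#include <unistd.h>
#include <pwd.h>

int main(void)
{
    /* Look up the passwd entry for the user this process is running as. */
    struct passwd *pw = getpwuid(getuid());
    if (!pw) {
        perror("getpwuid() error");
        return 1;
    }
    printf("Running as user: %s\n", pw->pw_name);
    return 0;
}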
It works fine for me if I comment out the perror call.
From the man page:
getlogin() returns a pointer to a string containing the name of the user logged in on the controlling terminal of the process, or a null pointer if this information cannot be determined.
So you should do:
#include <stdio.h>
#include <unistd.h>

int main()
{
    char *name;
    name = getlogin();
    if (!name)
        perror("getlogin() error");
    else
        printf("This is the login info: %s\n", name);
    return 0;
}
According to the man page the error (ENOENT) means:
There was no corresponding entry in the utmp-file.
I typically use getpwent() along with a call to geteuid() and getegid(). This gives me all of the information that I might possibly need to know (at least as far as /etc/passwd has to offer) and tells me if I'm running as setuid / setgid, which is helpful when programming defensively.
I have written several programs for my company that outright refuse to work if someone tries to setuid them and change ownership to root, or refuse to run as root if being called by a system user (www-data, nobody, etc).
As others have said, reading from utmp is a very bad idea for this purpose.
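As a rough illustration of that kind of defensive check (a sketch, not the exact logic used in those programs):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Refuse to run if the binary has been made setuid/setgid. */
    if (geteuid() != getuid() || getegid() != getgid()) {
        fprintf(stderr, "Refusing to run setuid/setgid.\n");
        return EXIT_FAILURE;
    }
    /* Refuse to run with root privileges. */
    if (geteuid() == 0) {
        fprintf(stderr, "Refusing to run as root.\n");
        return EXIT_FAILURE;
    }
    printf("ID checks passed, continuing.\n");
    return EXIT_SUCCESS;
}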
