Simple HTTPS to HTTP proxy - google-app-engine

I'm trying to test out U2F on Google Appengine.
Unfortunately dev_appserver.py, the development app server for local testing, only serves plain HTTP, while the U2F standard requires that the page be served over HTTPS.
There are some options for proxy servers, including stunnel, stud, Pound and ngrok.
What I am doing will probably end up being an open source package, so I would like to keep the setup fairly straightforward and the dependency list restricted to widely available packages.
An ideal solution would be a command-line program along the lines of prog_name -listen localhost:8041 -proxy localhost:8040; in other words, a very simple command line setup.
The stud and pound programs seem like overkill.
The stunnel option seems to be the best and most common solution, but it would be better if it could be configured from the command line instead of a config file.
Ngrok is super-cool and seems to be along the right lines. It gives you a random server name though, which can be a problem since the U2F appId must match the server (if persistence matters), but other than that it's basically the right idea.
I have a vague memory of this being possible with openssl from the command line, but the only command that seems suitable is s_server, and that seems to provide only SSL reflection/debugging information, not the option to proxy requests per se. My memory must be faulty.
It would not be terribly difficult to write up a trivial Python server/client proxy, which leads me to believe there's probably a simple option out there already ... however, the search results have a dreadful signal-to-noise ratio.
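For concreteness, here is roughly the trivial proxy I have in mind; a minimal sketch in Python 3 (GET-only, no error handling), assuming a self-signed certificate pair in cert.pem/key.pem, e.g. from openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem:

    # Minimal HTTPS -> HTTP proxy sketch: terminates TLS on :8041 and
    # forwards GET requests to the plain-HTTP dev server on :8040.
    import http.client
    import http.server
    import ssl

    BACKEND = ("localhost", 8040)  # dev_appserver.py

    class ProxyHandler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            # Replay the request against the plain-HTTP backend.
            conn = http.client.HTTPConnection(*BACKEND)
            conn.request("GET", self.path, headers=dict(self.headers))
            resp = conn.getresponse()
            body = resp.read()
            self.send_response(resp.status)
            for name, value in resp.getheaders():
                # Let BaseHTTPRequestHandler manage connection framing itself.
                if name.lower() not in ("transfer-encoding", "connection"):
                    self.send_header(name, value)
            self.end_headers()
            self.wfile.write(body)

    httpd = http.server.HTTPServer(("localhost", 8041), ProxyHandler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("cert.pem", "key.pem")
    httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    httpd.serve_forever()

Something packaged and battle-tested along these lines is what I'm hoping already exists.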
Are there other sensible options for developing against an HTTPS server when the content is served over HTTP (as is the case with AppEngine's dev_appserver.py)?

Related

Send data from local webpage to C program running locally

I'm looking for the simplest possible (cross-platform, but not necessarily cross-browser) code to send data from a local web page to a C (not C++) application running locally. Basically, I have an HTML page with a form and I want to send the data from that form to another process in the simplest way possible. (I know that I can read local data from a webpage relatively easily, especially now with HTML5, but writing outside of the JavaScript sandbox is a mystery.)
I know that browsers make this very hard to do for security reasons, and I don't want to open up my machine to attacks, but maybe I can run a very simple server inside the C application to receive the submitted data... Either way, I cannot run any standard webserver, so I need a C library/app that does it for me.
I've looked into .hta files (they seem to only work on Windows) and some C web servers (all I've found are *nix-specific). A similar question is "how to transfer data from a webpage to a server-side C program", except that that asker allows the use of Java and other webserver platforms (I must use C).
UPDATE: Promising libraries: https://stackoverflow.com/questions/175507/c-c-web-server-library
Have you considered FastCGI? I have a FastCGI library written in C that might be helpful. It still needs a lot of work, and I'm not sure I would want to use it in a production environment.
If you find any bugs or make any enhancements, please share them so that it can help others.
https://github.com/manvscode/shrewd-cgi
You could write a very simple web server in C, serve the page from it (avoids security issues), and post the form to it.
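To make the idea concrete, here is the whole accept/parse/respond cycle as a sketch. It is in Python for brevity, since the protocol is the point; the same flow maps line for line onto a BSD-sockets loop in C (single connection at a time, no robust request framing):

    # Sketch of a localhost-only form receiver: serves a form on GET,
    # prints the posted body on POST.
    import socket

    PAGE = (b"HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n"
            b"<form method='post'><input name='msg'>"
            b"<input type='submit'></form>")

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 8000))  # localhost only: not reachable from outside
    srv.listen(1)

    while True:
        conn, _ = srv.accept()
        request = conn.recv(65536).decode("latin-1")
        if request.startswith("POST"):
            # The form data is the part after the blank line.
            body = request.split("\r\n\r\n", 1)[-1]
            print("received:", body)  # hand this to the rest of the app
            conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nok")
        else:
            conn.sendall(PAGE)
        conn.close()

Serving the page from the same process also sidesteps the cross-origin restrictions you would otherwise hit posting from a file:// page.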
If you're bound to C, you'll have to go low-level and deal with all the nifty details around the sockets library. (There's a reason why people abstract that away in high-level languages.) Check out some example code for RPC in C, with server and client, here. If you can afford to pair C with something higher-level, e.g. Tcl, I would implement the server as a Tcl script and register your C functions as Tcl commands. That way you pass the content directly to your C functions while avoiding writing all the low-level socket code.
Send the desired data from the web page to a specific port on your system (for example, port X). Then run your application (e.g. APP) in the background using the following command:
nc -l X | ./APP
And of course you need the nc (netcat) package.

Very simple DNS server

I have a Linux server that hosts an ad-hoc wireless network for clients to connect to. Once connected, I want users to always be redirected to its own web server, no matter what URL they type in. The heavyweight solution would be to set up a full DNS server (with BIND or equivalent), but that seems like overkill. All I need is a simple program that will listen for any DNS request and always respond with the same IP address.
I looked around but couldn't find one. It would preferably be written in C or Perl, as I don't really want to install any other scripting languages.
Use Net::DNS::Nameserver and write your own reply handler.
For C, look at:
How to Build a custom simple DNS server in C/C++
Create custom DNS name server in C
I would suggest using dnsmasq. It's more full-featured than you absolutely need, but it's very well-written, small, and easy to install, and the only configuration you would need to give it is --address='/#/1.2.3.4' to tell it to answer all queries (that don't match some other rule) with the address 1.2.3.4. dnsmasq is well-known and maintained and probably a more robust server than Net::DNS::Nameserver.
I've used fakedns.py when reversing malware. It may be too limited for your situation.
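The core of such a catch-all responder is genuinely small. A sketch of the idea in Python (UDP and A records only, no EDNS handling; the fixed address is a placeholder):

    # Minimal catch-all DNS responder: answers every query with the
    # same fixed A record.
    import socket

    FIXED_IP = "192.168.1.1"  # placeholder: the server's own address

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 53))  # binding port 53 needs root

    while True:
        query, addr = sock.recvfrom(512)
        # Header: echo the query ID and QDCOUNT; flags 0x8180 = response,
        # recursion desired + available; ANCOUNT=1, NSCOUNT=ARCOUNT=0.
        header = (query[:2] + b"\x81\x80" + query[4:6]
                  + b"\x00\x01" + b"\x00\x00\x00\x00")
        question = query[12:]  # echo the question section back verbatim
        # Answer: compressed pointer to the name at offset 12, type A,
        # class IN, TTL 60, RDLENGTH 4, then the four address bytes.
        answer = (b"\xc0\x0c" + b"\x00\x01" + b"\x00\x01"
                  + b"\x00\x00\x00\x3c" + b"\x00\x04"
                  + socket.inet_aton(FIXED_IP))
        sock.sendto(header + question + answer, addr)

For a captive-portal setup like yours, this plus a web server on FIXED_IP is essentially the whole trick.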
As I answered in the other related question, I wrote a basic DNS server in C++ for a job interview, under a BSD license.
I think the code was pretty clean, though I didn't write unit tests. :-(
I tested it with dig, and it took about a week to understand the DNS protocol, implement the server, and document it.
If anyone wants to extend it, I guess it would not be very difficult; I think it only supported inverse queries, as that was what the exercise asked for.
The code can be found here:
http://code.google.com/p/dns-server/
It was migrated to: https://github.com/tomasorti/dns-server

Web authentication through C program

First of all, the goals are neither security nor user-friendliness. (Meaning no visual crap and no password encoding / mega security stuff.)
Server-side, I want the simplest thing possible: just a way to authenticate some ~5 users while knowing who they are when they do. Once they are authenticated, I'll serve them a file (I haven't decided yet what: .txt or XML or something) and they won't be able to do anything else.
So from the program's standpoint, I need to connect to my server, authenticate somehow, get a file, and disconnect. The user only interacts with the program through a simple user/pass combo; the rest is automatic. I was looking at libcurl for the connection + authentication + download, but I would like to hear suggestions, because from this list of libcurl competitors, there seem to be many options available.
I think of it as the same as when I do sudo aptitude install, but the sudo part would go on the server if that makes any sense.
So my question is: how can I make a page with authentication (note that it doesn't need any visual output) which then lets the program download a file? And how do I connect to it?
Simplest thing possible would be to keep the path to the files secret and authenticating people by giving them the link.
You might find this page on HTTP Basic authentication useful. You can either roll your own HTTP server or configure an existing httpd. Then, you can write a simple shell script that calls out to curl to authenticate and download the page.
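The client side of Basic authentication is only a few lines in most languages. For example, a sketch in Python (the URL and credentials are placeholders):

    # Sketch: download one file over HTTP Basic auth.
    import urllib.request

    url = "https://example.com/protected/data.txt"  # placeholder
    password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, url, "alice", "secret")  # placeholder creds
    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(password_mgr))

    with opener.open(url) as resp:
        print(resp.read().decode())

The server identifies the user from the credentials in the request, so this also covers your "knowing who they are" requirement.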
If your users can have accounts on the server, a way would be to use the scp command. They will be prompted for their password. You can wrap it in a shell script or call it from a C program using system or equivalent.
Edit: Then you may look into protecting a directory with a .htaccess and a .htpasswd. I don't know whether that is accessible through libcurl or any other C library, though.

Building a centralized configuration repository

I'm trying to develop an open source application to serve as a sort of centralized configuration management tool for all Unix platforms, for example changing the root password, SSH configuration, DNS settings, /etc/hosts management, and others.
I need your feedback on what you would recommend as the interface for all the configuration (a list of scripts will run on the Unix servers as clients, reading the configuration and applying it on each system, in client-to-server mode).
Should I use LDAP to host the configurations, so that any Unix OS can talk to LDAP to get its configuration?
Or should I just save the configuration in a database (e.g. MySQL) and build a web interface that reads the database and serves the configuration to the client?
Or do you have any other ideas?
You might look into something like Chef or Puppet instead. Why re-invent the wheel?
Curl can download a file from a URL and write that file to standard output. For example, executing curl -sS http://someHost/file.cfg will download "file.cfg" from the specified web server. The "-sS" options instruct Curl to print error messages but not progress diagnostics. By the way, Curl supports many protocols, including HTTP, FTP and LDAP, so you have flexibility in the technology you use to host your centralised configuration repository (CCR).
You could use curl to retrieve a configuration file from the CCR, store the result in a local file and then parse that local file.
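A sketch of that fetch-and-parse step in Python (the URL and the key=value file format here are assumptions):

    # Sketch: pull a config file from the central repository and parse it.
    import urllib.request

    CCR_URL = "http://ccr.example.com/hosts/web01/file.cfg"  # placeholder

    with urllib.request.urlopen(CCR_URL) as resp:
        text = resp.read().decode()

    config = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):  # skip blanks and comments
            key, _, value = line.partition("=")
            config[key.strip()] = value.strip()

    print(config)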
Check out Blueprint from DevStructure. It sounds like something along the lines of what you're trying to do. Basically it reverse engineers servers and detects everything that has changed from the install state. Open-source too.
https://github.com/devstructure/blueprint (Blueprint # Github)
We are also about to launch ConfigChief, which is a central configuration repository that would do what you want: a central point to store configuration (with all the features you'd expect, like versioning, audit, ACLs, and inheritance).
Once you have that, combined with change notification, you can just run curl, as Ciaran McHale suggests, against the CCR and get your parsed configuration file back. This would eliminate the need for writing scripts to generate config files from the outside.
If you are interested, you can signup for a beta at http://woot.configchief.com
DISCLAIMER: I guess it is obvious from the first word!

Getting proxy information on Linux programmatically

I am currently using libproxy to get the proxy information (if any) on RedHat and Debian Linux. It doesn't work all that well, but it's the only way I know I can use to get the proxy information from my code.
I need to stop using the lib since in most cases it doesn't recognize the proxy.
Is there any way to acquire the proxy information? What I mean is: is there a file (or group of files) I can read, or an environment variable, or an API or system call that I can use to get the information?
GNOME-based code is OK, and KDE might help as well, but I am looking for something more generic.
The code is C.
Now, before anyone asks, I don't want to use libproxy anymore. Period. I don't want to start investigating why it doesn't work. I don't really want to know whether there is a new version of that lib. I know it might work; I just don't want to use it. I can't use it (just because). So please don't point me that way.
Code is appreciated.
Thanks.
In Linux, the "global proxy setting" is typically just environment variables that are usually set in /etc/profile. You can examine those variables to see what proxy is set; a sketch of reading them follows the list below.
The variables are:
http_proxy - the proxy for HTTP connections
ftp_proxy - the proxy for FTP connections
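For example, a sketch in Python (whose standard library already implements the conventional lookup rules, for comparison with a hand-rolled check):

    # Sketch: read the conventional proxy variables from the environment.
    import os
    import urllib.request

    # urllib scans the environment for *_proxy variables:
    print(urllib.request.getproxies())
    # e.g. {'http': 'http://proxy.example.com:3128/'}  (illustrative output)

    # Or check the variables by hand, lower- and upper-case:
    for name in ("http_proxy", "HTTP_PROXY", "all_proxy", "ALL_PROXY"):
        value = os.environ.get(name)
        if value:
            print(name, "=", value)
            break

The same getenv calls translate directly to C.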
Using the Network Proxy Preferences tool under GNOME saves the information in the GConf database. The paths to the keys are /system/http_proxy and /system/proxy. You can read about the details in those trees at this page.
You can access the GConf database using the library API. Note that GConf is based on GObject. To examine the contents of this tree using the command line, try the following:
gconftool-2 -R /system/http_proxy
This will provide a "name = value" listing of the tree, which may be usable in your application. Note that this requires a system() call, so it's not recommended for a deployed application, but it might help you get started.
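A sketch of that approach, scraping the gconftool-2 output from a child process (in Python; assumes gconftool-2 is on the PATH and a GNOME 2-era GConf setup):

    # Sketch: shell out to gconftool-2 and pick out the proxy host/port
    # from its "name = value" listing.
    import subprocess

    out = subprocess.run(
        ["gconftool-2", "-R", "/system/http_proxy"],
        capture_output=True, text=True, check=True,
    ).stdout

    settings = {}
    for line in out.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()

    print(settings.get("host"), settings.get("port"))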
GNOME has its own place to store the proxy settings, and I am sure KDE or any other DE has its own place too. Maybe you can look for any mention of where proxy settings should be stored in the Linux Standard Base. That could point you to a standard way of doing it irrespective of distro or DE.
DE -> Desktop Environment
char* proxy = getenv("all_proxy"); /* NULL if the variable is unset */
This statement puts the value of the environment variable called all_proxy, which the system uses as a global proxy, into your C variable.
To print it in bash, try env | grep 'all_proxy' | cut -d= -f 2.
