First of all, the goals are neither security nor user-friendliness (meaning no visual crap and no password-encryption/mega-security stuff).
Server-side I want the simplest thing possible: just a way to authenticate some ~5 users while knowing who each of them is when they connect. Once they are authenticated I'll serve them a file (I haven't decided yet whether it'll be .txt, XML, or something else) and they won't be able to do anything else.
So from the program's standpoint, I need to connect to my server, authenticate somehow, get a file, and disconnect. The user only interacts with the program through a simple user/pass combo; the rest is automatic. I was looking at libcurl for the connection + authentication + download, but I would like to hear suggestions, because judging from this list of libcurl competitors, there is plenty on offer.
I think of it as the same as when I do sudo aptitude install, except the sudo part would happen on the server, if that makes any sense.
So my question is: how can I make a page with authentication (note that it doesn't have to have any visual output) which then lets the program download a file? And how do I connect to it?
The simplest thing possible would be to keep the path to the files secret and authenticate people by giving them the link.
You might find this page on HTTP Basic authentication useful. You can either roll your own HTTP server or configure an existing httpd. Then, you can write a simple shell script that calls out to curl to authenticate and download the page.
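If the client ends up being the C program from the question rather than a shell script, the same authenticate-and-download step with libcurl might look roughly like this (the URL and file names are placeholders; build with gcc fetch.c -lcurl):

    #include <curl/curl.h>
    #include <stdio.h>

    /* Fetch one file over HTTP Basic auth. With no write callback set,
     * libcurl's default handler fwrite()s the body to the FILE *
     * passed via CURLOPT_WRITEDATA. */
    int fetch(const char *user, const char *pass, const char *outpath)
    {
        char creds[128];
        FILE *out = fopen(outpath, "wb");
        CURL *curl = curl_easy_init();
        if (!out || !curl)
            return 1;

        snprintf(creds, sizeof creds, "%s:%s", user, pass);
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/data.txt");
        curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
        curl_easy_setopt(curl, CURLOPT_USERPWD, creds);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);
        curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L);  /* a 401 becomes an error */

        CURLcode res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        fclose(out);
        return res != CURLE_OK;
    }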
If your users can have accounts on the server, one way would be to use the scp command. They will be prompted for their password. You can wrap it in a shell script or call it from a C program using system() or equivalent.
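From the C side that can be as small as the following sketch (the server name and path are invented; scp does the password prompting itself on the user's terminal):

    #include <stdio.h>
    #include <stdlib.h>

    int fetch_via_scp(const char *user)
    {
        char cmd[256];
        snprintf(cmd, sizeof cmd,
                 "scp %s@myserver.example:/srv/drop/latest.txt .", user);
        return system(cmd);   /* 0 on success */
    }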
Edit: then you may look into protecting a directory with a .htaccess and a .htpasswd file. I don't know whether that is accessible through libcurl or any other C library, though.
Related
I'm working on a C project that makes connections to remote servers. Commonly, this involves using some small terminal macros I've added to my makefile to scp an executable to that remote server. While convenient, the only part of this I've not been able to readily streamline is the part where I need to enter the password.
Additionally, in my code, I'm already using system() calls to accomplish some minor terminal commands (like sort). I'd ALSO like to be able to enter a password if necessary here. For instance, if I wanted to build a string in my code to scp a local file to my remote server, it'd be really nice to have my code pull (and use) a password from somewhere so it can actually access that server.
Does anyone a little more experienced with Make know a way to build passwords into a makefile and/or a system() call in C? Bonus points if I can do it without any third-party software/libraries. I'm trying to keep this as self-contained as possible.
Edit: In reading responses, it's looking like the best strategy is to establish a preexisting ssh key relationship with the server to avoid the login process via something more secure. More work up front for less work in the future, by the sound of it, with additional security.
Thanks for the suggestions, all.
The solution is to not use a password. SSH, and thus SCP, supports, among many other methods, public-key authentication, which is described all over the internet. Use that.
Generally, the problem you're trying to solve is called secret management, and the takeaway is that your authentication tokens (passwords, public keys, API keys…) should not be owned by your application software, but by something instructing the authenticating layer. In other words, the way forward is to let SSH connect on its own, without you entering a password, by choosing a non-interactive authentication method. Using a password here is simply less elegant than the generally favored approach of authenticating with a public key.
Passing passwords as command-line options is generally a bad idea – it leaks them into process listings, potentially log entries, and so on. Don't do it.
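Once a key is installed (the next answer walks through that), the system() call from the question needs no password at all; a minimal sketch with a made-up host and paths:

    #include <stdio.h>
    #include <stdlib.h>

    int deploy(void)
    {
        /* -B (batch mode) makes scp fail fast instead of hanging on a
         * password prompt if the key setup is ever broken. */
        int rc = system("scp -B ./myprog builduser@testbox:/usr/local/bin/");
        if (rc != 0)
            fprintf(stderr, "scp failed (%d)\n", rc);
        return rc;
    }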
Run ssh-keygen to create the key pair. Then appending the local system's (e.g.) .ssh/id_rsa.pub file to the remote's .ssh/authorized_keys file is the best way to go.
But I had remote systems to access without passwords where that file was not installed on the remote (needing ssh-keygen to be run on the remote), or where the remote's .ssh/authorized_keys file did not have the public key from my local system in it.
I wanted a one-time automated/unattended script to add it. A chicken-and-egg problem.
I found sshpass.
It works like ssh but supplies the password for you (similar to what expect does).
I installed it once on the local system.
Using this, the script would do the following (a C sketch of the gist follows the list):
Run ssh-keygen on the remote [if necessary]
Append the local .ssh/id_rsa.pub public key file to the remote's .ssh/authorized_keys
Copy the remote's .ssh/id_rsa.pub file back into the local system's .ssh/authorized_keys file [if desired]
Then, ssh etc. worked without any passwords.
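For what it's worth, the gist of that bootstrap, expressed as a small C wrapper in the spirit of this thread's system() calls (the host name is made up; sshpass -e reads the password from the SSHPASS environment variable, so it never appears on a command line):

    #include <stdio.h>
    #include <stdlib.h>

    static int run(const char *cmd)
    {
        fprintf(stderr, "+ %s\n", cmd);   /* echo the command for logging */
        return system(cmd);
    }

    int main(void)
    {
        /* One-time: append our public key using password auth ... */
        if (run("sshpass -e ssh builduser@testbox "
                "'mkdir -p .ssh && chmod 700 .ssh && cat >> .ssh/authorized_keys'"
                " < ~/.ssh/id_rsa.pub") != 0)
            return EXIT_FAILURE;
        /* ... after which unattended ssh/scp just works. Copying the
         * remote's key back (step 3) is the same idea in reverse. */
        return run("ssh -o BatchMode=yes builduser@testbox true");
    }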
UPDATE:
ssh-copy-id is your friend, too.
I had forgotten about that. But, when I was doing this, I had more complex requirements.
The aforementioned script would merge/combine all the public keys and update all the authorized_keys files on all the systems. This would be repeated anytime any new system was added to the mix.
You never need to run ssh-keygen on a remote host, especially not to generate an authorized_keys file. – Marcus Müller
I think that could be inferred, but it was not implied as a requirement [particularly in context]. I hope the answer wasn't -1'd for that.
Note that step (1), running ssh-keygen on the remote, is needed for step (3), copying back the remote's public key.
Ironically, one of the tutorial pages for ssh-copy-id says to run ssh-keygen first ...
It's been my experience when setting up certain types of systems/clusters (e.g. a development host/PC and several remote/target/test ones) that if one wants to do local-to-remote actions, invariably one also wants to do:
remote-to-local actions -- (e.g.) I'm ssh'ed into a remote system and want to do rcp back to the development system.
The remote system needs to do a git clone/pull from [and, sometimes, git push to] the local git server.
remote-to-remote -- copying/streaming data between target systems.
This requires that each system have a private/public key pair and all systems have an authorized_keys file that has the public keys of all the other systems.
When I've not set up the systems that way it usually comes back to haunt me [usually late at night when I'm tired]. So, I just [axiomatically] set it up that way at the outset.
That is one of the reasons I developed the script in the first place. Also, since we didn't want to have to maintain a fork of a given system/distro installer for production systems, we would:
Use the stock/standard distro installer CD/USB
Use the script to add the extra/custom config, S/W, drivers, etc.
I'm trying to test out U2F on Google Appengine.
Unfortunately dev_appserver.py, the development app server for local testing, only runs in HTTP, and the U2F standard requires that the web server be connected over HTTPS.
There are some options for proxy servers, including stunnel, stud, Pound and ngrok.
What I am doing will probably end up being an open source package, so I would like to keep the setup fairly straightforward and the dependency list restricted to widely available packages.
An ideal solution would be a command-line program along the lines of prog_name -listen localhost:8041 -proxy localhost:8040; in other words, a very simple command line setup.
The stud and pound programs seem like overkill.
The stunnel option seems to be the best and most common solution, but it would be better if it could be configured from the command line instead of a config file.
Ngrok is super-cool and seems to be along the right lines. It gives you a random server name though, which can be a problem since the U2F appId must match the server (if persistence matters), but other than that it's basically the right idea.
I have a vague memory of this being possible with openssl from the command line, but the only command that seems suitable is s_server, and that seems to provide only SSL reflection/debugging information, not the option to proxy requests per se. My memory must be faulty.
It would not be terribly difficult to write up a trivial Python server/client proxy, leading me to believe there's probably a simple option out there ... however, the search results have a dreadful signal-to-noise ratio.
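For illustration, here is roughly what such a proxy looks like, sketched in C with OpenSSL rather than Python (it assumes a self-signed cert.pem/key.pem in the working directory and OpenSSL 1.1+; blocking and single-connection, so strictly for local development):

    /* Build: gcc tlsproxy.c -lssl -lcrypto
     * Listens for TLS on localhost:8041, forwards plaintext to
     * localhost:8040. */
    #include <openssl/ssl.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <sys/select.h>
    #include <unistd.h>

    static int tcp_listen(int port)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0), one = 1;
        struct sockaddr_in a = {0};
        a.sin_family = AF_INET;
        a.sin_port = htons(port);
        a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
        bind(s, (struct sockaddr *)&a, sizeof a);
        listen(s, 1);
        return s;
    }

    static int tcp_connect(int port)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in a = {0};
        a.sin_family = AF_INET;
        a.sin_port = htons(port);
        a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        connect(s, (struct sockaddr *)&a, sizeof a);
        return s;
    }

    int main(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_server_method());
        SSL_CTX_use_certificate_file(ctx, "cert.pem", SSL_FILETYPE_PEM);
        SSL_CTX_use_PrivateKey_file(ctx, "key.pem", SSL_FILETYPE_PEM);

        int ls = tcp_listen(8041);
        for (;;) {
            int cfd = accept(ls, NULL, NULL);
            SSL *ssl = SSL_new(ctx);
            SSL_set_fd(ssl, cfd);
            if (SSL_accept(ssl) == 1) {
                int bfd = tcp_connect(8040);
                char buf[4096];
                for (;;) {   /* shuttle bytes in both directions */
                    fd_set r;
                    FD_ZERO(&r);
                    FD_SET(cfd, &r);
                    FD_SET(bfd, &r);
                    if (select((cfd > bfd ? cfd : bfd) + 1, &r, 0, 0, 0) <= 0)
                        break;
                    if (FD_ISSET(cfd, &r)) {
                        /* a robust proxy would also drain SSL_pending() */
                        int n = SSL_read(ssl, buf, sizeof buf);
                        if (n <= 0 || write(bfd, buf, n) != n)
                            break;
                    }
                    if (FD_ISSET(bfd, &r)) {
                        int n = read(bfd, buf, sizeof buf);
                        if (n <= 0 || SSL_write(ssl, buf, n) != n)
                            break;
                    }
                }
                close(bfd);
            }
            SSL_shutdown(ssl);
            SSL_free(ssl);
            close(cfd);
        }
    }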
Are there other sensible options for developing against an HTTPS server when the content is served over HTTP (as is the case with App Engine's dev_appserver.py)?
I have created an application to run on an Olinuxino Maxi board which is presently running an Arch Linux ARM distribution. My somewhat simple application can be considered to be in two parts:
A program that performs communication between RS232 and TCP/IP, and initiates / accepts VoIP calls via the Linphone library. How this program behaves is configured through a .conf file. This program starts up on boot, which I achieved by creating a .service file for it and then enabling it using systemctl / systemd.
A simple web page that is accessed via Lighttpd. The CGI page is written in C. This page provides means for a user to edit the .conf file through a simple form, and therefore configure the operation of the main program.
All of the above now works. The specific problem I have relates to how I can cause my service program to restart (so that it configures itself again from the .conf file) when the user submits new settings via the web page. I'm stuck on this area because, while I'm a fairly experienced C programmer, doing development on Linux and general Linux administration is fairly new ground to me.
In case it's relevant, I'll discuss a bit about how I've set this up, including how I've set up users and so forth:
I've set up a new user with the name of the application. Call it user application-name.
The RS232/TCP/IP/VOIP program resides in the folder /home/application-name/. The .conf file also resides in here.
systemd starts the program on boot. I understand that the program is being run as root.
The web / CGI code is located in /home/application-name/web/. I've set up an alias in the Lighttpd configuration so that /cgi-bin/ points here, and that works.
The Lighttpd server, which I understand runs as user 'http', happily serves the web page and, on submission of POST data, edits the ../.conf file accordingly. To allow the web server to edit the .conf file I did have to chmod that file to give write access to others, but I am guessing that a better way to do this would be to put users application-name and http into a new group (though I'd appreciate advice on this also).
After processing of the POST data, my C CGI program also uses system() to call a bash script, restart_application.sh.
Inside restart_application.sh, I'm making a call to systemctl to restart my main program. But it doesn't work, and I gather it doesn't work because no user except root can invoke systemctl.
So the main question is:
How should I make my program restart?
And also:
If I'm doing any absolute horrors here in terms of my setup and Linux system administration, please by all means shout angrily.
Edit 1: Unless anyone has a better approach, I'm thinking of trying the idea suggested here, which is basically to 'sudo' within the bash file.
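Concretely, that approach should reduce to one narrow sudoers rule (added with visudo) plus a call like the following from the CGI program; a sketch only, using this question's placeholder names:

    /* Assumed sudoers rule (one exact command, no password):
     *   http ALL=(root) NOPASSWD: /usr/bin/systemctl restart application-name.service
     * (/usr/bin/systemctl is where Arch puts it; other distros may differ). */
    #include <stdio.h>
    #include <stdlib.h>

    static int restart_service(void)
    {
        /* -n: never prompt; fail instead if the sudoers rule is missing */
        int rc = system("sudo -n /usr/bin/systemctl restart application-name.service");
        if (rc != 0)
            fprintf(stderr, "restart failed (%d)\n", rc);
        return rc;
    }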
I have some C code that parses a file and generates another file of processed data. I now need to POST these files to a website on a web server. I guess there is a way to do an HTTP POST, but I have never done this in C (using GCC on Ubuntu). Does anyone know how to do this? I need a starting point, as I have no clue how to do this in C. I also need to be able to authenticate with the website.
libcurl is probably a good place to start.
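A minimal libcurl sketch of the POST-plus-authentication part might look like this (the URL, form-field name, and credentials are placeholders; build with gcc post.c -lcurl, libcurl 7.56+ for the MIME API):

    #include <curl/curl.h>

    int post_file(const char *path)
    {
        CURL *curl = curl_easy_init();
        if (!curl)
            return 1;

        /* Build a multipart/form-data body containing the file. */
        curl_mime *mime = curl_mime_init(curl);
        curl_mimepart *part = curl_mime_addpart(mime);
        curl_mime_name(part, "datafile");       /* form field name */
        curl_mime_filedata(part, path);         /* file to upload  */

        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
        curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");  /* Basic auth */
        curl_easy_setopt(curl, CURLOPT_MIMEPOST, mime);

        CURLcode res = curl_easy_perform(curl);
        curl_mime_free(mime);
        curl_easy_cleanup(curl);
        return res != CURLE_OK;
    }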
I think Hank Gay's suggestion of using a library to handle the details is the best one, but if you want to "do it yourself", you need to open a socket to the web server and then send your data in the HTTP POST format which is described here. Authentication can mean a variety of different things, so you need to be more specific.
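For reference, the raw request you would have to send down that socket looks roughly like this (host, path, and body are made up; the Authorization header is just base64 of "user:pass"):

    const char *request =
        "POST /upload HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Authorization: Basic dXNlcjpwYXNz\r\n"     /* base64("user:pass") */
        "Content-Type: application/octet-stream\r\n"
        "Content-Length: 11\r\n"
        "Connection: close\r\n"
        "\r\n"
        "hello world";                              /* exactly 11 bytes */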
Unfortunately, all three of the above jobs involve a fair bit of complexity, so you need to break the question down into stages and come back and ask about each bit separately.
I have written a cgi-bin application in C that runs in a browser and allows the user to open an interactive shell and view and edit files on a Linux machine. It runs as the standard Apache "www-data" user. I just added a login screen where the user types in their name and password (in a form), but I cannot authenticate the user using getspnam(), since this function only works when running as root.
What options do I have to check the login credentials of a user when not running as root?
PS: In my interactive shell I can type "su root" and then type in my password and it does elevate to root fine so it obviously can be done interactively.
I think you want to take a look at Pluggable Authentication Modules (PAM). AFAIK, PAM handles all the messy stuff for you, and you just need a few function calls to authenticate the user against whatever backend the Linux host uses (be it shadow passwords, NIS, LDAP, whatever).
Here's a short guide about integrating your C code with them.
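In rough outline, the PAM calls look like this (a sketch, linked with -lpam; "login" reuses the system's stock PAM service, though a dedicated /etc/pam.d entry would be cleaner, and with the common pam_unix backend a non-root process may still not be able to check arbitrary users' passwords):

    #include <security/pam_appl.h>
    #include <stdlib.h>
    #include <string.h>

    /* PAM calls this back for each prompt; we answer password prompts
     * with the password handed in through appdata. */
    static int conv_fn(int n, const struct pam_message **msg,
                       struct pam_response **resp, void *appdata)
    {
        struct pam_response *replies = calloc(n, sizeof *replies);
        if (!replies)
            return PAM_CONV_ERR;
        for (int i = 0; i < n; i++)
            if (msg[i]->msg_style == PAM_PROMPT_ECHO_OFF)
                replies[i].resp = strdup(appdata);
        *resp = replies;
        return PAM_SUCCESS;
    }

    /* Returns 1 if user/password authenticate, 0 otherwise. */
    int check_password(const char *user, const char *password)
    {
        struct pam_conv conv = { conv_fn, (void *)password };
        pam_handle_t *pamh = NULL;
        if (pam_start("login", user, &conv, &pamh) != PAM_SUCCESS)
            return 0;
        int rc = pam_authenticate(pamh, 0);
        pam_end(pamh, rc);
        return rc == PAM_SUCCESS;
    }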
With regard to your PS: well, when you do a su root you're switching to the root user. So yes, of course root can read the shadow file; you already said that.
With regard to your problem: Can't you have your apache processes temporarily elevate to root (by calling setuid or similar) to perform the authentication?
Good luck!
As suggested, I think PAM is the modern way to do this. But if you want to go old school, you need to create a setuid-root program (not a script) to do your authentication.
There are lots of gotchas with setuid-root programs, which is why PAM is likely better.
Here's a link to some good papers on safely writing setuid-root programs.
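For completeness, the core of such a setuid-root helper might look like this (names hypothetical; read the papers above before installing anything with chmod 4755):

    /* Reads a password on stdin and checks it against the shadow entry
     * for the user named in argv[1]. Exit 0 = match.
     * Build: gcc helper.c -lcrypt
     * Install (carefully!): chown root helper && chmod 4755 helper */
    #include <crypt.h>
    #include <shadow.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        char pw[256];
        if (argc != 2 || !fgets(pw, sizeof pw, stdin))
            return 2;
        pw[strcspn(pw, "\n")] = '\0';          /* strip the trailing newline */

        struct spwd *sp = getspnam(argv[1]);   /* this is the part that needs root */
        if (!sp)
            return 2;

        /* crypt() reads the salt/parameters out of the stored hash itself */
        char *hash = crypt(pw, sp->sp_pwdp);
        return (hash && strcmp(hash, sp->sp_pwdp) == 0) ? 0 : 1;
    }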