How do I input a password from a makefile or system() call? - c

I'm working on a C project that makes connections to remote servers. Commonly, this involves using some small terminal macros I've added to my makefile to scp an executable to that remote server. While convenient, the only part of this I've not been able to readily streamline is the part where I need to enter the password.
Additionally, in my code, I'm already using system() calls to accomplish some minor terminal commands (like sort). I'd ALSO like to be able to enter a password if necessary here. For instance, if I wanted to build a string in my code to scp a local file to my remote server, it'd be really nice to have my code pull (and use) a password from somewhere so it can actually access that server.
Does anyone a little more experienced with Make know of a way to build passwords into a makefile and/or a system() call in C? Bonus points if I can do it without any third-party software/libraries; I'm trying to keep this as self-contained as possible.
Edit: From the responses, it looks like the best strategy is to establish a preexisting SSH key relationship with the server, replacing the password login with something more secure. More work up front for less work in the future, by the sound of it, with additional security.
Thanks for the suggestions, all.

The solution is to not use a password. SSH, and thus SCP, supports, among many others, public key authentication, which is described all over the internet. Use that.
Generally, the problem you're trying to solve is called secret management, and the takeaway is that your authentication tokens (passwords, private keys, API keys…) should not be owned by your application software, but by something instructing the authenticating layer. In other words, the way forward really is to let SSH connect on its own, without you entering a password, by choosing an authentication method that happens not to be interactive. Using a password here is simply less elegant than the generally favored approach of authenticating with your server by public key.
Passing passwords as command-line options is generally a bad idea – that leaks the passwords into things like process listings, potentially log entries, and so on. Don't do it.
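Once a key relationship exists, the asker's system() case needs no password handling at all. A minimal sketch, assuming key authentication is already set up; the host, user, and paths are hypothetical placeholders:

#include <stdio.h>
#include <stdlib.h>

/* Copy a local file to a remote host over scp. BatchMode=yes makes ssh
 * fail fast instead of prompting if key authentication isn't available. */
static int push_file(const char *local, const char *remote)
{
    char cmd[1024];
    snprintf(cmd, sizeof cmd,
             "scp -o BatchMode=yes %s user@example.com:%s", local, remote);
    return system(cmd);   /* 0 means the copy succeeded */
}

int main(void)
{
    if (push_file("./a.out", "/home/user/bin/") != 0)
        fprintf(stderr, "scp failed; is the key installed on the server?\n");
    return 0;
}

The same scp invocation works verbatim as a makefile recipe, which covers the makefile half of the question as well.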

Running ssh-keygen to create the keys, then adding/appending the local system's (e.g.) .ssh/id_rsa.pub file to the remote's .ssh/authorized_keys file, is the best way to go.
But I had remote systems to access without passwords where that file was not installed on the remote (ssh-keygen still needed to be run there), or where the remote .ssh/authorized_keys file did not have the public key from my local system in it.
I wanted a one-time automated/unattended script to add it. A chicken-and-egg problem.
I found sshpass.
It works like ssh but supplies the password (similar to what expect does).
I installed it once on the local system.
Using this, the script would:
(1) Run ssh-keygen on the remote [if necessary]
(2) Append the local .ssh/id_rsa.pub public key file to the remote's .ssh/authorized_keys (see the sketch after this list)
(3) Copy back the remote's .ssh/id_rsa.pub file to the local system's .ssh/authorized_keys file [if desired]
Then, ssh etc. worked without any passwords.
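For a one-time unattended bootstrap along those lines, here is a minimal C sketch in the spirit of the original question's system() usage. It assumes sshpass is installed and the password is supplied via the SSHPASS environment variable (sshpass -e), so it never appears on the command line; user@example.com is a hypothetical placeholder:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* sshpass -e reads the password from $SSHPASS, avoiding argv leaks */
    if (getenv("SSHPASS") == NULL) {
        fprintf(stderr, "set the SSHPASS environment variable first\n");
        return 1;
    }
    /* Step (2): append our local public key to the remote authorized_keys.
     * The local shell redirects id_rsa.pub into ssh's stdin; the remote
     * shell appends it. On a first connection, ssh may still stop to ask
     * about the host key. */
    return system("sshpass -e ssh user@example.com "
                  "'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys' "
                  "< ~/.ssh/id_rsa.pub");
}

After this runs once, plain ssh/scp to that host works key-based, with no password.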
UPDATE:
ssh-copy-id is your friend, too.
I had forgotten about that. But, when I was doing this, I had more complex requirements.
The aforementioned script would merge/combine all the public keys and update all the authorized_keys files on all the systems. This would be repeated anytime any new system was added to the mix.
You never need to run ssh-keygen on a remote host, especially not to generate an authorized_keys file. – Marcus Müller
I think that was inferable but never stated as a requirement [particularly in context]. I hope the answer wasn't -1 for that.
Note that step (1) (running ssh-keygen on the remote) is needed for step (3), copying back the remote's public key.
Ironically, one of the tutorial pages for ssh-copy-id says to run ssh-keygen first ...
It's been my experience when setting up certain types of systems/clusters (e.g. a development host/PC and several remote/target/test ones) that if one wants to do local-to-remote actions, invariably one also wants to do:
remote-to-local actions -- (e.g.) I'm ssh'ed into a remote system and want to do rcp back to the development system.
The remote system needs to do a git clone/pull from [and, sometimes, git push to] the local git server.
remote-to-remote -- copying/streaming data between target systems.
This requires that each system have a private/public key pair and all systems have an authorized_keys file that has the public keys of all the other systems.
When I've not set up the systems that way it usually comes back to haunt me [usually late at night when I'm tired]. So, I just [axiomatically] set it up that way at the outset.
That was one of the reasons I developed the script in the first place. Also, since we didn't want to maintain a fork of a given system/distro installer for production systems, we would:
Use the stock/standard distro installer CD/USB
Use the script to add the extra/custom config, S/W, drivers, etc.

Related

Protecting PHP CLI scripts

I'm currently writing a little commercial PHP script which will be a VPN (PPTP) manager, run from the command line.
Actually, it's a socket server which waits for commands like "create", "suspend", "unsuspend", "changepassword"... and then parses the PPTP files and modifies them.
The thing is, I will have to hand over the PHP files, which are so simple that they ONLY need php5-cli to be installed (no Apache, nothing else), and I need to protect them from being read (actually, it's only 1 file, which is an entire class; the rest can stay in the clear).
I want the system to be as light as possible, which is why there is no need for a GUI, web server, curl, *sql...
I thought about ionCube, but it's very expensive and can't be used with CLI scripts because it needs a loader, which is loaded by Apache. This is the problem with every encoder, I think.
I thought about HipHop PHP (from Facebook), but it's hard to understand how to use (I can compile my sources, but the user guide only explains how to launch clear source with it :/).
So, I'm here to get help with that. I have some PHP CLI scripts which must run on the command line, which don't need a webserver to work, and I only need (as it's a commercial product) to protect my sources from reading and illegal distribution (otherwise it would be easy to bypass the licence system). This file is simply a PHP class.
Thanks.
-- Edit --
Precisely, I want to charge by month, 6 months, or year. If the source is in the clear, then everybody will be able to comment out the licence check and have it for free. I love open source; as proof, I've written 3 classes for this project, including a debug/warning/error manager with output handling (stdout/stderr/logfile) and a Socket class which you just have to include and extend to get a complete server (you just implement the needed functions and the server will call the received-command handler), and I don't want to obfuscate these 2 classes.
As for ionCube, there is an online encoder available that does a one-time encode of your script for just a few bucks, depending on the size of your codebase. If you write your own licensing mechanism, you should be able to use that. Besides, your statement about the ionCube loader is incorrect: no Apache is necessary, it's just a module that can be loaded in php.ini. IonCube is - in my opinion - a good choice.
Do take your time to really ask yourself how much protection you need. A computer will always have to understand how to interpret your code, so eventually a human being will be able to peek inside, if he really wants to.
If the ionCube loader isn't an option on your clients' machines, there are several 'obfuscators' for PHP out there that will probably stop the "quick peekers" from understanding the code in less than an hour. These obfuscators won't encrypt your code, but they will make it less readable by changing all your variable, function, and class names into arbitrary hashes, and by removing all your comments and whitespace. They don't need anything on the server to run, but in the end your PHP code will still be just the same.

Prevent unauthorised write access to a part of filesystem or partition

Hello all. I have some very important system files which I want to protect from accidental deletion, even by the root user. I can create a new partition for them and mount it with read-only access, but the problem is that I want my application, which handles those system files, to have write access to that part and be able to modify them. Is that possible using the VFS? Since the VFS handles access to the files, I could have a module inserted in the VFS layer which sees when there is a write access to that part, checks the authorization, and then allows or rejects it.
If not, please give me suggestions on how such a system could be implemented and what I would need in that case.
If a system like this already exists, please tell me about it.
I am using Linux and want to implement this in C; I think it would only be possible in C.
Edit: There are programs on Windows which can restrict access to some important folders even for the administrator; would that be possible on Linux?
My application is a system backup and restore program which needs to keep its backup information safe and secure, so I would like a secured part of a partition which could not be accidentally deleted in any way. There are methods of locking a flash drive; can some of those methods be used for locking a partition on Linux too, so that mounting is password protected? I am not writing a virus application; my application will give the user the option to delete the backups, but I don't want them to be deletable by any other application.
Edit: I am writing a system restore and backup program for Ubuntu; I am a computer engineering student.
Edit: Basile Starynkevitch's opinion is that I would be committing the worst sin of programming if I did anything like this, but please offer suggestions anyway, treating this as an experimental project; I could make some changes in the VFS layer so that this could work.
You could use chattr, e.g.
chattr +i yourfile
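For the asker's stated goal of implementing this in C: chattr itself just sets an inode flag, which a program can do directly. A minimal sketch of the same operation, assuming Linux, an ext*-family filesystem, and CAP_LINUX_IMMUTABLE (effectively root); "yourfile" is a placeholder:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(void)
{
    int fd = open("yourfile", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    int flags;
    if (ioctl(fd, FS_IOC_GETFLAGS, &flags) < 0) { perror("FS_IOC_GETFLAGS"); return 1; }
    flags |= FS_IMMUTABLE_FL;                    /* the bit behind chattr +i */
    if (ioctl(fd, FS_IOC_SETFLAGS, &flags) < 0) { perror("FS_IOC_SETFLAGS"); return 1; }

    close(fd);
    return 0;
}

Clearing the bit again (flags &= ~FS_IMMUTABLE_FL) undoes it, which is exactly why this is no defense against root.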
But I don't think it is a good thing to do that. People using root access are expected to be careful. Those having root access can still issue the command undoing the above.
There is no way to forbid people having root access, or people having physical access to the computer, from accessing, removing, or changing your file if they really want to (they could update & hack the kernel, for instance). Read more about the trusted computing base.
And I believe it is even unethical (and perhaps illegal, in some countries) to want to do that. I own my PC, and I don't understand why you should disallow me from changing some data on it just because I happened to install some software.
By definition, root on Linux can do anything... You won't be able to prohibit him from erasing or altering data... People with root access can write arbitrary bytes at arbitrary places on the disk.
And on a machine that I own (or perhaps just have physical access to), I will, thank God, always be able to remove a file (even under Windows: I could, for example, boot a Linux CDROM, remove the file from Linux by accessing the NTFS volume, and then reboot into Windows...).
So I think you should not bother to spend even a minute making it more difficult for root to alter your precious files. Leave them like other root files...
PHILOSOPHICAL RANT
The Unix philosophy has always been to trust the system administrator (while protecting newbie users from mistakes), that is, the root user. Root is able to do anything (this is why people avoid being root, even on a personal machine). There have never been strong features to prevent root from making mistakes, because the system administrator is expected to know the system well, and is trusted.
And Unix sysadmins understand this fact: it is part of their culture. (This is probably in contrast with Windows administration culture.) They know when to be careful; they don't expect software to prevent mistakes made as root.
In order to use root squashing (which makes it so that root can't even see files owned by a local user), you can set up a local NFS mount. This forum page explains how to mount an NFS share locally. The command is:
mount -t nfs nameofcomputer:/directory_on_that_machine /directory_you_should_have_already_created
NFS has root squashing enabled by default, which should solve your problem. From there, you just make sure your program stores its files on the NFS mount.
Sounds to me like you're trying to write a virus.
No doubt you will disagree.
But I'm willing to bet the poor people who install your software will feel like it's a virus, because it will behave like one by making itself hard to remove.
Simply setting r/w flags should suffice for anything else.

Free server side anti virus / security / trojan protection for file uploads?

I am allowing users to upload photos, like photo albums, and also to attach files (documents for now) as mail attachments. So I assume I need some anti-virus/security tool in place to scan the files first, in case people upload infected stuff. So, two questions:
1) Are there any free or open source tools for this I can use or integrate into my environment (CodeIgniter PHP)?
2) How do I secure the upload area from the rest of the system? Say the virus scanner fails to catch a virus and it is uploaded; how do I prevent it from infecting other files? Can the upload area be permanently sandboxed somehow, with users accessing the content through that filepath, so it does not spread to other parts of the system?
There is clamav for a free virus scanner. Install it and you could do something like:
function virus_detected($filename)
{
    // -i prints only infected files; --no-summary suppresses the stats block
    $clamscan = "/usr/local/bin/clamscan";
    // escapeshellarg() guards against shell metacharacters in the filename
    $result = exec("$clamscan -i --no-summary " . escapeshellarg($filename));
    // any output at all means clamscan flagged the file
    return strlen($result) ? true : false;
}
As for security, make sure the temporary files are uploaded to a directory outside of your web root. You should then verify the file type, rename the file to something other than its original file name, and append the appropriate extension (gif, jpg, bmp, png). I believe this should keep you fairly safe, aside from exploits in PHP itself.
For more information about verifying file types in php check out:
http://www.php.net/manual/en/function.finfo-file.php
I know this topic hasn't been active for three years now, but in case anyone else in the future is similarly looking for a PHP-based anti-virus solution: for those without an anti-virus daemon, program or utility installed on their host machine, and without the ability to install one, phpMussel, a PHP script that I've written based on ClamAV, fits the bill for what Rohit (the original poster) was looking for (a PHP-based anti-virus to protect their CMS against malicious file uploads) and may possibly be a viable solution. It certainly isn't perfect and I can't guarantee that it'll catch everything, but it's certainly far better than using nothing at all.
Ideally, as already suggested above by Matt, making a call to the shell to have ClamScan scan the file uploads is definitely the ideal solution, and if this is something that a hostmaster, webmaster or anyone in Rohit's situation is able to do, I'd second that suggestion wholly. What I've written, because it is a PHP script, has the limitations inherent to anything that relies wholly on PHP in order to function; but in instances where the aforementioned suggestion and/or similar suggestions aren't a possibility (such as if the host machine doesn't have an anti-virus installed and shell access is disabled; common with cheaper shared hosting solutions), that's where what I'm suggesting could potentially step in: something that only requires PHP to be installed (with the PCRE extension included, which is standard with PHP nowadays anyhow), and nothing more.
Also remember, as Matt has already suggested, to always upload outside of your root directory, to ensure that uploaded files can't be exploited by attackers (such as an attacker attempting to compromise your system by uploading backdoors or trojans). Viruses are not the only threat you need to worry about, and the vast majority of anti-virus solutions nowadays do not solely focus on viruses. Matt is also entirely correct in pointing out that no anti-virus solution is perfect, and for that reason anyone allowing file uploads to their website or server needs to remain vigilant: an anti-virus solution is a must-have for anyone in that situation, but no holy grail of internet security that'll cover every possible threat exists. Also, renaming files isn't only about ensuring that they can't execute (as might be inferred from the original poster's reply comment regarding EXEs): renaming files also reduces the risk of threats such as directory traversal attacks, as well as the risk of an attacker overwriting an already existing file on a targeted system as a means to hide their dirty work.
Regarding the threat of malicious files being missed by an anti-virus solution and then potentially infecting the system they're uploaded to: what a hostmaster or webmaster could do in this situation is employ some quick and simple encoding process that renders the file non-executable by the system itself, but which can be easily reversed by the PHP script responsible for serving that file on request, such as by using base64_encode(), bin2hex(), or even by just rotating a few characters and adding a salt to displace the file's magic number, or something similar.
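For illustration of that last idea, a tiny sketch of a reversible byte rotation in C (the key value is arbitrary, and this is obviously not encryption; it merely displaces the magic number so the system won't execute or parse the file until the serving script reverses it):

#include <stdio.h>
#include <stddef.h>

/* Rotate every byte by `key` (wraps mod 256); reversed by subtracting. */
static void rotate(unsigned char *buf, size_t len, unsigned char key)
{
    for (size_t i = 0; i < len; i++)
        buf[i] = (unsigned char)(buf[i] + key);
}

static void unrotate(unsigned char *buf, size_t len, unsigned char key)
{
    for (size_t i = 0; i < len; i++)
        buf[i] = (unsigned char)(buf[i] - key);
}

int main(void)
{
    unsigned char magic[] = { 0x7f, 'E', 'L', 'F' };   /* an ELF header */
    rotate(magic, sizeof magic, 42);    /* stored form: no longer an ELF */
    unrotate(magic, sizeof magic, 42);  /* restored on request */
    printf("%c%c%c\n", magic[1], magic[2], magic[3]);  /* prints ELF */
    return 0;
}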

Two way sync with rsync

I have a folder a/ and a remote folder A/.
I now run something like this in a Makefile:
get-music:
	rsync -avzru server:/media/10001/music/ /media/Incoming/music/
put-music:
	rsync -avzru /media/Incoming/music/ server:/media/10001/music/
sync-music: get-music put-music
When I make sync-music, it first gets all the diffs from the server to local and then the opposite, sending all the diffs from local to the server.
This works very well, but only as long as there are just updates or new files in the future. If there are deletions, it doesn't do anything.
rsync has --delete and --delete-after options to help accomplish what I want, but the thing is, they don't work in a two-way sync.
Deleting server files on a sync, when the local files have been deleted, works; but if, for some reason (explained below), files that were deleted on the server still exist locally, I want them removed locally rather than copied back to the server (which is what happens).
Thing is I have 3 machines in context:
desktop
notebook
home-server
So, sometimes files on the server will have been deleted by a notebook sync, for example; then, when I run a sync from my desktop (where those files still exist), I want them deleted on the desktop, not copied back to the server.
I guess this is only possible with a database and track of operations :P
Any simpler solutions?
Thank you.
Try Unison: http://www.cis.upenn.edu/~bcpierce/unison/
Syntax:
unison dirA/ dirB/
Unison asks what to do when files are different, but you can automate the process by using the following which accepts default (nonconflicting) options:
unison -auto dirA/ dirB/
unison -batch dirA/ dirB/ asks no questions at all, and writes to output how many files were ignored (because they conflicted).
Note: I am no longer using Unison (I now use NextCloud, which doesn't address the original use case). However, note that rsync is not designed for bidirectional sync, while Unison is. Unison may have its bugs (as any other piece of software) and its wrinkles. I am surprised it seems to be actively maintained now (last time I looked, I thought it looked dead), but I'm not sure what the state is nowadays. I haven't had the need for a two-way file synchronizer lately, so there may be better options by now, though.
Since the original question also involves a desktop and a laptop, and an example involving music files (hence he's probably using a GUI), I'd also mention one of the best bi-directional, multi-platform, free and open source programs to date: FreeFileSync.
It's GUI based, very fast, and intuitive; it comes with filtering and many other options, including the ability to connect remotely, to view and interactively manage "collisions" (for example, files with similar timestamps), and to switch between bidirectional transfer, mirroring, and so on.
FreeFileSync can easily sync two computers on the same network and also sync two computers on different and remote networks.
On same network: have FreeFileSync use the local file system on one side and a shared network drive / path on the other. On Windows systems you enable file / disk sharing on one computer and access that share from the other. I use FreeFileSync this way to keep my main development PC source code synced with my 2 laptops.
I have also synced one of these laptops with a Linux server with Samba installed and sharing one of its directories.
Across networks: create a VPN and do the same as above. FreeFileSync will see the remote disk as if it were on the local network. Or buy a router that allows you to connect a USB disk to it and share it over the internet. I have installed a VPN on a remote Linux server and used it through the OpenVPN Windows client.
You could also try bitpocket: https://github.com/sickill/bitpocket
Try this:
get-music:
	rsync -avzru --delete-excluded server:/media/10001/music/ /media/Incoming/music/
put-music:
	rsync -avzru --delete-excluded /media/Incoming/music/ server:/media/10001/music/
sync-music: get-music put-music
I just tested this and it worked for me. I'm doing a 2-way sync between Windows 7 (using Cygwin with the rsync package installed) and a FreeNAS fileserver (FreeNAS runs on FreeBSD with the rsync package pre-installed).
You might use Osync: http://www.netpower.fr/osync , which is rsync based with intelligent deletion propagation. It also has multiple options like resuming a halted execution, soft deletion, and time control.
You could try csync; it is the sync engine under the hood of ownCloud.
I'm surprised no one has mentioned Syncthing yet. I have been using it for years to synchronize my phone, my tablet and my two laptops. One time I also used it to send 10 GB of photos to my family ~600 km away, straight from my machine to their machine, and it was incredibly fast (despite the data getting routed through Syncthing's discovery server to work around NAT issues). I also tried OwnCloud/NextCloud at some point but Syncthing has been much more reliable and, also, much faster.
I'm now using SparkleShare: https://www.sparkleshare.org/
It works on Mac, Linux, and Windows.
I'm not sure whether it works with two-way syncing, but for --delete to work you also need to add the --recursive parameter.
Rclone is what you are looking for. Rclone ("rsync for cloud storage") is a command line program to sync files and directories to and from different cloud storage providers including local filesystems. Rclone was previously known as Swiftsync and has been available since 2013.

Getting proxy information on Linux programmatically

I am currently using libproxy to get the proxy information (if any) on Red Hat and Debian Linux. It doesn't work all that well, but it's the only way I know of to get the proxy information from my code.
I need to stop using the lib, since in most cases it doesn't recognize the proxy.
Is there any way to acquire the proxy information? What I mean is: is there a file (or group of files) I can read, an environment variable, or an API or system call that I can use to get the information?
GNOME-based code is OK, and KDE might help as well, but I am looking for something more generic.
The code is C.
Now, before anyone asks, I don't want to use libproxy anymore. Period. I don't want to start investigating why it doesn't work. I don't really want to know whether there is a new version of that lib. I know it might work; I just don't want to use it. I can't use it (just because). So please don't point me that way.
Code is appreciated.
Thanks.
On Linux, the "global proxy setting" is typically just environment variables, usually set in /etc/profile. You can examine those variables to see what proxy is set.
The variables are:
http_proxy - the proxy for HTTP connections
ftp_proxy - the proxy for FTP connections
Using the Network Proxy Preferences tool under GNOME saves the information in the GConf database. The paths to the keys are /system/http_proxy and /system/proxy. You can read about the details of those trees at this page.
You can access the GConf database using the library API. Note that GConf is based on GObject. To examine the contents of this tree using the command line, try the following:
gconftool-2 -R /system/http_proxy
This will provide a "name = value" listing of the tree, which may be usable in your application. Note that this requires a system() call, so it's not recommended for a deployed application, but it might help you get started.
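Since the answer mentions shelling out, here is a minimal C sketch that captures that listing with popen() instead of system(); it assumes gconftool-2 is on the PATH (GNOME 2-era systems), and parsing the output is left out:

#include <stdio.h>

int main(void)
{
    FILE *p = popen("gconftool-2 -R /system/http_proxy", "r");
    if (p == NULL) { perror("popen"); return 1; }

    char line[256];
    while (fgets(line, sizeof line, p) != NULL)
        fputs(line, stdout);   /* lines look like " host = proxy.example.com" */

    return pclose(p) == 0 ? 0 : 1;
}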
GNOME has its own place to store the proxy settings, and I am sure KDE or any other DE has its own place too. Maybe you can look for any mention of where proxy settings should be stored in the Linux Standard Base. That could hint at a standard way of doing it irrespective of distro or DE.
DE -> Desktop Environment
char* proxy = getenv("all_proxy");
This statement puts the value of the environment variable called all_proxy, which is used by the system as a global proxy, in your C variable.
To print it in bash, try env | grep 'all_proxy' | cut -d= -f 2.
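Expanding that one-liner into a self-contained sketch: check all_proxy first, then fall back to a scheme-specific variable. Note these names are conventions honored by most tools rather than a formal standard, and some programs also consult upper-case variants such as HTTP_PROXY:

#include <stdio.h>
#include <stdlib.h>

/* Returns the proxy URL for a scheme, or NULL if none is configured. */
static const char *find_proxy(const char *scheme_var)
{
    const char *p = getenv("all_proxy");          /* global override */
    if (p == NULL || *p == '\0')
        p = getenv(scheme_var);                   /* e.g. "http_proxy" */
    return (p != NULL && *p != '\0') ? p : NULL;
}

int main(void)
{
    const char *proxy = find_proxy("http_proxy");
    printf("http proxy: %s\n", proxy ? proxy : "(none)");
    return 0;
}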
