Communicating between a shell script and C

I have a shell script that randomly generates a location and copies some files to it.
I also have a separate C program that needs to know this randomly generated location in order to access the copied files.
The shell script and the C program run independently (the shell script first, then the C program). The C program is invoked by a third application, so I cannot pass the location to it directly.
How can I securely save this "randomly generated location" somewhere the C program can read it?
I am running these scripts on a Mac and would prefer a solution that keeps the data in memory, or at least does not create a file in a common location (like /tmp, /var/tmp, etc.).

There are various ways to share the information. Personally I don't find saving to a file to be a problem, since you can use the filesystem's access control to limit access, and/or encrypt the file.
However, specifically on macOS there are some other ways, such as User Defaults (accessible from command-line with defaults), and Keychain (accessible from command-line with security).
Saving to user defaults is effectively saving to a file (accessible by that user), so for security (other than through obscurity) you would still need to encrypt the data. Meanwhile Keychain is built for storing things securely, but setting up access to it is more difficult (and you may inadvertently grant your shell interpreter permanent access).
Still, it may be worthwhile to try something like:
security add-generic-password -a myUserName -s myService -w '/foo/bar/baz'
security find-generic-password -g -a myUserName -s myService
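If you go the Keychain route, the C program can retrieve the stored path at run time by invoking the same security tool. A minimal sketch (it reuses the account and service names from the commands above; -w prints only the stored secret):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char path[1024] = {0};

    /* Ask the login keychain for the stored location. */
    FILE *p = popen("security find-generic-password -w "
                    "-a myUserName -s myService", "r");
    if (p == NULL) { perror("popen"); return 1; }

    if (fgets(path, sizeof path, p) == NULL) {
        fprintf(stderr, "no value found in the keychain\n");
        pclose(p);
        return 1;
    }
    pclose(p);

    path[strcspn(path, "\n")] = '\0';   /* strip the trailing newline */
    printf("files are in: %s\n", path);
    return 0;
}

Depending on your keychain's access-control settings, the first read may trigger an authorization prompt for the calling process.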

Related

How do I input a password from a makefile or system() call?

I'm working on a C project that makes connections to remote servers. Commonly, this involves using some small terminal macros I've added to my makefile to scp an executable to that remote server. While convenient, the only part of this I've not been able to readily streamline is the part where I need to enter the password.
Additionally, in my code, I'm already using system() calls to accomplish some minor terminal commands (like sort). I'd ALSO like to be able to enter a password if necessary here. For instance, if I wanted to build a string in my code to scp a local file to my remote server, it'd be really nice to have my code pull (and use) a password from somewhere so it can actually access that server.
Does anyone a little more experienced with Make know a way to build passwords into a makefile and/or a system() call in C? Bonus points if I can do it without any third-party software/libraries. I'm trying to keep this as self-contained as possible.
Edit: In reading responses, it's looking like the best strategy is to establish a preexisting ssh key relationship with the server to avoid the login process via something more secure. More work up front for less work in the future, by the sound of it, with additional security.
Thanks for the suggestions, all.
The solution is to not use a password. SSH, and thus SCP, supports public-key authentication (among many other methods), which is described all over the internet. Use that.
Generally, the problem you're trying to solve is called secret management, and the takeaway is that your authentication tokens (passwords, keys, API keys…) should not be owned by your application software, but by something instructing the authenticating layer. In other words, the way forward is to let SSH connect on its own, without you entering a password, by choosing an authentication method that happens not to be interactive. So, using a password here is simply less elegant than the generally preferred approach of authenticating to your server with a key pair.
Passing passwords as command-line options is generally a bad idea – it leaks them into process listings, potentially log entries, and so on. Don't do it.
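For illustration (the host and paths here are hypothetical), once key-based authentication is set up the system() call from C needs no password handling at all:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Hypothetical file and destination; with public-key auth there is no prompt. */
    const char *local_file = "build/myprog";
    const char *remote = "user@example.com:/home/user/bin/";
    char cmd[512];

    snprintf(cmd, sizeof cmd, "scp %s %s", local_file, remote);

    /* Only safe because both strings are compile-time constants;
       never splice untrusted input into a shell command. */
    if (system(cmd) != 0) {
        fprintf(stderr, "scp failed\n");
        return 1;
    }
    return 0;
}

The same applies to the makefile case: a plain scp target works unattended as soon as your key is in the remote's authorized_keys.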
Running ssh-keygen to create the keys, and then appending the local system's public key (e.g. .ssh/id_rsa.pub) to the remote's .ssh/authorized_keys file, is the best way to go.
But I had remote systems to access without passwords where the key files were not yet installed on the remote (so ssh-keygen still needed to be run there), or where the remote .ssh/authorized_keys file did not have my local system's public key in it.
I wanted a one-time automated/unattended script to add it. A chicken-and-the-egg problem.
I found sshpass.
It will work like ssh and provide the password (similar to what expect does).
I installed it once on the local system.
Using this, the script would:
Run ssh-keygen on the remote [if necessary]
Append the local .ssh/id_rsa.pub public key file to the remote's .ssh/authorized_keys
Copy back the remote's .ssh/id_rsa.pub file to the local system's .ssh/authorized_keys file [if desired]
Then, ssh etc. worked without any passwords.
UPDATE:
ssh-copy-id is your friend, too.
I had forgotten about that. But, when I was doing this, I had more complex requirements.
The aforementioned script would merge/combine all the public keys and update all the authorized_keys files on all the systems. This would be repeated anytime any new system was added to the mix.
You never need to run ssh-keygen on a remote host, especially not to generate an authorized_keys file. – Marcus Müller
I think that could be inferred from the answer, but it wasn't stated as a requirement [particularly in context]. I hope the answer wasn't -1'd for that.
Note that running ssh-keygen on the remote (step 1) is needed for copying back the remote's public key (step 3).
Ironically, one of the tutorial pages for ssh-copy-id says to run ssh-keygen first...
It's been my experience when setting up certain types of systems/clusters (e.g. a development host/PC and several remote/target/test ones) that if one wants to do local-to-remote actions, invariably one also wants to do:
remote-to-local actions -- (e.g.) I'm ssh'ed into a remote system and want to do rcp back to the development system.
The remote system needs to do a git clone/pull from [and, sometimes, git push to] the local git server.
remote-to-remote -- copying/streaming data between target systems.
This requires that each system have a private/public key pair and all systems have an authorized_keys file that has the public keys of all the other systems.
When I've not set up the systems that way it usually comes back to haunt me [usually late at night when I'm tired]. So, I just [axiomatically] set it up that way at the outset.
That was one of the reasons I developed the script in the first place. Also, since we didn't want to maintain a fork of a given system/distro installer for production systems, we would:
Use the stock/standard distro installer CD/USB
Use the script to add the extra/custom config, S/W, drivers, etc.

How can I get access to files transferred from Windows to WSL-2 Ubuntu?

I have a Linux subsystem installed on my Windows machine. I've transferred a tar.gz file I want to access by finding the location of my subsystem and dragging the files over. But when I run the command:
tar -zxvf file_name.tar.gz
I get the error:
tar (child): vmd-1.9.4a51.bin.LINUXAMD64-CUDA102-OptiX650-OSPRay185.opengl.tar.gz: Cannot open: Permission denied
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
I assume permission being denied is to do with having transferred from Windows since I couldn't access directories I created through Windows either. So, is there something I need to change to gain access to these files?
(PS: I know there are other ways of getting tar.gz files besides transferring from Windows, but I'll need to do this for other folders too; I only included the file type in case it was relevant.)
EDIT: You shouldn't attempt to drag files over. See answer below.
For starters, this belongs on Super User since it doesn't deal directly with a programming question. But since you've already provided an answer here that may be slightly dangerous (and even in your question), I didn't want to leave this unanswered for other people to find inadvertently.
If you used the first method in that link, you are using a WSL1 instance, not WSL2. Only WSL1 made the filesystem available in that way. And it's a really, really bad idea:
There is one hard-and-fast rule when it comes to WSL on Windows:
DO NOT, under ANY circumstances, access, create, and/or modify Linux files inside of your %LOCALAPPDATA% folder using Windows apps, tools, scripts, consoles, etc.
Opening files using some Windows tools may read-lock the opened files and/or folders, preventing updates to file contents and/or metadata, essentially resulting in corrupted files/folders.
I'm guessing you probably went through the install process for WSL2, but you installed your distribution before setting wsl --set-default-version 2 or something like that.
As you can see in the Microsoft link above, there's now a safe method for transferring and editing files between Windows and WSL - the \\wsl$\ tmpfs mounts. Note that as a tmpfs mount stored in memory, it's really more for transferring files over. They will disappear when you reboot or shut down WSL.
But even if you'd used the second method in that article (/mnt/c), you probably would have run into permissions issues. If you do, the solution should be to remount the C: drive with your uid/gid as I describe here.

Restricting folder/file access to one program?

What I need, boiled down, is a way to 'selectively' encrypt either a folder or a zip file. Whatever the solution is, it needs to block (or redirect) all reads/writes EXCEPT those from one specific program (not mine, but a legacy application that I do not have source code access to, so I cannot modify the program that would have sole permission to read and write the encrypted folder/zip file). I would like to avoid a constantly running background app, as all the end user would have to do to circumvent the protection is kill that program.
The purpose is to, of course, protect the files within the folder from tampering.
I could modify folder permissions at install time, but this would block all programs from access, wouldn't it? I more or less need to block only File Explorer from accessing the files, but not the program which needs to read them... if that makes sense. Or, if I could protect the (plaintext) files somehow without affecting the legacy application's reading of them... argh.
I wonder if it would be possible, with CreateProcess(), to run the legacy application as a high-level user and give the folders it needs access to the same permission, such as TrustedInstaller or SYSTEM (which, in Windows, own things that not even administrators can touch, like System Volume Information).
This would allow the program to read/write to the folders, but not the user.
I was looking at LockFile; it seems close to what I am looking for, but not quite. I need something like semi-exclusive access.
I am fairly fluent in C++ and Visual Basic.NET, and know some Python, but I am willing to use any language that would allow a solution to this problem (though it could probably be implemented in any language, if it is possible at all).
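For what it's worth, here is a rough sketch of the exclusive-access idea (the path is hypothetical): opening the file with CreateFileW and a share mode of 0 prevents any other process, including File Explorer, from opening it while the handle is held. Note that this only protects the file while some process keeps the handle open, which runs straight into the "no constantly running background app" constraint above.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical path; a share mode of 0 means no other process can open
       the file for reading, writing, or deletion while this handle is open. */
    HANDLE h = CreateFileW(L"C:\\data\\protected.dat",
                           GENERIC_READ | GENERIC_WRITE,
                           0,                    /* no sharing */
                           NULL,
                           OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL,
                           NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFileW failed: %lu\n", GetLastError());
        return 1;
    }
    /* ... work with the file ... */
    CloseHandle(h);
    return 0;
}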

Simple way to transfer data files out of a remote virtual machine

Currently I am using Git, through the command line, to transfer data files (.csv) from my Google Cloud VM instance (running Linux) to my local machine. However, there is a limit of 25 MB per file on GitHub. The files will be 1 GB at most.
Are there other straightforward methods to do this? Maybe I can add a couple lines to the code and push the csv to a database. I have not come across a simple way to do so yet.
Are there other straightforward methods to do this?
Yes, on Linux you have many options, but scp might be the most straightforward.
If you can ssh to the instance directly, say ssh user@host or (with a key) ssh -i key user@host, then you can secure-copy as well with much the same commands:
scp -i key user@host:source_path/remote_file . to copy the remote file source_path/remote_file to the current folder (or vice versa)
scp -i key local_file user@host:destination_path to copy local_file from the current local folder to the remote destination_path
Keep in mind that the user has to have the proper privileges to access the remote path/file in both cases. Archiving the file beforehand can help as well, especially with .csv files (tar cvzf my_archive.tar.gz my_csv_file.csv, for example).
Note: if you suffer from a bad network connection that breaks during such a large transfer, or have a bunch of files that have not changed but are still part of the copy procedure, then rsync might be a better option; there are certainly many more options depending on the actual requirements.

Can we change permissions from user to root?

I have written a C program that creates a file "abcd.txt" and writes some data into it. I was executing my code while logged in with the username "bobby", so the file abcd.txt was created with bobby as its owner.
But my task is that, even though I execute my code as the user "bobby", the file should always be created with root as its owner. Can someone tell me how this is possible?
As a general principle, you need your effective uid (euid) to be root either when you are writing the file or when you perform a chown(2) on the file.
If you are doing this under Linux, then there are Linux-specific methods you can use.
Generic Solution
Without availability of sudo
This is the old UNIX DAC approach, and it's fraught with peril. It assumes that you do not have something like sudo installed, or cannot install it.
Your executable should be owned by root and have its setuid bit set.
Process
You should use seteuid() to drop your privileges from root to bobby for most of the operation, including the writing. When you are done, bring your privilege level back up to root using seteuid(0) and perform a chown() (or fchown() on the fd) on the file to change its ownership to root.
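A minimal sketch of that sequence, assuming the program has been installed setuid-root (as described below) and using the filename from the question:

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    uid_t real_uid = getuid();   /* bobby's uid */

    /* Drop privileges for the risky part: creating and writing the file. */
    if (seteuid(real_uid) == -1) { perror("seteuid(drop)"); return 1; }

    int fd = open("abcd.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd == -1) { perror("open"); return 1; }
    if (write(fd, "some data\n", 10) == -1) { perror("write"); return 1; }

    /* Regain root (allowed because the saved set-user-ID is still root),
       then hand the file over to root. */
    if (seteuid(0) == -1) { perror("seteuid(restore)"); return 1; }
    if (fchown(fd, 0, (gid_t)-1) == -1) { perror("fchown"); return 1; }

    close(fd);
    return 0;
}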
Some basic safety
For safety, set it up so that your executable is owned by root:safegrp, where 'safegrp' is the name of a group restricted to the users who are allowed to execute this file (add bobby to safegrp), and ensure that the setuid executable's mode is 4510.
With availability of sudo
If sudo is available on your system, then follow the same process as above for dealing with privileges within the executable, but DO NOT set the setuid bit on the file. Instead, add safegrp to the sudoers entry for this executable; bobby can then run it with sudo /your/bin/prog.
Linux specific solution
POSIX.1e
It is possible to have tighter control over the file using POSIX.1e capabilities support. In your case you would want to grant CAP_CHOWN to your program.
For security reasons, I would probably set that up as a COMPLETELY separate binary or a sub-process, still use sudo, and perform the appropriate dropping of privileges.
The guide "Using Access Control Lists on Linux" has an excellent tutorial on this topic.
SE-Linux
You can use Mandatory Access Control to limit access to such a dangerous binary, but SELinux is a pain to configure :^) although it is possibly a good approach.
You probably don't want to run your program as root unless you really have to. Perhaps run "chown" from a shell script after running your program? Or you can use chown(2) from a program running as root (or with equivalent capabilities, on Linux).
Use the chown() method. There are probably more authoritative links, but this one is nice since it includes the calls to getpwnam(). I've done all of this in the past, but unfortunately I no longer have the code (it's owned by IBM).
http://manpages.courier-mta.org/htmlman2/chown.2.html
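A small sketch of that approach (assuming the calling process already runs as root, or with CAP_CHOWN on Linux, and reusing the filename from the question):

#include <stdio.h>
#include <pwd.h>
#include <unistd.h>

int main(void)
{
    /* Resolve the target owner by name instead of hard-coding uid 0. */
    struct passwd *pw = getpwnam("root");
    if (pw == NULL) { fprintf(stderr, "getpwnam failed\n"); return 1; }

    /* Requires root privileges (or CAP_CHOWN on Linux). */
    if (chown("abcd.txt", pw->pw_uid, pw->pw_gid) == -1) {
        perror("chown");
        return 1;
    }
    return 0;
}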
