I don't understand why my code works when I take out the loop and the variables and execute each line manually. At first I thought my variables were wrong, but then I tested my code with the variables but no loop, and it worked.
If I put back in the loop (the only thing I'm changing), I get these weird stty errors.
while read p; do
#Send file
scp random_file.txt $p:/me/folder
#Log in
ssh $p"@myserver.txt"
#List file, extract file, append file
#Code here
#log out
exit
done <usernames.txt
I've googled this error (which is a pretty common one) ad nauseam, but none of the solutions work. Neither disabling nor forcing pseudo-tty allocation helps; I always get an error, no matter which option I use:
With the -t -t option:
tcgetattr: Inappropriate ioctl for device
With the -t option:
Pseudo-terminal will not be allocated because stdin is not a terminal.
stty: : Invalid argument
With the -T option:
stty: : Invalid argument
So how do I get around these stty errors and why does it stop working when I put it in a loop?
The input redirection with <usernames.txt is replacing the standard input with the file usernames.txt. Hence the terminal is no longer the input, causing these errors. One way around this is to use a file descriptor other than standard input, e.g.:
while read p <&3; do
…
done 3<usernames.txt
Another problem you have is that the commands within the loop are executed locally, not over ssh on the remote machine, so the exit will exit your local shell (after ssh returns when you manually log out). You can put the commands to execute remotely on the ssh command line (see the ssh manual, or, e.g., this question), which may eliminate your need to have the terminal as standard input in the first place.
An interactive ssh drops you into a remote shell; ssh run from a script does not. The body of your loop after the ssh line does not happen on the remote system when scripted this way. It happens locally.
If you want to run code on the remote machine in the context of that ssh connection then you need to write it all as the command argument to the ssh command and/or write a script on the remote machine and execute that script as the ssh command argument.
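Putting both fixes together, the loop might look like the sketch below. The host name and paths are taken from the question and are assumptions; the remote steps are passed as a single argument to ssh so they actually run on the remote machine, and no trailing exit is needed:

```shell
#!/bin/sh
# usernames.txt holds one login name per line (as in the question).
while read -r p <&3; do
    # Send the file; stdin is still the terminal if scp needs it.
    scp random_file.txt "$p@myserver:/me/folder"
    # Everything inside the quotes runs on the remote machine in one
    # ssh invocation: list, extract, append (placeholder commands).
    ssh "$p@myserver" 'cd /me/folder && ls && cat random_file.txt >> combined.txt'
done 3< usernames.txt
```

The descriptor trick (3< and <&3) keeps the loop's input separate from standard input, so ssh and scp can still talk to the terminal.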
When trying to use the r or run commands in lldb I get an error like this: error: shell expansion failed (reason: invalid JSON). consider launching with 'process launch'.
It works when I just use process launch, but I'd rather not have to do that every time.
Is there any way I could make either an alias or make shell expansions not fail?
The way lldb does shell expansion is to run a little tool called lldb-argdumper (it is in Xcode.app/Contents/SharedFrameworks/LLDB.framework/Resources on macOS) with the command arguments that you passed. lldb-argdumper wraps the contents of argv as JSON and writes that to stdout. lldb then parses the JSON back into args and inserts the args one by one into the argc/argv array when it launches the process.
Something in the output is not getting properly wrapped. You can probably see what it is by looking at the output of lldb-argdumper with your arguments. Whatever it is, it's a bug, so if you can reproduce it, please file a report with your example at http://bugs.llvm.org.
(lldb) command alias run-no-shell process launch -X 0 --
will produce an alias that doesn't do shell expansion. You can also put this in your ~/.lldbinit.
I ran into this recently. TL;DR: make sure your shell does not echo anything during initialization. Run <your-shell> -c date to confirm; only the date should be printed.
The problem was that my shell's initialization file was echoing some stuff, which was getting prepended to lldb-argdumper's JSON output. (lldb doesn't run lldb-argdumper directly; it invokes your default shell to run lldb-argdumper.)
Specifically, I use fish as my shell, which does not have separate initialization paths for interactive and non-interactive sessions. (See this issue for discussion of whether this is good.) bash and zsh have separate init files for interactive/non-interactive sessions, which makes avoiding this problem slightly easier.
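As a sketch, one way to keep an init file quiet for non-interactive shells in bash/zsh/POSIX sh (fish users would use 'status is-interactive' instead): test whether $- contains the i flag before printing anything.

```shell
# In e.g. ~/.bashrc: only print the greeting when the shell is
# interactive, so non-interactive invocations (like the one lldb uses
# to run lldb-argdumper) produce clean output.
case $- in
    *i*) echo "welcome" ;;   # interactive: safe to print
    *)   ;;                  # non-interactive: stay silent
esac
```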
I am developing a piece of software in C that needs to SSH to another machine and run commands as root.
Something that looks like this:
char* GetPasswd(void);
void run(char* apnSshCommand)
{
FILE* lphSshFD = popen(apnSshCommand,"w");
fprintf(lphSshFD,GetPasswd());
fflush(lphSshFD);
fprintf(lphSshFD,"#Command to run in shell");
fflush(lphSshFD);
}
GetPasswd() would be a callback to a gui where the user has typed in the password
I know that the above code is not possible as written, since SSH reads the password for authentication from its own /dev/tty.
I have read posts such as this one that tease an answer using ioctl() and fcntl() but do not provide one, along with this one showing it is possible from the command line, but I have not been able to translate that into C.
Using expect is NOT an option
Using SSH keys is NOT an option
The SSH C library is NOT an option
Using sshpass is NOT an option
Without these, the only thing that I can think of is starting a new child process and redirect/close file descriptors to control what ssh has access to.
EDIT: These restrictions come from the fact that the system I am working on is extremely old and does not have tools such as expect, sshpass, or the SSH C library, and it is subject to multiple restrictions on when SSH keys can be used.
This works for me:
Create a script called pwd.sh that emits the password:
#!/bin/bash
echo -n mypassword
Run ssh in a new session like this:
SSH_ASKPASS=/path/to/pwd.sh DISPLAY= setsid -w ssh root@192.168.1.10 <command>
The effect of setsid is to run ssh detached from the controlling terminal, which is what is needed for it to respect SSH_ASKPASS.
I haven't tried, but I would expect to be able to run this command from C using system().
For the record, I tested this with OpenSSH_7.2p2 on Ubuntu.
I was able to come up with a solution by looking at the source code for sshpass, which I found here.
In case this is helpful, here's my environment: Debian 8, gcc (with -std=gnu99).
I am facing the following situation:
In my C program, I get a string (char* via a socket).
This string represents a bash command to execute (like 'ls ls').
This command can be any bash construct, and it may be complex (pipelines, lists, compound commands, coprocesses, shell function definitions...).
I cannot use system or popen to execute this command, so I currently use execve.
My concern is that I have to "filter" certain commands.
For example, the rm command may only be applied to the "/home/test/" directory. All other destinations are prohibited.
So I have to prevent the command "rm -r /" but also "ls ls && rm -r /".
That means I have to parse the command line I am given, find all the commands in it, and apply my filters to them.
And that's where I begin to be really lost.
The command can be of any complexity, so to handle pipelines (execve executes one command at a time) or to find all the commands and apply my filters, I would have to develop a parser identical to sh's.
I do not like reinventing the wheel, especially if I would make it square.
So I wonder if there is a feature in the C library (or the GNU one) for that.
I have heard of wordexp, but I do not see how it would manage pipelines, redirections, or the rest (in fact, it does not seem made for this), and I do not see how I could retrieve all the commands inside the command line.
I read the man page of sh(1) to see if I could use it to "parse" but not execute a command, but so far I have found nothing.
Do I need to code a parser from scratch?
Thanks for reading, and I apologize for my bad English: it is not my mother tongue (thanks, Google Translate...).
Your problem:
I am facing the following situation: In my C program, I get a string (char* via a socket). This string represents a bash command to execute (like 'ls ls'). This command can be any bash, as it may be complex (pipelines, lists, compound commands, coprocesses, shell function definitions ...).
How do you plan on authenticating who is at the other end of the socket connection?
You need to implement a command parser with security considerations, apparently to run commands remotely, as implied by "I get a string (char* via a socket)".
The real solution:
How to set up SSH without passwords
Your aim
You want to use Linux and OpenSSH to automate your tasks. Therefore you need an automatic login from host A / user a to host B / user b. You don't want to enter any passwords, because you want to call ssh from within a shell script.
Seriously.
That's how you solve this problem:
I receive on a socket a string that is a shell command and I have to execute it. But before executing it, I have to ensure that there is not a command in conflict with the rules (like 'rm only inside this directory', etc.). For executing the command, I can't use system or popen (I use execve). The rest is up to me.
Given
And that's where I begin to be really lost.
Because what you're being asked to do is implement security features and command parsing. Go look at the amount of code in SSH and bash.
Your OS comes with security features. SSH does authentication.
Don't try to reinvent those. You won't do it well - no one can. Look how long it has taken for bash and SSH to get where they are security-wise: literally decades of history and knowledge were built into them when they were first coded, and they are still being hardened.
This is the command I'm using:
rsync --partial --timeout=60 --rsh='/usr/bin/ssh -i /root/.ssh/id_rsa' /path/file user@host:/remote_path/
This works when I run it on the command line, but does not work when I use system() in my C program.
Correction: This call will not work after boot-up, no matter how long the program runs. If the program is restarted, it will work every time, no matter how many times the program is run.
status = system("rsync --partial --timeout=60 --rsh='/usr/bin/ssh -i /root/.ssh/id_rsa' /path/file user@host:/remote_path/");
The return value from rsync is 12: Error in rsync protocol data stream.
Turns out that the problem was the environment variables. HOME was set to '/' at start-up instead of '/user'. ssh was unable to locate the known_hosts file and therefore the auto-login failed, causing rsync to fail.
I'm using unix system() calls to gunzip and gzip files. With very large files sometimes (i.e. on the cluster compute node) these get aborted, while other times (i.e. on the login nodes) they go through. Is there some soft limit on the time a system call may take? What else could it be?
The calling thread should block indefinitely until the task you initiated with system() completes. If what you are observing is that the call returns and the file operation has not completed, it is an indication that the spawned operation failed for some reason.
What does the return value indicate?
Almost certainly not a problem with your use of system(), but with the operation you're performing. Always check the return value, but even more so, you'll want to see the output of the command you're calling. For non-interactive use, it's often best to write stdout and stderr to log files. One way to do this is to write a wrapper script that checks for the underlying command, logs the command line, redirects stdout and stderr (and closes stdin if you want to be careful), and then execs the command line. Run this via system() rather than the OS command directly.
My bet is that the failing machines have limited disk space, or are missing either the target file or the actual gzip/gunzip commands.
I'm using unix system() calls to gunzip and gzip files.
Probably silly question: why not use zlib directly from your application?
And system() isn't a system call; it is a wrapper for fork()/exec()/wait(). Check the system() man page. If it doesn't unblock, it might be that your application interferes somehow with wait() - e.g., do you have a SIGCHLD handler?
If it's a Linux system I would recommend using strace to see what's going on and which syscall blocks.
You can even attach strace to already running processes:
# strace -p $PID
Sounds like I'm running into the same intermittent issue, indicating a timeout of some kind. My script runs every day, and I'm starting to believe gzip has a timeout.
gzip -vd filename.txt.gz 2>> tmp/errorcatch.txt 1>> logfile.log
stderr shows: Error for filename.txt.gz
The script moves on to the next command, 'cp filename* new/directory/', resulting in a still-zipped version of filename in the new directory.
stdout from the earlier gzip shows a successful unzip of the SAME file:
filename.txt.gz: 95.7% -- replaced with filename.txt
But the unzipped output file from gzip is not there in the source or the new directory.
Following the alerts, a manual run of 'gzip -vd filename.txt.gz' never fails.
Details:
Only one call in script to unzip that file
Call for unzip is inside a function (for more robust logging and alerting)
Unable to strace in production
Unable to replicate locally
In occurrences over the last month, found no consistency in file size
I'll simply be working around it with retry logic and general scripting improvements, but I want the next googler to know they're not crazy. This is happening to other people!