I have a working solution already that processes a CSV file with a while read loop and IFS, but I'd like to have it all in a single bash script, as the input data never changes.
The data is a list of IP addresses and names:
10.0.0.1,server1
10.0.0.2,server2
172.16.0.1,server3
192.168.0.1,server4
The process itself will run ping/curl/wget as required, all the while echoing out which server and test it is on.
I can loop over the IP list on its own in the same file using a list and reading the items, but then I don't get the server's friendly name echoed out. So my question is: how should I approach this? I was thinking of creating the data as an array and then somehow parsing it into a read and splitting the tokens, but I'm not sure how. I also thought about writing the data out to a temp file, reading it back in, and deleting the temp file afterwards, but that seems messy. Any pointers appreciated.
In terms of a working solution (if someone wants to provide one instead of just advising), the output for the above data could just be echoed out like this:
Testing: $server, IP address: $ip, test 1.
Then I will just sub the tests as required.
Thanks
If you want to include the data directly in the script instead of reading it from a separate file, and you're already using a loop to read the existing data, the easiest way is probably to copy and paste the data file's contents into a here-document that the loop reads instead:
#!/usr/bin/env bash
declare -i testno=1 # Integer variable: with -i, testno+=1 adds arithmetically instead of appending a string
while IFS=, read -r ip server; do
echo "Testing: $server, IP address: $ip, test $testno"
testno+=1
done <<EOF
10.0.0.1,server1
10.0.0.2,server2
172.16.0.1,server3
192.168.0.1,server4
EOF
which will display
Testing: server1, IP address: 10.0.0.1, test 1
Testing: server2, IP address: 10.0.0.2, test 2
Testing: server3, IP address: 172.16.0.1, test 3
Testing: server4, IP address: 192.168.0.1, test 4
I was trying to upload a file through an application I wrote in C.
As I did not find any API, I decided to go through shell commands.
The input command line looked like this:
ftp -u ftp://ftpuser:password#123#x.x.x.x/test.txt /tmp/test.txt
Whenever a special character is present in the password, the login fails. When I tried a different user whose password had no special characters, the upload worked.
How can this issue be resolved, or is there another method available, such as an API, that could be used?
If any sample code is available, it would be a great help.
By special characters I mean #, $, etc. (e.g. password#123, password$123)
code snippet:
#include <stdio.h>

/* PSTRING, ErrGen and the error constants come from the
 * surrounding code base. */
void RunCommandWithPipe(PSTRING CmdLine)
{
    FILE *fp;
    int status;

    /* Run the command line through the shell, reading its output */
    fp = popen(CmdLine, "r");
    if (fp == NULL)
    {
        ErrGen(constErrOpenFile);
    }

    /* Wait for the command to finish and collect its exit status */
    status = pclose(fp);
    if (status == -1)
    {
        ErrGen(constErrCloseFile);
    }
}
This doesn't work because you are passing unfiltered metacharacters into the shell. This is very dangerous: if someone untrustworthy gets to decide the value of any of the parameters to your ftp command, such as the username, password, ftp server, or file name, then that person can run arbitrary shell commands.
You can see what's going on by putting an "echo" in front of your ftp command:
echo ftp -u ftp://ftpuser:password$123#x.x.x.x/test.txt /tmp/test.txt
You'll get this result:
ftp -u ftp://ftpuser:password23#x.x.x.x/test.txt /tmp/test.txt
The shell is trying to evaluate $1 as a variable, leaving an empty result.
There are a couple of things you can do.
1) Make the command safe by escaping all the metacharacters. Here you need to be very careful, taking a whitelist approach rather than just removing the special characters you happen to think of. With a whitelist you declare some set of characters safe, such as [A-Za-z0-9:_-], and every other character is either stripped out or escaped by preceding it with a backslash (e.g. "foo:bar$baz&abc" becomes "foo:bar\$bazabc"). If you go this way, don't try to enumerate all the characters you know to be special and escape just those: you will most likely forget some, and fail to handle input like this:
ftp -u ftp://ftpuser:; rm -rf /;echo #x.x.x.x/test.txt /tmp/test.txt
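For illustration, a minimal C sketch of the whitelist idea from option 1 above (whitelist_escape is a made-up helper name; this variant escapes rather than strips, and the safe set matches the example):

#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: return a newly malloc'd copy of src in which
 * characters from the whitelist pass through unchanged and every
 * other character is preceded by a backslash. Caller frees result. */
char *whitelist_escape(const char *src)
{
    /* Worst case, every character needs a backslash, plus the NUL. */
    char *dst = malloc(2 * strlen(src) + 1);
    char *out = dst;

    if (dst == NULL)
        return NULL;

    for (; *src != '\0'; src++) {
        if (!isalnum((unsigned char)*src) && strchr(":_-", *src) == NULL)
            *out++ = '\\';      /* escape anything outside the whitelist */
        *out++ = *src;
    }
    *out = '\0';
    return dst;
}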
2) Don't pass arguments on the shell command line; instead, control the FTP client through fread()/fwrite() on the pipe that popen() gave you.
In this case you launch the ftp client with no arguments. Then you write "OPEN 192.168.1.1" (or wherever you want to connect), then the username, then the password, then the GET or PUT command you want, and finally "EXIT" (or simply an EOF). You should read the result codes from the server: you'll get 200-series results on success, a 500-series result if the login is bad, and so on.
You still have to watch out when piping into the FTP client, because it honors shell escapes like "!rm -rf /", but there is much less opportunity for abuse than on the shell. You just need to make sure the strings you use to build your FTP commands are single lines and are always preceded by a valid FTP command. You should also watch out for any funny business with untrustworthy filenames (e.g. don't allow absolute paths, "..", and so forth).
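As a rough sketch of this second approach, assuming the classic command-line ftp client (its -n flag suppresses auto-login so the user and password can be supplied over the pipe). Note that a popen() pipe is one-way: to also read the server's result codes, as recommended above, you would need a pair of pipes around fork()/exec().

#include <stdio.h>

/* Sketch: drive the ftp client over a write pipe so the password
 * never appears on a shell command line. Inputs must still be
 * single-line strings with no "!" shell escapes. */
int upload_via_ftp(const char *host, const char *user,
                   const char *pass, const char *local,
                   const char *remote)
{
    FILE *fp = popen("ftp -n", "w");  /* -n: no auto-login */

    if (fp == NULL)
        return -1;

    fprintf(fp, "open %s\n", host);
    fprintf(fp, "user %s %s\n", user, pass);
    fprintf(fp, "put %s %s\n", local, remote);
    fprintf(fp, "bye\n");
    return pclose(fp);
}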
You are probably using the wrong charset to send the password.
I don't understand why my code works when I take out the loop and execute each line manually with the variables in place. At first I thought my variables were wrong, but then I tested the code with the variables and no loop, and it worked.
If I put the loop back in (the only thing I'm changing), I get these weird stty errors.
while read p; do
  #Send file
  scp random_file.txt "$p:/me/folder"
  #Log in
  ssh "$p@myserver"
  #List file, extract file, append file
  #Code here
  #log out
  exit
done <usernames.txt
I've googled this error (which is a pretty common one) ad nauseam, but none of the solutions work for me. Neither disabling nor forcing pseudo-tty allocation helps; I always get an error, whatever the option:
With -t -t:
tcgetattr: Inappropriate ioctl for device

With -t:
Pseudo-terminal will not be allocated because stdin is not a terminal.
stty: : Invalid argument

With -T:
stty: : Invalid argument
So how do I get around these stty errors and why does it stop working when I put it in a loop?
The input redirection <usernames.txt replaces standard input with the file usernames.txt, so the terminal is no longer the input, and that is what causes these errors. One way around this is to have read use a file descriptor other than standard input, e.g.:
while read p <&3; do
…
done 3<usernames.txt
Another problem you have is that the commands within the loop are executed locally, not over ssh on the remote machine, so the exit will exit your local shell (after you return from ssh by manually logging out). You can put commands to execute remotely on the ssh command line (see ssh manual, or, e.g., this question), which may eliminate your need to have the terminal as standard input in the first place.
Run interactively, ssh drops you into a remote shell; run from a script, it does not. The body of your loop after the ssh line does not happen on the remote system when scripted this way: it happens locally.
If you want to run code on the remote machine in the context of that ssh connection, then you need to pass it all as the command argument to ssh, and/or write a script on the remote machine and execute that script as the ssh command argument.
I'm trying to automatically retrieve data from a COM port using a batch file.
I'm able to configure the COM port and to send the command in order to ask my device for the info.
The problem is that I'm not able to capture the data that the device sends back. I've tried with RealTerm: the device works and sends the info back to the PC. But I really need the batch file to do it automatically. Here is the code:
@echo off
MODE COMxx ...
COPY retrievecommand.txt \\.\COMxx:
COPY \\.\COMxx: data.txt
Any suggestions?
Use the TYPE command in a loop built from the DOS GOTO command and a DOS LABEL. Use 'append output' to capture the text, as in TYPE COM1: >> Data.txt. The double >> means 'continually append' to Data.txt; a single > ('redirect output') would replace the contents of Data.txt on every loop iteration (whenever data is present on the port). You can add a second line that redirects to the monitor so you can watch the activity too, i.e. TYPE COM1: > CON (CON means the console, or monitor screen, but you can omit it since the console is the default anyway).
Control-Z is not needed by the TYPE command; it will just dump text continually until the operator presses Ctrl-C and then Y to break the loop. You don't really need to stop the loop unless you are done with the batch file altogether. The Data.txt file is available to other programs live, and will not raise a 'Sharing Violation' if you access it with another program, such as NOTEPAD.EXE, while the batch file is still looping.
Also, if you add a third line to the batch file such as TYPE COM1: > Data1.txt (note the single redirect), you will get a single line of instant text that disappears on the next iteration. Sometimes that is helpful when you need only one line of data. There are creative ways to extract one line of data into another text file using the DOS FIND command.
When reading, the COPY command continues until it detects the end of the file. Since the source is a device (a potentially infinite stream), it only knows to stop when it sees an end-of-file marker: the Ctrl-Z (0x1A) character.
The suggestion in the duplicate question of using the TYPE command to read is likely to result in the same problem.
There is no standard mechanism to read a single line. If you can port your application to PowerShell, you should be able to read single lines with the results you expect.
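If porting to PowerShell isn't an option, a small native helper is another route. A hedged C sketch using Win32 calls (COM1 is an assumption, and the port is presumed already configured, e.g. with MODE as in the question):

#include <stdio.h>
#include <windows.h>

/* Sketch: read one line from COM1 and print it. Assumes the port
 * was already set up (baud rate etc.) before this runs. */
int main(void)
{
    HANDLE h = CreateFileA("\\\\.\\COM1", GENERIC_READ, 0, NULL,
                           OPEN_EXISTING, 0, NULL);
    char line[256];
    DWORD got;
    size_t len = 0;

    if (h == INVALID_HANDLE_VALUE)
        return 1;

    /* Read byte by byte until a newline or the buffer is full. */
    while (len < sizeof(line) - 1 &&
           ReadFile(h, &line[len], 1, &got, NULL) && got == 1) {
        if (line[len++] == '\n')
            break;
    }
    line[len] = '\0';

    printf("%s", line);
    CloseHandle(h);
    return 0;
}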
I am working in OSX and using bash as my shell. I have a script which calls an executable hundreds of times, with each call independent of the others, so I am going to run them in parallel. However, each call to the executable appends output to a shared text file, one line per call.
The ordering of the text file doesn't matter (it would be nice, but it's not worth complicating things over, since I can just use the Unix sort command afterwards). What does matter is that every call of the executable gets its line into the file. My concern is that if I run the script in parallel, by some freak accident two processes will open the text file at the same time, each print their line, and then save conflicting copies back to the file's directory, nullifying one of the writes.
Does this actually happen, or is my understanding of printing to a file flawed? I don't know whether this varies case by case, so below is some mock code showing what my program does.
Script:
#!/bin/sh
abs=$1
input=$(echo "$abs" | awk '{print 0.004 + 0.005*$1 }')
./program "$input"
"./program":
~~Normal .c file stuff here~~
~~VALUE magically calculated here~~
~~run number is pulled out of input and assigned to index for sorting~~
FILE *fpp;
fpp = fopen("Doc.txt","a");
fprintf(fpp,"%d, %.3f\n", index, VALUE);
fclose(fpp);
~Closing events of program.c~~
Command to run the script (saved as, say, script.sh) in parallel from bash:
printf "%s\n" {0..199} | xargs -P 8 -n 1 ./script.sh
Thanks for any help you guys can offer.
A write() call (which fwrite() uses underneath) on a file opened with the append flag (O_APPEND for open(), which fopen()'s "a" mode sets) is guaranteed to avoid the race condition you describe, as long as each record goes out in a single write.
From the POSIX specification for open() (opengroup.org):

O_APPEND
    If set, the file offset shall be set to the end of the file prior to each write.
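As a minimal sketch of what that guarantee buys you (the file name and record format match the question's mock code): open with O_APPEND and emit each record with a single write(), so the offset repositioning and the write happen atomically with respect to other appenders.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Sketch: append one whole record with a single write() so the
 * O_APPEND offset guarantee covers the entire line. */
int append_record(int index, double value)
{
    char buf[64];
    int fd, len;

    len = snprintf(buf, sizeof(buf), "%d, %.3f\n", index, value);
    if (len < 0 || len >= (int)sizeof(buf))
        return -1;

    fd = open("Doc.txt", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd == -1)
        return -1;

    /* The file offset is set to EOF atomically with this write. */
    write(fd, buf, (size_t)len);
    return close(fd);
}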
Race conditions are what you are thinking of.
Not 100% sure, but if you simply append to the end of the file, rather than opening it and editing it in place, you should be all right.
If you have the option, make your program write to standard output instead of directly to a file. Then you can let the shell merge the output of your programs:
printf "%s\n" {0..199} | parallel -P 8 -n 1 ./program > merged_output.txt
Yeah, that looks like a recipe for disaster. If those processes both open the file at roughly the same time, only one write will "take".
I suggest either (easier) writing to separate files and then catting them together when the processing is done, or (harder) sending all results to a consumer process that writes the file on everyone's behalf; a sketch of the first option follows.
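A sketch of the easier option, assuming PID-named output files (the names are made up); afterwards the pieces can be merged with something like cat Doc.*.txt | sort -n > Doc.txt:

#include <stdio.h>
#include <unistd.h>

/* Sketch: each parallel worker appends to its own file, so no two
 * processes ever race on the same file offset. */
void write_own_file(int index, double value)
{
    char name[64];
    FILE *fp;

    snprintf(name, sizeof(name), "Doc.%ld.txt", (long)getpid());
    fp = fopen(name, "a");
    if (fp == NULL)
        return;
    fprintf(fp, "%d, %.3f\n", index, value);
    fclose(fp);
}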
I'm looking for the best way to read data from an stdin pipe in C.
The problem: I need to seek on this data, i.e. I need to read data from the start of the stream after reading some data at the end of that same stream.
Small use case: gunzip -c 4GbDataFile.gz | myprogram
Another one:
On local host : nc -l -p 1234 | myprogram
On remote host : gunzip -c 4GbDataFile.gz | nc -q 0 theotherhost 1234
I know that reading from a FIFO can be done only once. So, at the moment:
I slurp everything from stdin into memory and work from that allocated memory.
It's ugly, but it works. An obvious issue is that if someone sends a huge (or continuous) stream to my app, I'll end up with a big allocated memory chunk, or I'll run out of memory (think of an 8 GB file).
What I thought of next:
I set a size limit (maybe user-defined) on that memory chunk. Once I've read that much data from stdin:
Either I stop there, "Errr. Out of memory, bazinga. Forget it." style,
or I start dumping what I'm reading to a file, and work from that file once all the data is read.
But then, what is the point? I cannot find out the origin of the data I am reading: if it is a local 8 GB file, I'll just be dumping it to another 8 GB file on the same system.
So, my question is:
How do you efficiently read a lot of data from an stdin pipe when you have to seek back and forth in it?
Thanks in advance for your answers.
Edit:
My program needs to read metadata somewhere in the given file (where depends on the file format), and that may be at the end of the stream. Then it may read back other data at the start of the stream, then at another place, etc. In short: it needs access to any byte of the data.
An example would be reading the data of an archive file without knowing the archive format before starting to read from stdin: I need to check the archive metadata, find the archived file names and offsets, etc.
So I'll make a local copy of stdin content and work from it. Thanks everyone for your inputs ;)
You need to get your requirements clear: if you need to seek(), then obviously you can't take input from stdin, and you should take an input file name as an argument instead.
The data structure in your 4GbDataFile just doesn't lend itself to what you want to do. Think outside the box. Don't hammer your program into something it shouldn't even attempt. Try to fix the input format where it is generated so you don't need to seek back 4 GB.
In case you do like hammering: 4GB of in-core memory is pretty expensive. Instead, save the data read from stdin in a file, then open the file (or mmap it) and seek to your heart's content.
I think you should read the infamous Useless Use of Cat Award.
TL;DR: change cat 4gbfile | yourprogram to yourprogram < 4gbfile.
If you really insist on having it work with data from a pipe, you'll have to store the data in a temporary file at startup, then replace file descriptor 0 with a copy of the temp file's descriptor using dup2.
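A minimal sketch of that startup step, assuming POSIX (tmpfile() yields an unlinked temporary file that is cleaned up automatically; error handling trimmed):

#include <stdio.h>
#include <unistd.h>

/* Sketch: spool all of stdin into an anonymous temp file, then make
 * fd 0 refer to that file so later code can seek freely. */
int make_stdin_seekable(void)
{
    FILE *tmp = tmpfile();          /* unlinked; removed on exit */
    char buf[65536];
    size_t n;

    if (tmp == NULL)
        return -1;

    while ((n = fread(buf, 1, sizeof(buf), stdin)) > 0)
        fwrite(buf, 1, n, tmp);

    /* Replace fd 0 with the temp file's descriptor. */
    if (dup2(fileno(tmp), STDIN_FILENO) == -1)
        return -1;
    rewind(stdin);                  /* clear EOF; fd 0 is now seekable */
    return 0;
}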