How can I redirect output from a sequence of commands to stdout and to a file in Tcl?

Instead of doing something like the example below, I am looking for an elegant way to redirect output from a sequence of commands to stdout and to a file simultaneously in Tcl, if possible.
e.g. a two-step method:
proc foo {} { return "bar" }
puts "I call foo"
foo
set i_f [open myfile w]
puts $i_f "I call foo"
exec echo [foo] >> myfile
If I write to the file first and then print its contents, reading the file back and printing it takes extra time, so I am wondering whether there is an elegant way to write to two channels simultaneously (to a file opened for writing and to stdout).
Moreover, I would like to know if it is possible to temporarily redirect stdout to another channel and then restore the original stdout afterwards.
There are multiple Tcl proc calls that all write to stdout (either through plain puts or through Tcllib's report package).
Instead of reviewing and changing the contents of each Tcl proc, I would like to point their output at another channel that writes to a file, and then switch stdout back to the original stdout.

You can create a new puts command to replace the original puts. The new command writes the message both to stdout and to a file.
rename puts _puts ;# Replace the Tcl built-in puts
set chan_log [open log_file w]
# Create a new puts command
proc ::puts { msg } {
    ::_puts $msg
    ::_puts $::chan_log $msg
}
puts "Hello, world!"
close $chan_log
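If some of the procedures you cannot edit also call puts with a channel argument or with -nonewline, a more defensive wrapper may be needed. The following is only a sketch (it assumes the same rename puts _puts and an open $chan_log as in the example above, plus Tcl 8.5+ for the {*} expansion); it duplicates only plain single-argument puts calls and passes everything else through untouched:
proc ::puts {args} {
    if {[llength $args] == 1} {
        # Plain single-argument call: write to stdout and to the log file
        ::_puts [lindex $args 0]
        ::_puts $::chan_log [lindex $args 0]
    } else {
        # Any other form (explicit channel, -nonewline, etc.):
        # hand it to the real puts unchanged
        ::_puts {*}$args
    }
}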

If you're on any Unix, you can use the dup command from TclX.
package require Tclx
# Redirect stdout to a file
set alternate [open /some/place/you-want-to/capture.to w+]
set saved [dup stdout]
dup $alternate stdout
try {
    # Do your code here
} finally {
    # Restore stdout to wherever it was before
    dup $saved stdout
    close $saved
}
# Now you can use [seek $alternate 0] and read the data written
It is recommended that you use try/finally when manipulating stdout (or any other standard channel), so that it is restored even in the event of an error. This prevents all sorts of confusing situations.

Related

Writing file in Tcl gives extra line

I am wondering where the newline (the 4th line in the example output) comes from in the following very simple Tcl code. Handling this with puts -nonewline is cumbersome. Is there any other Tcl command that influences this behavior?
set fid [open testout.txt w]
puts $fid 1
puts $fid 2
puts $fid 3
close $fid
Output:
#1:1
#2:2
#3:3
#4:
The puts command always appends a newline to the end of what you ask it to write, unless you pass the -nonewline option. It is a feature of that command, and is most of the time what you tend to want. (The puts command is the only standard Tcl command that writes data to a channel; chan puts is just a different name for the same thing.)
In your case, maybe you don't want the newline at the end of the final line (and should use the option). Or maybe you want to trim the newline from the end before splitting the text into lines when reading it back in. Whether you can tolerate that newline character at the end of the text data in the file depends on what you're doing with it.
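For example, if the trailing newline is the problem, a small variation of the code above (same file name, just using the option on the last write) avoids it:
set fid [open testout.txt w]
puts $fid 1
puts $fid 2
puts -nonewline $fid 3  ;# no newline is added after the final line
close $fid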

Results in Python terminal printed (i.e., using the print function) to a .txt file (ideally created with the open function)

Here's my situation: after running a Python file in VS Code I get a bunch of results in the terminal (in my case, several integers). Now, I want to export what is displayed in the terminal to a .txt file.
I have tried this:
import sys
f = open("out.text", 'w')
sys.stdout = f
print ("out.text", file=f)
f.close()
Basically, I am looking for something like this:
with open('out.txt', 'w') as f:
    print('The data I see in the terminal window', file=f)
Now, how do I get this object: 'The data I see in the terminal window'?
P.S. I am very new to programming.
You can use the write method to write to a file:
with open("out.text", "w") as f:
f.write("out.text\n")
f.write("Something else\n")
You need the \n to end the line and print the "Something else" to the next line.
You can use
with open("out.text", "a") as f:
to append the following write statements to the contents of the file.
To do both, i.e., print to the terminal and write to a file, you would need two commands, one for each:
with open("out.text", "w") as f:
print("out.text")
f.write("out.text\n")
print("Something else")
f.write("Something else\n")
Alternatively, you could redirect the terminal output when calling your script
python script.py > out.text
But this will not print anything to the terminal anymore; it redirects everything to the file.
There is at least one other question on here that deals with this.
Have a look here:
How to redirect 'print' output to a file?
And additionally search a bit more.
There are several solutions to your problem, but having it both ways is going to be tricky.
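One common approach, shown here only as a rough sketch (the Tee class name and the out.text file name are just examples, not from the answers above), is to replace sys.stdout with a small object that writes to both the real stdout and a file:
import sys

class Tee:
    """Write everything it receives to all of the underlying streams."""
    def __init__(self, *streams):
        self.streams = streams
    def write(self, data):
        for s in self.streams:
            s.write(data)
    def flush(self):
        for s in self.streams:
            s.flush()

with open("out.text", "w") as f:
    original_stdout = sys.stdout
    sys.stdout = Tee(original_stdout, f)   # print now goes to both
    try:
        print("The data I see in the terminal window")
    finally:
        sys.stdout = original_stdout       # always restore stdout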

Confusion when assigning a command-line statement to a Perl array [duplicate]

Can someone please help me? In Perl, what is the difference between:
exec "command";
and
system("command");
and
print `command`;
Are there other ways to run shell commands too?
exec
executes a command and never returns.
It's like a return statement in a function.
If the command is not found exec returns false.
It never returns true, because if the command is found it never returns at all.
There is also no point in returning STDOUT, STDERR or exit status of the command.
You can find documentation about it in perlfunc,
because it is a function.
system
executes a command and your Perl script is continued after the command has finished.
The return value is the exit status of the command.
You can find documentation about it in perlfunc.
backticks
Like system, backticks execute a command and your Perl script is continued after the command has finished.
In contrast to system, the return value is the STDOUT of the command.
qx// is equivalent to backticks.
You can find documentation about it in perlop, because unlike system and exec, it is an operator.
Other ways
What is missing from the above is a way to execute a command asynchronously.
That means your Perl script and your command run simultaneously.
This can be accomplished with open.
It allows you to read STDOUT/STDERR and write to STDIN of your command.
It is platform dependent though.
There are also several modules which can ease these tasks.
There are IPC::Open2, IPC::Open3, and IPC::Run, as well as
Win32::Process::Create if you are on Windows.
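As a rough illustration of the piped open mentioned above (this sketch is not from the original answer; the ping command is just an example, and the list form of piped open is Unix-only):
use strict;
use warnings;

# Start the command; it runs in parallel with the rest of the script.
open(my $fh, "-|", "ping", "-c", "3", "localhost")
    or die "could not start command: $!";

while (my $line = <$fh>) {
    # The child is still running while each line is processed here.
    print "got: $line";
}

close $fh;   # also collects the child's exit status into $?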
In general I use system, open, IPC::Open2, or IPC::Open3 depending on what I want to do. The qx// operator, while simple, is too constraining in its functionality to be very useful outside of quick hacks. I find open much handier.
system: run a command and wait for it to return
Use system when you want to run a command, don't care about its output, and don't want the Perl script to do anything until the command finishes.
#doesn't spawn a shell, arguments are passed as they are
system("command", "arg1", "arg2", "arg3");
or
#spawns a shell, arguments are interpreted by the shell, use only if you
#want the shell to do globbing (e.g. *.txt) for you or you want to redirect
#output
system("command arg1 arg2 arg3");
qx// or ``: run a command and capture its STDOUT
Use qx// when you want to run a command, capture what it writes to STDOUT, and don't want the Perl script to do anything until the command finishes.
#arguments are always processed by the shell
#in list context it returns the output as a list of lines
my @lines = qx/command arg1 arg2 arg3/;
#in scalar context it returns the output as one string
my $output = qx/command arg1 arg2 arg3/;
exec: replace the current process with another process.
Use exec along with fork when you want to run a command, don't care about its output, and don't want to wait for it to return. system is really just
sub my_system {
    die "could not fork\n" unless defined(my $pid = fork);
    return waitpid $pid, 0 if $pid; #parent waits for child
    exec @_; #replace child with new process
}
You may also want to read the waitpid and perlipc manuals.
open: run a process and create a pipe to its STDIN or STDERR
Use open when you want to write data to a process's STDIN or read data from a process's STDOUT (but not both at the same time).
#read from a gzip file as if it were a normal file
open my $read_fh, "-|", "gzip", "-d", "-c", $filename
    or die "could not open $filename: $!";
#write to a gzip compressed file as if it were a normal file
#(the single-string form is used here so the shell handles the > redirection)
open my $write_fh, "|-", "gzip -c > $filename"
    or die "could not open $filename: $!";
IPC::Open2: run a process and create a pipe to both STDIN and STDOUT
Use IPC::Open2 when you need to read from and write to a process's STDIN and STDOUT.
use IPC::Open2;
open2 my $out, my $in, "/usr/bin/bc"
or die "could not run bc";
print $in "5+6\n";
my $answer = <$out>;
IPC::Open3: run a process and create a pipe to STDIN, STDOUT, and STDERR
Use IPC::Open3 when you need to capture all three standard file handles of the process. I would write an example, but it works mostly the same way IPC::Open2 does, just with a slightly different argument order and a third file handle.
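For completeness, here is a minimal sketch of what such a call might look like (not from the original answer; the command and arguments are placeholders, and for large outputs you would want IPC::Run or a select loop to avoid deadlock):
use strict;
use warnings;
use IPC::Open3;
use Symbol 'gensym';

# open3 takes the child's STDIN handle first, then STDOUT, then STDERR;
# the STDERR slot must be a real glob (gensym), otherwise it is merged
# into STDOUT.
my $err = gensym;
my $pid = open3(my $in, my $out, $err, 'some-command', 'arg1');

print $in "input for the command\n";
close $in;

my @stdout_lines = <$out>;
my @stderr_lines = <$err>;

waitpid $pid, 0;              # reap the child
my $exit_status = $? >> 8;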
Let me quote the manuals first:
perldoc exec():
The exec function executes a system command and never returns-- use system instead of exec if you want it to return
perldoc system():
Does exactly the same thing as exec LIST , except that a fork is done first, and the parent process waits for the child process to complete.
In contrast to exec and system, backticks don't give you the return value but the collected STDOUT.
perldoc `String`:
A string which is (possibly) interpolated and then executed as a system command with /bin/sh or its equivalent. Shell wildcards, pipes, and redirections will be honored. The collected standard output of the command is returned; standard error is unaffected.
Alternatives:
In more complex scenarios, where you want to fetch STDOUT, STDERR or the return code, you can use well known standard modules like IPC::Open2 and IPC::Open3.
Example:
use IPC::Open2;
my $pid = open2(\*CHLD_OUT, \*CHLD_IN, 'some', 'cmd', 'and', 'args');
waitpid( $pid, 0 );
my $child_exit_status = $? >> 8;
Finally, IPC::Run from the CPAN is also worth looking at…
What's the difference between Perl's backticks (`), system, and exec?
exec -> exec "command";
system -> system("command");
backticks -> print `command`;
exec
exec executes a command and never resumes the Perl script. It's to a script like a return statement is to a function.
If the command is not found, exec returns false. It never returns true, because if the command is found, it never returns at all. There is also no point in returning STDOUT, STDERR or exit status of the command. You can find documentation about it in perlfunc, because it is a function.
E.g.:
#!/usr/bin/perl
print "Need to start exec command";
my $data2 = exec('ls');
print "Now END exec command";
print "Hello $data2\n\n";
In the above code, there are three print statements, but because exec never returns to the script, only the first print statement is executed. Also, the exec command's output is not assigned to any variable.
Here, you're only getting the output of the first print statement and of the executed ls command on standard out.
system
system executes a command and your Perl script is resumed after the command has finished. The return value is the exit status of the command. You can find documentation about it in perlfunc.
E.g.:
#!/usr/bin/perl
print "Need to start system command";
my $data2 = system('ls');
print "Now END system command";
print "Hello $data2\n\n";
In the above code, there are three print statements. As the script is resumed after the system command, all three print statements are executed.
Also, the result of running system is assigned to data2, but the assigned value is 0 (the exit code from ls).
Here, you're getting the output of the first print statement, then that of the ls command, followed by the outputs of the final two print statements on standard out.
backticks (`)
Like system, enclosing a command in backticks executes that command and your Perl script is resumed after the command has finished. In contrast to system, the return value is STDOUT of the command. qx// is equivalent to backticks. You can find documentation about it in perlop, because unlike system and exec, it is an operator.
E.g.:
#!/usr/bin/perl
print "Need to start backticks command";
my $data2 = `ls`;
print "Now END system command";
print "Hello $data2\n\n";
In the above code, there are three print statements, and all three are executed. The output of ls does not go to standard out directly; it is assigned to the variable data2 and then printed by the final print statement.
The difference between exec and system is that exec replaces your current program with 'command' and NEVER returns to your program. system, on the other hand, forks and runs 'command' and returns you the exit status of 'command' when it is done running. The backtick form runs 'command' and then returns a string containing its standard output (whatever it would have printed to the screen).
You can also use a popen-style piped open to run shell commands, and there is a Shell module ('use Shell') that gives you transparent access to typical shell commands.
Hope that clarifies it for you.

Hijacking system("/bin/sh") to run arbitrary commands

I'm trying to perform a privilege escalation attack using a binary which performs the call:
system("/bin/sh");
Is there a way to pass commands as "arguments" or such with the opened shell?
(I don't see it opening; I guess it runs and dies as soon as it has nothing to do, which is immediately.)
Edit: I cannot edit the code. It's compiled already.
If you execute
system("/bin/bash");
the shell enters interactive mode. It reads commands from standard input and writes answers to standard output. The standard input and output are inherited from the calling program (yours). Your program will wait until the shell finishes (i.e. until you enter the command exit or type ^D at the beginning of a line). The shell will run with the same privileges as the calling program.
If you control stdin
What you'll need to do is connect stdin to something that will, when read, provide a source of commands before invoking that code.
I'm writing the below in bash, but you can convert it to whatever language you actually intend to do this in:
# create a file with the commands you want to run
cat >/tmp/commands <<'EOF'
echo "Hello world"
EOF
# open that file and copy its file descriptor to FD 0 (stdin)
exec </tmp/commands
# then invoke your compiled executable that starts a shell.
run-your-command-that-starts-a-shell
If the program controls or overrides its stdin
Another option is to pass ENV with the name of a file to source:
cat >/tmp/commands <<'EOF'
echo "Hello world"
EOF
ENV=/tmp/commands run-your-command-that-starts-a-shell

Deleting contents of a file in Tcl

I'm having some trouble deleting the contents of a text file. From what I can tell, I cannot rename or delete this file and create a new one with the same name, due to permission issues with the PLM software we use. Unfortunately, I am on my own here, since no one seems to know what exactly is wrong.
I can read and write to this file, however. So I've been looking at the seek command and doing something like this:
set f [open "C:/John/myFile.txt" "a+"]
seek $f 0
set fp [tell $f]
seek $f 0 end
set end [tell $f]
# Restore current file pointer
seek $f $fp
while { $fp < $end } {
    puts -nonewline $f " "
    incr fp
}
close $f
This seems to replace all the lines with spaces, but I'm not sure this is the correct way to approach this. Can someone give me some pointers? I'm still relatively new to Tcl.
Thanks!
If you've got at least Tcl 8.5, open the file in r+ or w+ mode (experimentation may be required) and then use chan truncate:
chan truncate $f 0
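A slightly fuller sketch of that approach (the path is taken from the question; r+ assumes the file already exists):
set f [open "C:/John/myFile.txt" r+]
chan truncate $f 0   ;# the file is now empty
close $f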
If you're using 8.4 or before, you have instead to do this (and it only works for truncating to empty):
close [open $thefilename "w"]
(The w mode creates the file if it doesn't exist, and truncates it to empty on open if it does. The rest of the program might or might not like this!)
Note however that this does not reset where other channels open on the file think they are. This can lead to strange effects (such as writing at a large offset, with the OS filling out the preceding bytes with zeroes) even without locking.
close [open $path w]
And voila, an empty file. If this file does not yet exist, it will be created.
A really easy way to do this is to just overwrite your file with an empty file. For example, create an empty file (you can do this manually or using the following Tcl code):
set blank_file [open "C:/tmp/blank.txt" "w"]
close $blank_file
Then just overwrite your original file with the blank file as follows:
file rename -force "C:/tmp/blank.txt" "C:/John/myFile.txt"
Of course, you may have permissions problems if something else has grabbed the file.
You say the file is opened exclusively by another process but you can write to it?! I think you have permission problems. Are you using Linux or Unix?! (It seems it is a Windows system, but permission problems usually occur on Linux/Unix systems. It is weird, isn't it?!)
The file is not exclusively opened if you are able to read and write to it; you may simply lack the permission needed to delete the file.
Also, it is better to test the code on a file you know you have full permissions on. If the code works, you can focus on your target file. You can also Google for 'file operations in Tcl'. Read this: Manipulating Files With Tcl
