I rendered 140000 frames and want to turn them into a movie.
However, the numbering starts at 1.png, and it would be better if it started at 000001.png so the files keep their order when imported into Final Cut Express.
I used to have a program called r-name, but it was PowerPC-based, so it no longer works.
That program also handled even a batch of 300 files quite poorly, so I guess the Terminal is the better tool for this.
I have seen examples for batch renaming, but most were for things like changing the extension or a prefix.
Could someone help me with the right Terminal script? I need to finish this project ASAP; otherwise I would have just re-rendered it, but that takes 15 hours.
Not very efficient, but since you need to run it only once:
for i in $(seq 1 140000); do
    mv "$i.png" "$(printf '%06d' "$i").png"
done
EDIT: I assumed (maybe wrongly) that you were using Linux. This won't work on Windows.
EDIT: Yes, this should work in Mac OS X. Instead of typing these lines into the prompt, you can save them to a file. Usually, you would save such a file with a name like rename.sh. Then you can run it in the Terminal like this:
sh rename.sh
If you are unsure, you can change the mv line into:
echo mv "$i.png" "$(printf '%06d' "$i").png"
This will print the commands that would be executed instead of running them. If everything looks OK, change the line back to the original and run it again.
If the number of files is different, just replace 140000 with the number of the last file.
for i in *.png
do
    name=${i%.png}
    [[ $name =~ ^[0-9]+$ ]] && mv "$i" "$(printf '%06d' "$name").png"
done
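As with the echo trick shown above, you can preview what this will do by putting echo in front of the mv (a harmless dry run):
for i in *.png
do
    name=${i%.png}
    [[ $name =~ ^[0-9]+$ ]] && echo mv "$i" "$(printf '%06d' "$name").png"
done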
If you are using Windows
@echo off
setlocal enableDelayedExpansion
for %%F in (*.png) do (
set "name=00000%%~nF"
ren "%%F" "!name:~-6!.png"
)
I am trying to loop through an array of directories in a bash script so I can list each directory with its timestamps, ownership, etc. using ls -alrt. I am brushing up on bash, so I would like some feedback.
It works with declare -a for those indirect references, but for each directory it also outputs an extra listing of /home/user.
I tried using declare -n and declare -r for each directory, and that doesn't work either.
#!/bin/bash
# Bash variables
acpi=/etc/acpi
apm=/etc/apm
xml=/etc/xml
array=( acpi apm xml )
# Function to display timestamp, ownership ...
displayInfo()
{
for i in "${array[#]}"; do
declare -n curArray=$i
if [[ -d ${curArray} ]]; then
declare -a _acpi=${curArray[0]} _apm=${curArray[1]} _xml=${curArray[2]}
echo "Displaying folder apci: "
cd $_acpi
ls -alrt
read -p "Press enter to continue"
echo "Displaying folder apm: "
cd $_apm
ls -alrt
read -p "Press enter to continue"
echo "Displaying folder xml: "
cd $_xml
ls -alrt
read -p "Press enter to continue"
else
echo "Displayed Failed" >&2
exit 1
fi
done
}
displayInfo
exit 0
It outputs an extra directory listing for /home/user, and I don't want that output.
There are a lot of complex and powerful shell features being used here, but in ways that don't fit together or make sense. I'll go over the mistakes in a minute; first, let me show how I'd do it. One thing I use that you might not be familiar with is indirect variable references with ${!var} -- this is like using a nameref variable, but IMO it's clearer what's going on.
acpi=/etc/acpi
apm=/etc/apm
xml=/etc/xml
array=( acpi apm xml )
displayInfo()
{
for curDirectory in "${array[@]}"; do
if [[ -d ${!curDirectory} ]]; then
echo "Displaying folder $curDirectory:"
ls -alrt "${!curDirectory}"
read -p "Press enter to continue"
else
echo "Error: ${!curDirectory} does not exist or is not a directory" >&2
exit 1
fi
done
}
displayInfo
(One problem with this is that it does the "Press enter to continue" thing after each directory, rather than just between them. This can be fixed, but it's a little more work.)
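If you want to fix that, one way (just a sketch, reusing the array and indirection from above) is to prompt before every directory except the first:
displayInfo()
{
    local first=1
    for curDirectory in "${array[@]}"; do
        if [[ -d ${!curDirectory} ]]; then
            (( first )) || read -p "Press enter to continue"
            first=0
            echo "Displaying folder $curDirectory:"
            ls -alrt "${!curDirectory}"
        else
            echo "Error: ${!curDirectory} does not exist or is not a directory" >&2
            exit 1
        fi
    done
}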
Ok, now for what went wrong with the original. My main recommendation for you would be to try mentally stepping through your code to see what it's doing. It can help to put set -x before it, so the shell will print its interpretation of what it's doing as it runs, and see how it compares to what you expected. Let's do a short walkthrough of the displayInfo function:
for i in "${array[#]}"; do
This will loop over the contents of array, so on the first pass through the loop i will be set to "acpi". Good so far.
declare -n curArray=$i
This creates a nameref variable pointing to the other variable acpi -- this is similar to what I did with ${!var}, and basically reasonable so far. Well, with one exception: the name suggests it's an array, but acpi is a plain variable, not an array.
if [[ -d ${curArray} ]]; then
This checks whether the contents of the acpi variable, "/etc/acpi" is the path of an existing directory (which it is). Still doing good.
declare -a _acpi=${curArray[0]} _apm=${curArray[1]} _xml=${curArray[2]}
Here's where things go completely off the rails. curArray points to the variable acpi, so ${curArray[0]} etc are equivalent to ${acpi[0]} etc. But acpi isn't an array, it's a plain variable, so ${acpi[0]} gets its value, and ${acpi[1]} and ${acpi[2]} get nothing. Furthermore, you're using declare -a (declare arrays), but you're just assigning single values to _acpi, _apm, and _xml. They're declared as arrays, but you're just using them as plain variables (basically the reverse of how you're using curArray -> acpi).
There's a deeper confusion here as well. The for loop above is iterating over "acpi", "apm", and "xml", and we're currently working on "acpi". During this pass through the loop, you should only be working on acpi, not also trying to work on apm and xml. That's the point of having a for loop there.
Ok, that's the main problem here, but let me just point out a couple of other things I'd consider bad practice:
cd $_apm
ls -alrt
Using a variable reference without double-quotes around it like this invites parsing confusion; you should almost always put double-quotes, like cd "$_apm". Also, using cd in a script is dangerous because if it fails the rest of the script will execute in the wrong place. In this case, _apm is empty, so without double-quotes it's equivalent to just cd, which moves to your home directory. This is why you're getting that result. If you used cd "$_apm" it would get an error instead... but since you don't check for that it'll go ahead and still list an irrelevant location.
It's almost always better to avoid cd and its complications entirely, and just use explicit paths, like ls -alrt "$_apm".
echo "Displayed Failed" >&2
exit 1
Do you actually want to exit the entire script if one of the directories doesn't exist? It'd make more sense to me to just return 1 (which exits just the function, not the entire script), or better yet continue (which just goes on to the next iteration of the loop -- i.e. the next directory on the list). I left the exit in my version, but I'd recommend changing it.
One more similar thing:
acpi=/etc/acpi
apm=/etc/apm
xml=/etc/xml
array=( acpi apm xml )
Is there any actual reason to use this array -> variable name -> actual directory path system (and resulting indirect expansion or nameref complications), rather than just having an array of directory paths, like this?
array=( /etc/acpi /etc/apm /etc/xml )
I left the indirection in my version above, but really if there's no reason for it I'd remove the complication.
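Putting those last two suggestions together (plain paths in the array, and continue instead of exit), a simplified sketch would look something like this:
array=( /etc/acpi /etc/apm /etc/xml )

displayInfo()
{
    for curDirectory in "${array[@]}"; do
        if [[ -d $curDirectory ]]; then
            echo "Displaying folder $curDirectory:"
            ls -alrt "$curDirectory"
            read -p "Press enter to continue"
        else
            echo "Error: $curDirectory does not exist or is not a directory" >&2
            continue    # skip this entry and move on to the next directory
        fi
    done
}

displayInfo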
I've gotten some help with an earlier part of this batch file, but now I'm having trouble with the final component.
I've tried a few things with no success. I tried changing the line endings from CRLF to LF, which did nothing. I also tried rephrasing the commands a few ways, but I am still not getting anywhere. The following is my main batch file.
@echo on
REM delete deauth command file
SET OutFile="C:\temp\Out2.txt"
IF EXIST "%OutFile%" DEL "%OutFile%"
plink -v -ssh *#x.x.x.x -pw PW -m "c:\temp\WirelessDump.txt" > "C:\temp\output.txt"
setlocal
for /f %%a in (C:\temp\output.txt) do >> "Out2.txt" echo wir cli mac-address %%a deauth forced
REM Use commands in out2 to deauth
plink -v -ssh *#x.x.x.x -pw PW -m "c:\temp\Out2.txt"
pause
Below is the command found in Out2, which I think is what's actually giving trouble. The number of lines varies, but they are all this same command, just with different MACs.
wir cli mac-address xxxx.xxxx.xxxx deauth forced
If Out2 has only a single line, it runs fine, no issues. But when there are multiple lines, it fails with an error stating that the line has an invalid autocommand. It's almost as if it were reading everything as one contiguous command. As I mentioned above, I changed from CRLF to LF hoping IOS would like it better, but that failed. I've tried adding extra lines between the commands, and I've tried calling the login every time from that file.
I am hoping there is a way to tailor the commands so the lines are passed one at a time, keeping this down to a minimum number of files.
I had another thought, but it is kinda/very clunky: if there were a way to output each of those MAC deauth commands to its own file in a separate folder (out1, out2, out3), the BAT could then run all the generated files in that folder so that each one is a separate plink session.
Let me know if I need to change/add/elaborate on anything. Thanks in advance for anything you guys are willing to help with. I appreciate it.
EDIT: Martin has pointed out what the limitation actually is. It appears to be a limitation in Cisco's handling of blocks of commands sent through SSH. So I still have the same question really, I just need some help figuring out a workaround to this issue. I'm thinking the multiple-file solution I mentioned above may have some potential, but I'm too much of a noob to know how to make that work. I'll update if I have any breakthroughs though. Thanks for any contributions!
It's actually a known limitation of Cisco: it does not support multiple commands in an SSH "exec" channel command.
Quoting section 3.8.3.6 -m: read a remote command or script from a file of PuTTY/Plink manual:
With some servers (particularly Unix systems), you can even put multiple lines in this file and execute more than one command in sequence, or a whole shell script; but this is arguably an abuse, and cannot be expected to work on all servers. In particular, it is known not to work with certain ‘embedded’ servers, such as Cisco routers.
Though you can probably still feed multiple commands to Plink input:
(
echo command 1
echo command 2
echo command 3
echo exit
) | plink -v -ssh user@host -pw password > output.txt
Or you can simply use an input file:
plink -v -ssh user@host -pw password < input.txt > output.txt
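Applied to the batch file from the question, that would mean feeding the Out2.txt it already builds to Plink's standard input instead of passing it with -m, along the lines of (a sketch, reusing the question's host and password placeholders):
plink -v -ssh *@x.x.x.x -pw PW < "C:\temp\Out2.txt"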
Similar question: A way of typing multiple commands in cmd.txt file using PuTTY batch against Cisco
This works without cmd.exe and without using files:
function Invoke-PlinkCommandsIOS {
param (
[Parameter(Mandatory=$true)][string] $Host,
[Parameter(Mandatory=$true)][System.Management.Automation.PSCredential] $Credential,
[Parameter(Mandatory=$true)][string] $Commands,
[Switch] $ConnectOnceToAcceptHostKey = $false
)
$PlinkPath="$PSScriptRoot\plink.exe"
$commands | & "$PSScriptRoot\plink.exe" -ssh -2 -l $Credential.GetNetworkCredential().username -pw "$($Credential.GetNetworkCredential().password)" $Host -batch
}
Usage: don't forget your exit and terminal length 0, or it will hang
PS C:\> $Command = "terminal length 0
>> show running-config
>> exit
>> "
>>
PS C:\> Invoke-PlinkCommandsIOS -Host ace-dc1 -Credential $cred -Commands $Command
....
Sounds like your file Out2.txt has only LF at the end of each line. A simple way to convert that to CRLF is to use the MORE command, redirect its output to a new file, and then use the new file.
more Out2.txt > Out2CRLF.txt
I ran into the same issue when trying to pull the full list of ACLs on an ASA via plink in powershell.
Essentially, due to the abuse issue referenced in the documentation (https://the.earth.li/~sgtatham/putty/0.72/htmldoc/Chapter3.html#using-cmdline-m), I was getting inconsistent results when pulling the ACLs. Sometimes I would get 0, sometimes only 1 or 2, and sometimes I would get all of them. (I personally had about a 1-in-5 success rate.)
Since I would occasionally be successful, I used a while loop that catches the unsuccessful attempts and retries. Just be sure to put some delay in the while loop to prevent it from spamming SSH connections too much.
It is not a good solution, but it worked as a last resort.
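The shape of that retry loop, sketched here as a plain shell loop rather than the original PowerShell (the host, credentials, command file, output file, 100-line threshold and 5-second pause are all made-up placeholders):
# retry until the output looks complete; adjust the threshold to whatever a full ACL dump looks like
while :; do
    plink -ssh user@host -pw password -m commands.txt > acl_output.txt
    [ $(wc -l < acl_output.txt) -ge 100 ] && break
    sleep 5   # pause so we don't spam the device with SSH connections
done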
I'm trying to write a .bat script to use as a pre-commit hook in SVN. However, when I try to use the svnlook cat command with the -t option, it doesn't work; it keeps reporting syntax errors. I tried everything, including adding quotes and changing the -t option. However, if I remove the -t option, it doesn't report syntax errors.
So this is the failing script:
SET REPOS=%~1 (I want to remove the quotes of the path)
SET TXN=%2
"C:\Program Files (x86)\VisualSVN Server\bin\svnlook.exe" cat -t %TXN% %REPOS% myworkingdir/txtIwanttoread
If I do the following, they are all fine:
SET REPOS=%~1 (I want to remove the quotes of the path)
SET TXN=%2
"C:\Program Files (x86)\VisualSVN Server\bin\svnlook.exe" cat %REPOS% myworkingdir/txtIwanttoread
OR
SET REPOS=%~1 (I want to remove the quotes of the path)
SET TXN=%2
"C:\Program Files (x86)\VisualSVN Server\bin\svnlook.exe" cat -r 28 %REPOS% myworkingdir/txtIwanttoread
Somebody please help me!! Thanks!
Never mind everybody, I think I just figured it out myself. We should use SET TXN=%~2 to eliminate the quotes. Also, even after doing that, batch puts a space at the end of the TXN variable, and that is what causes the problem. So the script should look like this:
SET REPOS=%~1 (I want to remove the quotes of the path)
SET TXN=%~2
SET TXN=%TXN: =% (deblank)
"C:\Program Files (x86)\VisualSVN Server\bin\svnlook.exe" cat -t %TXN% %REPOS% myworkingdir/txtIwanttoread
I've written a program that searches for a specific file: the user enters a starting path and a filename, and the program prints the file's details if it exists, or prints "not found" otherwise.
The code is based on recursion. I want to test it with a large folder hierarchy, let's say 1000 folders, one inside the other, with a file called david.txt inside the 1000th folder.
How can I do that without spending the next 3 hours creating 1000 folders by hand?
The code is written in C, under Ubuntu.
Thanks
Type the following in your shell:
mkdir -p folder$( seq -s "/folder" 999 )/folder1000
Then you can enter this folder:
cd folder$( seq -s "/folder" 999 )/folder1000
and create a file:
touch david.txt
and come back to your previous dir:
cd -
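Afterwards you can sanity-check where david.txt actually ended up before pointing your C program at the tree:
find folder1 -name david.txt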
As some comments described, I would use the shell for such purposes:
#!/bin/sh
for i in $(seq 1000)
do
mkdir tst
cd tst
done
touch david.txt
On a related topic, let me suggest this article, which shows how shell scripting can sometimes solve your problems in much less development time, especially for ad-hoc problems like this one.
Simple bash loop:
$ pushd .
$ for i in {1..1000}; do
mkdir d$i;
cd d$i;
done
$ touch david.txt
$ popd
Use (almost) the same code to create the folders and files. Once that is working, the searching/reporting is almost done as well. It's sort of self-testing. :)
I just got knocked back after our server was updated from Debian 4 to 5.
We switched to a UTF-8 environment, and now we have problems getting text to display correctly in the browser, because all the files are in non-UTF-8 encodings like ISO-8859-1, ASCII, etc.
I tried many different scripts.
The first one I tried was iconv. That one doesn't work: it changes the content, but the file's encoding is still non-UTF-8.
Same problem with enca, encamv, convmv and some other tools I installed via apt-get.
Then I found some Python code that uses the chardet UniversalDetector module to detect a file's encoding (which works fine), but saving the file as UTF-8 with the unicode class or the codecs module doesn't work, and doesn't raise any errors either.
The only way I found to get a file and its content converted to UTF-8 is vi.
These are the steps I do for one file:
vi filename.php
:set bomb
:set fileencoding=utf-8
:wq
That's it. That one works perfectly. But how can I get this running via a script?
I would like to write a script (Linux shell) that traverses a directory, takes all PHP files, and converts them using vi with the commands above.
Since vi is an interactive application, I do not know how to do something like this:
"vi --run-command=':set bomb, :set fileencoding=utf-8' filename.php"
Hope someone can help me.
This is the simplest way I know of to do this easily from the command line:
vim +"argdo se bomb | se fileencoding=utf-8 | w" $(find . -type f -name *.php)
Or better yet if the number of files is expected to be pretty large:
find . -type f -name '*.php' | xargs vim +"argdo se bomb | se fileencoding=utf-8 | w"
You could put your commands in a file, let's call it script.vim:
set bomb
set fileencoding=utf-8
wq
Then you invoke Vim with the -S (source) option to execute the script on the file you wish to fix. To do this on a bunch of files you could do
find . -type f -name "*.php" -exec vim -S script.vim {} \;
You could also put the Vim commands on the command line using the + option, but I think it may be more readable like this.
Note: I have not tested this.
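For reference, the inline form mentioned above (putting the commands on the command line with + instead of a -S script) would be something like this, equally untested:
find . -type f -name "*.php" -exec vim +"set bomb | set fileencoding=utf-8 | wq" {} \;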
You may actually want set nobomb (BOM = byte order mark), especially in the [not windows] world.
e.g., I had a script that didn't work because there was a byte order mark at the start. It isn't usually displayed in editors (even with set list in vi), or on the console, so it's difficult to spot.
The file looked like this
#!/usr/bin/perl
...
But trying to run it, I get
./filename
./filename: line 1: #!/usr/bin/perl: No such file or directory
Not displayed, but sitting at the start of the file, is the 3-byte BOM. So, as far as Linux is concerned, the file doesn't start with #!
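If you want to confirm the BOM is really there, dump the first three bytes; the UTF-8 BOM is the sequence EF BB BF:
head -c 3 filename | od -An -tx1
 ef bb bf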
The solution is
vi filename
:set nobomb
:set fileencoding=utf-8
:wq
This removes the BOM from the start of the file, leaving plain UTF-8 so the #! line is recognized again.
NB Windows uses the BOM to identify a text file as being utf8, rather than ANSI. Linux (and the official spec) doesn't.
The accepted answer will keep the last file open in Vim. This problem can be easily resolved using the -c option of Vim,
vim +"argdo set bomb | set fileencoding=utf-8 | w" -c ":q" file1.txt file2.txt
If you only need to process one file, the following will also work:
vim -c ':set bomb' -c ':set fileencoding=utf-8' -c ':wq' file1.txt