I'm trying to write a wrapper script for songbook generation using lilypond, latex and sejda-console (for the PDF part). Everything works so far, but I have a problem with sejda that is driving me nuts.
Here is the relevant part of my code:
for %%i in (%f%) do (
sejda-console.bat extractbybookmarks -f ".\%%~ni.pdf" -o "export\%%~ni\%title%.pdf" -l 2 -p [BOOKMARK_NAME] -e "%title%" --overwrite
)
where %f% is a ";"-separated list of files. The command works for the first file but fails for all subsequent ones, and I can't find any difference between the commands that sejda receives. Here is my console output:
make_sheet.bat -t "Live it up" --supress *.lytex
Configuring Sejda 3.2.30
Starting execution with arguments: 'extractbybookmarks -f .\book_drums.pdf -o export\book_drums\Live it up.pdf -l 2 -p [BOOKMARK_NAME] -e Live it up --overwrite'
Java version: '1.8.0_151'
Validating parameters.
Starting task (org.sejda.impl.sambox.ExtractByOutlineTask#28701274) execution.
Opening C:\Users\skr1_\Desktop\Tools\Songbook\Sample\out\.\book_drums.pdf
Retrieving outline information for level 2 and match regex Live it up
Starting extraction by outline, level 2 and match regex Live it up
Found 0 inherited images and 0 inherited fonts potentially unused
Starting extracting Live it up pages 9 9
Created output temporary buffer C:\Users\skr1_\Desktop\Tools\Songbook\Sample\out\export\book_drums\.sejdaTmp2789047920522272436.tmp
Appended relevant outline items
Filtering annotations
Skipped acroform merge, nothing to merge
Ending extracting Live it up
Task progress: 0% done
Moving C:\Users\skr1_\Desktop\Tools\Songbook\Sample\out\export\book_drums\.sejdaTmp2789047920522272436.tmp to C:\Users\skr1_\Desktop\Tools\Songbook\Sample\out\export\book_drums\Live it up.pdf.
Extraction completed and outputs written to org.sejda.model.output.FileOrDirectoryTaskOutput#478190fc[C:\Users\skr1_\Desktop\Tools\Songbook\Sample\out\export\book_drums\Live it up.pdf]
Task (org.sejda.impl.sambox.ExtractByOutlineTask#28701274) executed in 0 seconds
Completed execution
C:\Users\skr1_\Desktop\Tools\Songbook\Sample>(sejda-console.bat extractbybookmarks -f ".\book_general.pdf" -o "export\book_general\Live it up.pdf" -l 2 -p [BOOKMARK_NAME] -e "Live it up" --overwrite )
Configuring Sejda 3.2.30
Starting execution with arguments: 'extractbybookmarks -f .\book_general.pdf -o export\book_general\Live it up.pdf -l 2 -p [BOOKMARK_NAME] -e Live it up --overwrite'
Java version: '1.8.0_151'
Invalid value (File '.\book_general.pdf' does not exist): --files -f value... : pdf files to operate on. A list of existing pdf files (EX. -f /tmp/file1.pdf or -f /tmp/password_protected_file2.pdf:secret123) (required)
Invalid value (File '.\book_general.pdf' does not exist): --files -f value... : pdf files to operate on. A list of existing pdf files (EX. -f /tmp/file1.pdf or -f /tmp/password_protected_file2.pdf:secret123) (required)
C:\Users\skr1_\Desktop\Tools\Songbook\Sample>(sejda-console.bat extractbybookmarks -f ".\book_guitar.pdf" -o "export\book_guitar\Live it up.pdf" -l 2 -p [BOOKMARK_NAME] -e "Live it up" --overwrite )
Configuring Sejda 3.2.30
Starting execution with arguments: 'extractbybookmarks -f .\book_guitar.pdf -o export\book_guitar\Live it up.pdf -l 2 -p [BOOKMARK_NAME] -e Live it up --overwrite'
Java version: '1.8.0_151'
Invalid value (File '.\book_guitar.pdf' does not exist): --files -f value... : pdf files to operate on. A list of existing pdf files (EX. -f /tmp/file1.pdf or -f /tmp/password_protected_file2.pdf:secret123) (required)
Invalid value (File '.\book_guitar.pdf' does not exist): --files -f value... : pdf files to operate on. A list of existing pdf files (EX. -f /tmp/file1.pdf or -f /tmp/password_protected_file2.pdf:secret123) (required)
Even worse, if I copy the commands that sejda receives and run them manually as a new command, everything works fine.
I suspect that something happens to the working directory in between, but I can't figure out what.
Also note that the console echoes the command for the subsequent passes of the for loop (the lines starting with "(sejda-console.bat ..."), even though echo is off; it is not echoed for the first pass.
I'm not an expert at programming, especially not at batch, so any help would be much appreciated.
I suspect that sejda-console.bat is changing the current directory.
Try
pushd .
call sejda-console.bat ...
popd
pushd . records the current directory and popd restores it after the call, so a cd inside sejda-console.bat can no longer affect the next pass of the loop.
I am writing a script in Lua 5.1 for use with a game engine (EDGE).
I need my script to copy about 20 files into a .miz file (which is really a zipped folder with a set structure), navigating that structure and copying the files in from a non-zipped folder on the hard drive.
Because Windows 11 is the future, I need to use NanaZip rather than 7z, which isn't supported on Windows 11.
However, all the examples I've found use Lua to zip up files, not to insert non-zipped files INTO an existing zip file without unzipping it.
Is this even possible?
Like #koyaanisqatsi, I tried it with 7z. You haven't answered our question about why 7z should be avoided, nor whether you are even allowed to use os.execute, but this should provide a good starting point:
os.execute("7z a yourZip.zip yourFile.png")
Where a is the flag for Add.
See the manual for other flags like compression: https://linux.die.net/man/1/7z
Windows 11 also ships with tar, which has the -r (Add/Replace) and -u (Update) options:
D:\temp>tar h
tar(bsdtar): manipulate archive files
First option must be a mode specifier:
-c Create -r Add/Replace -t List -u Update -x Extract
Common Options:
-b # Use # 512-byte records per I/O block
-f <filename> Location of archive (default \\.\tape0)
-v Verbose
-w Interactive
Create: tar -c [options] [<file> | <dir> | #<archive> | -C <dir> ]
<file>, <dir> add these items to archive
-z, -j, -J, --lzma Compress archive with gzip/bzip2/xz/lzma
--format {ustar|pax|cpio|shar} Select archive format
--exclude <pattern> Skip files that match pattern
-C <dir> Change to <dir> before processing remaining files
#<archive> Add entries from <archive> to output
List: tar -t [options] [<patterns>]
<patterns> If specified, list only entries that match
Extract: tar -x [options] [<patterns>]
<patterns> If specified, extract only entries that match
-k Keep (don't overwrite) existing files
-m Don't restore modification times
-O Write entries to stdout, don't restore to disk
-p Restore permissions (including ACLs, owner, file flags)
bsdtar 3.5.2 - libarchive 3.5.2 zlib/1.2.5.f-ipp bz2lib/1.0.6
(The cmd.exe above was opened from Lua with os.execute('cmd').)
You can extract a ZIP with it, but not create one as far as I know.
(tar -xf archive.zip)
But would it be a problem for you to use TAR instead of ZIP?
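If TAR is acceptable, adding files to an existing archive is a one-liner with the -r mode shown above; the file names here are placeholders:

tar -rf mission.tar kneeboard1.png kneeboard2.png

Note that -r only appends to uncompressed .tar archives; it does not work on .zip or compressed archives.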
I'm a C programmer on Linux.
I wrote a program that saves an image to /srv/ftp/preview.png, which is updated frequently, and I want to create a movie from these updates.
The timestamps are important to me: if the image updates after 3.654 seconds, I want the movie to show that update (frame) after 3.654 seconds too.
I searched the Internet for several hours but couldn't find any solution.
I know about ffmpeg, but it converts a set of images (not one repeatedly updated image) into a movie, without millisecond timestamps.
I found this question but it doesn't seem useful in this case.
Is there any tool to do that? If not, please point me to a C API so I can write a program myself.
You can try to use inotify to watch for modifications of the file and ffmpeg to append each new version of the image to the movie:
#!/bin/bash
FRAMERATE=1
FILE="/path/to/image.jpg"
while true
do
    inotifywait -e modify "$FILE"
    echo "file changed"
    # create a temporary copy of the image (with a .jpg suffix so ffmpeg can identify it)
    TMP=$(mktemp --suffix=.jpg)
    cp "$FILE" "$TMP"
    # append the copied frame to the movie
    # adapted from https://video.stackexchange.com/q/17228
    # if the movie already exists
    if [ -f movie.mp4 ]
    then
        # append the image to a new movie
        ffmpeg -y -i movie.mp4 -loop 1 -f image2 -t $FRAMERATE -i "$TMP" -filter_complex "[0:v] [1:v] concat=n=2:v=1 [v]" -map "[v]" newmovie.mp4
        # replace the old movie with the new one
        mv newmovie.mp4 movie.mp4
    else
        # create a movie from a single image
        ffmpeg -framerate 1 -t $FRAMERATE -i "$TMP" movie.mp4
    fi
    rm "$TMP"
done
This script will certainly need to be adapted (in particular if your frame rate is high), but I think you can start playing with it.
One downside is that the movie creation becomes slower and slower as the movie grows.
You should store the images for a certain time span in a directory and convert them all at once (for example once an hour or once a day), as sketched below.
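A hedged sketch of that batch conversion, assuming ffmpeg's concat demuxer: each stored image gets its own display duration, so the millisecond timing from the question is preserved. The file names and durations below are made up; frames.txt would be generated from the captured timestamps:

ffconcat version 1.0
file 'frame_0001.png'
duration 3.654
file 'frame_0002.png'
duration 1.250
file 'frame_0003.png'
duration 2.000
file 'frame_0003.png'

The last file is listed twice because the concat demuxer may ignore the duration of the final entry. The whole directory's worth of frames can then be converted in one go:

ffmpeg -f concat -safe 0 -i frames.txt -vsync vfr -pix_fmt yuv420p movie.mp4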
If you want to serve a stream instead of creating a video file, you can look at https://stackoverflow.com/a/31705978/1212012
I am using Solr 5 and I want to index documents that have no file extensions. Unfortunately, renaming the files to add extensions is not an option in my case.
The command I am using is simply:
$bin/post -c mycore ../foldertobescaned -type application/pdf
The command works fine for documents that do have an extension, but otherwise I get:
Entering auto mode. File endings considered are xml,json,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
If renaming the files is not an option, you can use the following script as a workaround until Solr improves its post method. It is a simple bash for loop that submits each file individually and works regardless of the file extension. Note that this script will be slower than using post on the whole folder, because each individual file transfer needs to be initialized.
Save the script below as postFolderToSolr.sh inside your Solr folder (so that Solr's bin/ folder is a subdirectory), make it executable with chmod +x postFolderToSolr.sh and then use it as follows: ./postFolderToSolr.sh mycore /home/user1/foldertobescaned/ application/pdf
Using no arguments or the wrong number of arguments prints a short usage message as help.
#!/bin/bash
set -o nounset

if [ "$#" -ne 3 ]
then
    echo "Post contents of a folder to Solr."
    echo
    echo "Usage: postFolderToSolr.sh <collectionName> </path/to/folder> <MIME>"
    echo
    exit 1
fi

collection=$1
inputPath=${2%/}   # remove trailing / if it exists
mime=$3

for element in "$inputPath"/*; do
    bin/post -c "$collection" -type "$mime" "$element"
done
Is it possible to mass rename objects on Google Cloud Storage using gsutil (or some other tool)? I am trying to figure out a way to rename a bunch of images from *.JPG to *.jpg.
Here is a native way to do this in bash, with a line-by-line explanation of the code below:
gsutil ls gs://bucket_name/*.JPG > src-rename-list.txt
sed 's/\.JPG/\.jpg/g' src-rename-list.txt > dest-rename-list.txt
paste -d ' ' src-rename-list.txt dest-rename-list.txt | sed -e 's/^/gsutil\ mv\ /' | while read line; do bash -c "$line"; done
rm src-rename-list.txt; rm dest-rename-list.txt
The solution builds two lists, one for the source and one for the destination file names (to be used in the "gsutil mv" command):
gsutil ls gs://bucket_name/*.JPG > src-rename-list.txt
sed 's/\.JPG/\.jpg/g' src-rename-list.txt > dest-rename-list.txt
The line "gsutil mv " and the two files are concatenated line by line using the below code:
paste -d ' ' src-rename-list.txt dest-rename-list.txt | sed -e 's/^/gsutil\ mv\ /'
This then runs each line in a while loop:
while read line; do bash -c "$line"; done
Lastly, clean up and delete the files created:
rm src-rename-list.txt; rm dest-rename-list.txt
The above has been tested against a working Google Storage bucket.
https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames
gsutil supports URI wildcards
EDIT
From the gsutil 3.0 release notes:
As part of the bucket sub-directory support we changed the * wildcard to match only up to directory boundaries, and introduced the new ** wildcard...
Do you have directories under the bucket? If so, you may need to go down into each directory, or use **.
gsutil -m mv gs://my_bucket/**.JPG gs://my_bucket/**.jpg
or
gsutil -m mv gs://my_bucket/mydir/*.JPG gs://my_bucket/mydir/*.jpg
EDIT
gsutil doesn't support wildcards in the destination so far (as of 4/12/'14), and neither does the API, so at the moment you need to retrieve the list of all JPG files and rename each file.
Python example:
import subprocess

files = subprocess.check_output("gsutil ls gs://my_bucket/*.JPG", shell=True)
files = files.split("\n")[:-1]
for f in files:
    subprocess.call("gsutil mv %s %s" % (f, f[:-3] + "jpg"), shell=True)
please note that this would take hours.
gsutil does not support parallelized mass copy/rename.
You have two options:
use a Dataflow process to do the operation
or
use GNU parallel to launch it using several processes
If you use GNU Parallel, it is better to deploy a new instance to do the mass copy/rename operation:
First: make a list of the files you want to copy/rename (a file with source and destination separated by a space or tab), like this (one way to generate this list is sketched after these steps):
gs://origin_bucket/path/file gs://dest_bucket/new_path/new_filename
Second: Launch a new compute instance
Third: log in to that instance and install GNU parallel:
sudo apt install parallel
Fourth: authorize yourself with Google (gcloud auth login), because the Compute Engine service account might not have permission to move/rename the files:
gcloud auth login
Finally, run the copy (gsutil cp) or move (gsutil mv) operation with parallel:
parallel -j 20 --colsep ' ' gsutil mv {1} {2} :::: file_with_source_destination_uris.txt
This will run 20 parallel instances of the gsutil mv operation.
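A hedged sketch of the first step, assuming a flat bucket layout and the .JPG-to-.jpg rename from the question (the intermediate file names are made up; adjust the bucket path and the sed expression to your own structure):

gsutil ls 'gs://origin_bucket/path/*.JPG' > src-list.txt
sed 's/\.JPG$/.jpg/' src-list.txt > dest-list.txt
paste -d ' ' src-list.txt dest-list.txt > file_with_source_destination_uris.txt

The resulting file has one "source destination" pair per line, which is exactly what the parallel command above consumes.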
Yes, it is possible: see "Move/rename objects and/or subdirectories" in the gsutil documentation.
I want to be able to log FFmpeg processes because I am trying to work out how long a minute of video takes to convert, to help with capacity planning of my video encoding server. How do I enable logging, and where is the log file saved? I have FFmpeg installed on a CentOS LAMP machine.
FFmpeg does not write to a specific log file, but rather sends its output to standard error. To capture that, you need to either
capture and parse it as it is generated
redirect standard error to a file and read it after the process has finished
Example for std error redirection:
ffmpeg -i myinput.avi {a-bunch-of-important-params} out.flv 2> /path/to/out.txt
Once the process is done, you can inspect out.txt.
It's a bit trickier to do the first option, but it is possible. (I've done it myself. So have others. Have a look around SO and the net for details.)
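A minimal sketch of the first option (file names are placeholders): ffmpeg separates its progress updates with carriage returns rather than newlines, so translate those to newlines before reading them line by line:

ffmpeg -i myinput.avi out.flv 2>&1 | stdbuf -oL tr '\r' '\n' | grep --line-buffered '^frame=' |
while IFS= read -r line
do
    # each $line is one progress update, e.g. "frame= 250 fps= 25 ... time=00:00:10.00 ..."
    echo "progress: $line"
done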
I found the following in the ffmpeg docs. Hope this helps! :)
Reference: http://ffmpeg.org/ffmpeg.html#toc-Generic-options
‘-report’
Dump full command line and console output to a file named program-YYYYMMDD-HHMMSS.log in the current directory. This file can be useful for bug reports. It also implies -loglevel verbose.
Note: setting the environment variable FFREPORT to any value has the same effect.
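For example (file names are placeholders), either of the following enables the report; the first drops a log named like ffmpeg-YYYYMMDD-HHMMSS.log into the current directory:

ffmpeg -report -i input.avi output.mp4
# or, setting the environment variable instead (here also choosing the log file name):
FFREPORT=file=ffreport.log ffmpeg -i input.avi output.mp4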
I found the answer.
First, put it in the presets. I have this example, "Output format MPEG2 DVD HQ":
-vcodec mpeg2video -vstats_file MFRfile.txt -r 29.97 -s 352x480 -aspect 4:3 -b 4000k -mbd rd -trellis -mv0 -cmp 2 -subcmp 2 -acodec mp2 -ab 192k -ar 48000 -ac 2
If you want a report, include -vstats_file MFRfile.txt in the presets as in the example. This produces a report file located alongside your source file.
You can give it any name you want. I solved my problem (which I had asked about many times on this forum) by reading a complete .docx about MPEG properties; now I can drive my progress bar by reading the generated txt file.
Regards.
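For reference, a hedged sketch of those preset options on a full command line (the input/output names are made up, and the bitrate options use the newer -b:v/-b:a spellings):

ffmpeg -i input.avi -vcodec mpeg2video -vstats_file MFRfile.txt -r 29.97 -s 352x480 -aspect 4:3 -b:v 4000k -acodec mp2 -b:a 192k -ar 48000 -ac 2 output.mpg

MFRfile.txt then grows line by line as frames are encoded, which is what the progress bar described above reads.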
ffmpeg logs to stderr, and can log to a file with a different log-level from stderr. The -report command-line option doesn't give you control of the log file name or the log level, so setting the environment variable is preferable.
(-v is a synonym for -loglevel. Run ffmpeg -v help to see the levels. Run ffmpeg -h full | less to see EVERYTHING. Or consult the online docs, or their wiki pages like the h.264 encode guide).
#!/bin/bash
of=out.mkv
FFREPORT="level=32:file=$of.log" ffmpeg -v verbose -i src.mp4 -c:a copy -preset slower -c:v libx264 -crf 21 "$of"
That will transcode src.mp4 with x264, set the log level for stderr to "verbose", and set the log level for out.mkv.log to "info".
(AV_LOG_WARNING=24, AV_LOG_INFO=32, AV_LOG_VERBOSE=40, etc.). Support for this was added 2 years ago, so you need a non-ancient version of ffmpeg. (Always a good idea anyway, for security / bugfixes and speedups)
A few codecs, like -c:v libx265, write directly to stderr instead of using ffmpeg's logging infrastructure. So their log messages don't end up in the report file. I assume this is a bug / TODO-list item.
To log stderr, while still seeing it in a terminal, you can use tee(1).
If you use a log level that includes status line updates (the default -v info, or higher), they will be included in the log file, separated with ^M (carriage return aka \r). There's no log level that includes encoder stats (like SSIM) but not status-line updates, so the best option is probably to filter that stream.
If you don't want to filter (e.g. so the fps / bitrate at each status-update interval is there in the file), you can use less -r to pass them through directly to your terminal so you can view the files cleanly. If you have .enc logs from several encodes that you want to flip through, less -r ++G *.enc works great. (++G means start at the end of the file, for all files). With single-key key bindings like . and , for next file and previous file, you can flip through some log files very nicely. (the default bindings are :n and :p).
If you do want to filter, sed 's/.*\r//' works perfectly for ffmpeg output. (In the general case, you need something like vt100.py, but not for just carriage returns). There are (at least) two ways to do this with tee + sed: tee to /dev/tty and pipe tee's output into sed, or use a process substitution to tee into a pipe to sed.
# pass stdout and stderr through to the terminal,
## and log a filtered version to a file (with only the last status-line update).
of="$1-x265.mkv"
ffmpeg -v info -i "$1" -c:a copy -c:v libx265 ... "$of" |& # pipe stdout and stderr
tee /dev/tty | sed 's/.*\r//' >> "$of.enc"
## or with process substitution where tee's arg will be something like /dev/fd/123
ffmpeg -v info -i "$1" -c:a copy -c:v libx265 ... "$of" |&
tee >(sed 's/.*\r//' >> "$of.enc")
For testing a few different encode parameters, you can make a function like this one that I used recently to test some stuff. I had it all on one line so I could easily up-arrow and edit it, but I'll un-obfuscate it here. (That's why there are ;s at the end of each line)
ffenc-testclip(){
# v should be set by the caller, to a vertical resolution. We scale to WxH, where W is a multiple of 8 (-vf scale=-8:$v)
db=0; # convenient to use shell vars to encode settings that you want to include in the filename and the ffmpeg cmdline
of=25s#21.15.${v}p.x265$pre.mkv;
[[ -e "$of.enc" ]]&&echo "$of.enc exists"&&return; # early-out if the file exists
# encode 25 seconds starting at 21m15s (or the keyframe before that)
nice -14 ffmpeg -ss $((21*60+15)) -i src.mp4 -t 25 -map 0 -metadata title= -color_primaries bt709 -color_trc bt709 -colorspace bt709 -sws_flags lanczos+print_info -c:a copy -c:v libx265 -b:v 1500k -vf scale=-8:$v -preset $pre -ssim 1 -x265-params ssim=1:cu-stats=1:deblock=$db:aq-mode=1:lookahead-slices=0 "$of" |&
tee /dev/tty | sed 's/.*\r//' >> "$of.enc";
}
# and use it with nested loops like this.
for pre in fast slow; do for v in 360 480 648 792;do ffenc-testclip ;done;done
less -r ++G *.enc # -r is useful if you didn't use sed
Note that it tests for existence of the .enc log file to avoid spewing extra garbage into it if it already exists. Even so, I used an append (>>) redirect.
It would be "cleaner" to write a shell function that took args instead of looking at shell variables, but this was convenient and easy to write for my own use. That's also why I saved space by not properly quoting all my variable expansions. ($v instead of "$v")
It appears that if you add this to the command line:
-loglevel debug
or
-loglevel verbose
you get more verbose debugging output on the command line.
You can get more debugging info by simply adding the option -loglevel debug; the full command would be:
ffmpeg -loglevel debug -i INPUT OUTPUT
You must declare the report file as a variable for the console.
The problem is that none of the documentation examples you can find actually run, so it took me a day to find the right way.
Example for batch/console:
cmd.exe /K set FFREPORT=file='C:\ffmpeg\proto\test.log':level=32 && C:\ffmpeg\bin\ffmpeg.exe -loglevel warning -report -i inputfile outputfile
Example in JavaScript:
var reportlogfile = "cmd.exe /K set FFREPORT=file='C:\ffmpeg\proto\" + filename + ".log':level=32 && C:\ffmpeg\bin\ffmpeg.exe" .......;
You can change the directory and filename however you want.
Frank from Berlin
If you just want to know how long the command takes to execute, consider using the time command. For example: time ffmpeg -i myvideoofoneminute.aformat out.anotherformat
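A brief note on reading the result (file names are placeholders): time prints three figures once the command finishes, and "real" is the wall-clock number you want for capacity planning:

time ffmpeg -i one_minute_clip.avi out.mp4
# real -> wall-clock time taken
# user -> CPU time in user space (can exceed "real" on multi-core encodes)
# sys  -> CPU time spent in the kernel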