How to concatenate two AAC files smoothly?

To save time, I want to segment and transcode a large video file on multiple computers.
I use ffmpeg to transcode the segments of the large video file with:
ffmpeg -i large_movie.mp4 -ss 00:00:00 -t 00:01:00 -acodec libfaac seg0.flv
ffmpeg -i large_movie.mp4 -ss 00:01:00 -t 00:01:00 -acodec libfaac seg1.flv
...
And concatenate the segments with:
ffmpeg -f concat -i concat.conf -vcodec copy -acodec copy result.flv
The content of the concat.conf:
ffconcat version 1.0
file seg0.flv
file seg1.flv
...
Now I get a result file, result.flv, containing all the segments. But when I play it, the audio is momentarily interrupted at each segment boundary! I am sure the segments are contiguous and the timestamps are right.
When I decode the AAC stream of a segment file to WAV and open it in CoolEdit, I find that the sample values at the front and the end of the file are very small (mute?). There are about 21 ms of 'mute' samples at the front of the file, and about 3 ms at the end.
Do these mute samples cause the momentary interruption? How can I concatenate media files containing AAC smoothly?
After further testing, I found that the same interruption appears if you split a WAV file into small WAV files, then encode those small WAV files to small AAC files with faac:
faac -P -R 48000 -B 16 -C 2 -X -o 1.aac 1.wav
faac -P -R 48000 -B 16 -C 2 -X -o 2.aac 2.wav
faac -P -R 48000 -B 16 -C 2 -X -o 3.aac 3.wav
faac -P -R 48000 -B 16 -C 2 -X -o 4.aac 4.wav
faac -P -R 48000 -B 16 -C 2 -X -o 5.aac 5.wav
The console output looks like this:
[hugeice#fedora19 trans]$ faac -P -R 48000 -B 16 -C 2 -X -o 5.aac 5.wav
Freeware Advanced Audio Coder
FAAC 1.28
Quantization quality: 100
Bandwidth: 16000 Hz
Object type: Low Complexity(MPEG-2) + M/S
Container format: Transport Stream (ADTS)
Encoding 5.wav to 5.aac
frame | bitrate | elapsed/estim | play/CPU | ETA
Then concatenate these small AAC files into one big AAC file with:
cat 1.aac 2.aac 3.aac 4.aac 5.aac > big.aac
Now, if you play big.aac, there is a momentary interruption at each segment boundary!
So the question becomes: how can one encode segments and concatenate the resulting AAC files smoothly?
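A clue worth noting: at 48 kHz, the roughly 21 ms of near-silence at the front is almost exactly 1024 samples (1024 / 48000 ≈ 21.3 ms), the one-frame encoder delay ("priming" samples) that AAC encoders prepend, and the few ms at the end are padding that fills the final frame. Concatenating raw ADTS streams keeps every segment's priming and padding, which is what you hear at the boundaries. Below is a minimal sketch of one workaround, assuming 48 kHz 16-bit stereo, exactly 60-second segments, and the usual 1024-sample delay (the exact delay depends on the encoder; file names are illustrative): decode each segment back to PCM, cut off the priming and padding, join the PCM, and encode once so only a single priming/padding pair remains.
# skip the assumed priming (1024 samples x 2 ch x 2 bytes = 4096 bytes)
# and keep exactly 60 s of payload (60 x 48000 x 4 bytes) per segment
for f in seg0 seg1 seg2; do
    ffmpeg -i "$f.flv" -f s16le -ar 48000 -ac 2 - |
        tail -c +4097 | head -c $((60 * 48000 * 4)) >> joined.pcm
done
# encode the joined PCM in one pass
ffmpeg -f s16le -ar 48000 -ac 2 -i joined.pcm -acodec libfaac joined.aac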

Related

How can I loop one frame with ffmpeg? All the other frames should point to the first with no changes, maybe like a recursion

I want to make a long video from a single image in ffmpeg.
It needs to encode quickly, and the final video should have a small file size.
Is it possible to fill the video with frames that point to the previous (or the first) frame with no changes?
I tried this command, but it was slow and made a big file:
ffmpeg -loop 1 -i image.jpg -c:v libx264 -tune stillimage -shortest -preset ultrafast -t 3600 output.mp4
You can do this in two steps:
1) Encode a short loop, say, 30 seconds.
ffmpeg -loop 1 -framerate 5 -i image.jpg -pix_fmt yuv420p -c:v libx264 -t 30 looped.mp4
2) Loop the encode for the desired duration.
ffmpeg -stream_loop -1 -i looped.mp4 -c copy -t 3600 output.mp4
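Step 2 only remuxes the short encode (-c copy), so it runs in seconds and the output stays small. To sanity-check the result's duration you can use ffprobe (a quick check, not part of the original answer):
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 output.mp4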
Maybe this will help - enter it all on a single line and it will stream a single image (called test.jpg) repeatedly. See: https://stackoverflow.com/a/71885708/18795194 for my post and an explanation of the parameters.
ffmpeg
-loop 1
-fflags +genpts
-framerate 1/30
-i test.jpg
-c:v libx264
-vf fps=25
-pix_fmt yuvj420p
-crf 30
-f fifo -attempt_recovery 1 -recovery_wait_time 1
-f flv rtmp://localhost:5555/video/test

FFMPEG vstack and loop

I would like to stack 4 videos as in the code below, and add a loop for top_left.mp4, which, for example, is shorter than the others.
I can't find a way to add the loop option without getting errors.
Could you help me please?
ffmpeg -i top_left.mp4 -i top_right.mp4 -i bottom_left.mp4 -i bottom_right.mp4 \
-lavfi "[0:v][1:v]hstack[top];[2:v][3:v]hstack[bottom];[top][bottom]vstack" \
2by2grid.mp4
Use -stream_loop -1 and add shortest=1 to the first hstack; since -stream_loop -1 repeats the looped input indefinitely, shortest=1 makes the stack end when its other input runs out:
ffmpeg -stream_loop -1 -i top_left.mp4 -i top_right.mp4 -i bottom_left.mp4 -i bottom_right.mp4 -lavfi "[0:v][1:v]hstack=shortest=1[top];[2:v][3:v]hstack[bottom];[top][bottom]vstack" 2by2grid.mp4
xstack version (the layout string places each input at offsets derived from the widths and heights of the preceding inputs):
ffmpeg -stream_loop -1 -i top_left.mp4 -i top_right.mp4 -i bottom_left.mp4 -i bottom_right.mp4 -lavfi "[0][1][2][3]xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0:shortest=1" 2by2grid.mp4

Slurm Array Job: output file on same node possible?

I have a computing cluster with four nodes A, B, C and D and Slurm Version 17.11.7. I am struggling with Slurm array jobs. I have the following bash script:
#!/bin/bash -l
#SBATCH --job-name testjob
#SBATCH --output output_%A_%a.txt
#SBATCH --error error_%A_%a.txt
#SBATCH --nodes=1
#SBATCH --time=10:00
#SBATCH --mem-per-cpu=50000
FOLDER=/home/user/slurm_array_jobs/
mkdir -p $FOLDER
cd ${FOLDER}
echo $SLURM_ARRAY_TASK_ID > ${SLURM_ARRAY_TASK_ID}
The script generates the following files:
output_*.txt,
error_*.txt,
files named according to ${SLURM_ARRAY_TASK_ID}
I run the bash script on my computing cluster node A as follows
sbatch --array=1-500 example_job.sh
The 500 jobs are distributed among nodes A-D. Also, the output files are stored on the nodes A-D, where the corresponding array job has run. In this case, for example, approximately 125 "output_" files are separately stored on A, B, C and D.
Is there a way to store all output files on the node where I submit the script, in this case on node A? That is, I would like to store all 500 "output_" files on node A.
Slurm does not handle input/output file transfer; it assumes that the current working directory is on a network filesystem, NFS being the simplest case. GlusterFS, BeeGFS and Lustre are other popular choices for Slurm clusters.
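In the simplest case that means exporting the submission directory from node A and mounting it on B, C and D, so every task writes to the same place. A minimal sketch, assuming node A is reachable as nodeA and the paths below:
# on node A, in /etc/exports (then run: exportfs -ra)
/home/user/slurm_array_jobs  nodeB(rw,sync) nodeC(rw,sync) nodeD(rw,sync)
# on nodes B, C and D
mount -t nfs nodeA:/home/user/slurm_array_jobs /home/user/slurm_array_jobs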
Alternatively, use an epilog script to copy the outputs back to the node where the job was submitted, then delete them.
Add to slurm.conf:
Epilog=/etc/slurm-llnl/slurm.epilog
The slurm.epilog script does the copying (make it executable with chmod +x):
#!/bin/bash
# Gather job metadata from scontrol: the owning user, the stdout/stderr
# paths on this node, the submission host, and the submission directory.
userId=`scontrol show job ${SLURM_JOB_ID} | grep -i UserId | cut -f2 -d '=' | grep -i -o ^[^\(]*`
stdOut=`scontrol show job ${SLURM_JOB_ID} | grep -i StdOut | cut -f2 -d '='`
stdErr=`scontrol show job ${SLURM_JOB_ID} | grep -i StdErr | cut -f2 -d '='`
host=`scontrol show job ${SLURM_JOB_ID} | grep -i AllocNode | cut -f3 -d '=' | cut -f1 -d ':'`
hostDir=`scontrol show job ${SLURM_JOB_ID} | grep -i Command | cut -f2 -d '=' | xargs dirname`
hostPath=$host:$hostDir/
# Copy the output files back to the submission host as the job's owner,
# then remove the local copies (they are plain files, so rm -f suffices).
runuser -l $userId -c "scp $stdOut $stdErr $hostPath"
rm -f $stdOut
rm -f $stdErr
(Switching from PBS to Slurm without NFS or similar shared directories is a pain.)
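A lighter-weight variant is to let each task copy its own files at the end of the job script (a sketch, not from the answers above; it assumes node A is reachable as nodeA over passwordless ssh, and the very last lines of stdout may not be flushed yet when the copy runs):
# appended to example_job.sh; %A and %a map to these environment variables
scp output_${SLURM_ARRAY_JOB_ID}_${SLURM_ARRAY_TASK_ID}.txt \
    error_${SLURM_ARRAY_JOB_ID}_${SLURM_ARRAY_TASK_ID}.txt \
    nodeA:/home/user/slurm_array_jobs/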

Heredoc commands for find . -exec sh {} +

I'm trying to convert a hierarchy of TIFF image files into JPG, and out of boredom, I want to do the find and the ffmpeg in a single file.
So I set find to invoke sh with the -s flag, like this:
#!/bin/sh
export IFS=""
find "$@" -iname 'PROC????.tif' -exec sh -s {} + << \EOF
for t ; do
ffmpeg -y -v quiet -i $t -c:v mjpeg ${t%.*}.jpg
rm $t
done
EOF
However, there were just too many files in the directory hierarchy: find chopped the filename array into several chunks, and sh -s was only successfully invoked for the first chunk of arguments.
The question being: how could one feed such in-body command to every sh invocation in the find command?
Update
The tag "heredoc" on the question is intended for receiving answers that do not rely on external file or self-referencing through $0. It is also intended that no filename would go through string-array processing such as padding with NUL-terminator or newline, and can be directly passed as arguments.
The heredoc is being used as the input to find, so only the first sh invocation gets to read it. I think your best bet is to not use a heredoc at all, but just use a string:
#!/bin/sh
find "$@" -iname 'PROC????.tif' -exec sh -c '
for t ; do
ffmpeg -y -v quiet -i "$t" -c:v mjpeg "${t%.*}.jpg" &&
rm "$t"
done
' sh {} +
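(The extra sh before {} becomes $0 of the inline script, so every filename that find supplies lands in the positional parameters that for t iterates over.)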
I am rewriting your code below:
#!/bin/bash
# Note: pass an absolute path, since the loop cd's into each file's directory.
find "$1" -name "PROC????.tif" > /tmp/imagefile_list.txt
while IFS= read -r filepath
do
path=${filepath%/*}
imgfile=${filepath##*/}
jpgFile=${imgfile%.*}.jpg
cd "$path" || continue
# -nostdin keeps ffmpeg from consuming the rest of the file list on stdin
ffmpeg -nostdin -y -v quiet -i "$imgfile" -c:v mjpeg "$jpgFile"
rm -f "$imgfile"
done < /tmp/imagefile_list.txt
If you don't want to change the current directory, you can do it like below:
#!/bin/bash
find "$1" -name "PROC????.tif" > /tmp/imagefile_list.txt
# To list every .tif file instead, use:
# find "$1" -name "*.tif" > /tmp/imagefile_list.txt
while IFS= read -r filepath
do
path=${filepath%/*}
imgfile=${filepath##*/}
jpgFile=$path/${imgfile%.*}.jpg
ffmpeg -nostdin -y -v quiet -i "$filepath" -c:v mjpeg "$jpgFile"
rm -f "$filepath"
done < /tmp/imagefile_list.txt
rm -f /tmp/imagefile_list.txt
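If filenames may contain newlines, the temporary-file approach breaks; a NUL-delimited variant is safer (a sketch, not from the original answer, and note it relies on the NUL-terminated processing the question's update hoped to avoid):
find "$1" -name 'PROC????.tif' -print0 |
while IFS= read -r -d '' filepath
do
ffmpeg -nostdin -y -v quiet -i "$filepath" -c:v mjpeg "${filepath%.*}.jpg" &&
rm -f "$filepath"
done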

How do I compress a .img file?

I have built an image using OpenWrt for an imx6dl-based custom board.
I made the image from the uboot.img, zImage, dtb and root-imx6 files using this method:
dd if=/dev/zero of="$IMAGE_DIRPATH/boot_part.img" bs=1M count=4
parted "$IMAGE_DIRPATH/boot_part.img" mklabel msdos
mkfs.vfat "$IMAGE_DIRPATH"/boot_part.img
mcopy -i "$IMAGE_DIRPATH/boot_part.img" "$IMAGE_DIRPATH/openwrt-zImage" ::zImage
mcopy -i "$IMAGE_DIRPATH/boot_part.img" "$IMAGE_DIRPATH/imx6.dtb" ::imx6.dtb
make_ext4fs -l 125217728 -b 4096 -i 6000 -m 0 "$IMAGE_DIRPATH/root_overlay.img" "$ROOT_DIR"
dd if=/dev/zero of=Final.img bs=1M count=142
parted Final.img mklabel msdos
dd if=uboot.img of=Final.img bs=1K seek=1 conv=notrunc
parted Final.img mkpart p fat32 1 6
parted Final.img mkpart p ext4 8 140
dd if="$IMAGE_DIRPATH"/boot_part.img of=Final.img bs=1M seek=1 conv=notrunc
dd if="$IMAGE_DIRPATH"/root_overlay.img of=Final.img bs=1M seek=8 conv=notrunc
With this I'm able to generate an image of 142 MB, which is huge. I want to generate a compressed image file of less than 40 MB. Any ideas?
Also, please suggest whether other filesystems would give better compression while still being loadable onto SD/eMMC cards.
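One observation: most of those 142 MB are the zero-filled padding written by the dd if=/dev/zero steps, and zeros compress extremely well, so compressing the finished image will usually land far below 40 MB. A minimal sketch (the device path /dev/sdX is a placeholder):
# xz generally squeezes the zero padding hardest; -k keeps the original
gzip -9 -k Final.img    # -> Final.img.gz
xz -9 -k Final.img      # -> Final.img.xz
# decompress on the fly while writing back to SD/eMMC
zcat Final.img.gz | sudo dd of=/dev/sdX bs=1M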
