I have an application that writes files to an external drive formatted as NTFS through the SATA interface.
Before closing the application I make sure that everything is flushed using FlushFileBuffers for each file (i.e. CreateFile, FlushFileBuffers, CloseHandle). Then I remove the drive without first unmounting it!
This seems to work fine when plugging the drive back into PC machines. However, when plugging it into OS X, the OS doesn't seem to find any files unless the drive was properly unmounted.
What could be missing from the disk which causes OS X not to find anything and is there a way I can flush that data without having to unmount the drive?
EDIT:
With exFAT I have the problem of "dirty" drives not being writable when re-mounted.
You might want to read this, Ronag; it might help you:
http://www.yourdailymac.net/2011/06/how-to-read-and-write-ntfs-harddrives-under-mac-os-x/
Snippet...
You might already know that by default under Mac OS X it isn't possible to write to Windows drives that are formatted with the NTFS file system. The driver built into OS X is simply not capable of writing to NTFS-formatted drives; it is likely that this has something to do with commercial interests.
It is, however, quite annoying for a user who wants to exchange files with a Windows NTFS drive. That's why several commercial applications were developed, but most of them cost money, like Paragon NTFS for Mac. There is also a free and even better solution available.
EDIT - I've read the following, which might help with remounting NTFS drives etc. I have to admit to only using the Mac every now and again, so hopefully I'm not running off in the wrong direction for you ...
Here's the post I found on the Apple forums:
I have created a script to initialize NTFS hard disks and use them in write mode using just the native OS X driver (without third-party software). It also seems to work under Mavericks. You can download it from:
http://sourceforge.net/projects/native-ntfs-osx/files/
You only need to run it once for each new NTFS disk. The next time you plug in an NTFS disk that was already initialized with my script, the disk will be mounted automatically (however, it will not be displayed on the desktop; you will have to open it from /Volumes).
It is also important that the HD has been safely removed, since NTFS keeps a flag recording whether the disk was safely removed or not, and the native OS X driver will not mount it in write mode if it wasn't (something similar happens under Linux). If that happens, you just need to plug it into a Windows PC and safely remove the HD (so it clears that flag).
and for your reference, the bash script:
#!/bin/bash

checkExisting(){
    echo "Checking if already existing device on file..."
    while read fileLine; do
        if [ "$line" = "$fileLine" ]; then
            echo "[WARNING] Device already initialized on this system. Nothing to do here"
            open "$FILENAME"
            exit 0;
        fi
    done < /etc/fstab
}

addLine(){
    uuid=$(diskutil info "$FILENAME" | grep UUID | cut -d ':' -f2 | tr -d ' ')
    volumeName=$(diskutil info "$FILENAME" | grep "Volume Name" | cut -d ':' -f2 | tr -d ' ')
    if [ "$uuid" = "" ]; then
        line="LABEL=$volumeName none ntfs rw,auto,nobrowse";
    else
        line="UUID=$uuid none ntfs rw,auto,nobrowse";
    fi
    checkExisting;
    echo "# New NTFS HD: $volumeName on $(date) " >> /etc/fstab
    echo $line >> /etc/fstab
    device=$(diskutil info "$FILENAME" | grep "Device Node" | cut -d ':' -f2 | tr -d ' ')
    diskutil unmount "$FILENAME"
    diskutil mount $device
    open $FILENAME;
    exit 0;
}

checkDisk(){
    filetype=$(diskutil info "$FILENAME" | grep "Type (Bundle):" | cut -d ':' -f2 | tr -d ' ')
    #echo $filetype
    if [ "$filetype" = "ntfs" ]; then
        addLine;
    fi
    if [ "$filetype" = "" ]; then
        echo "Error. Please, select a NTFS device"
    fi
}

#Check sudo
if [[ $(/usr/bin/id -u) -ne 0 ]]; then
    echo "This script should be run as ROOT. Try sudo"
    exit
fi

echo "___________________________________"
echo "RubeniumTB. 2013 --ruben80(at)gmail.com--"
echo ""
echo "Initialize a NTFS Hard Disk on this system to read and write"
echo "Next time you won't need to initialize it again. Just plug and open but"
echo "take into account that:"
echo ""
echo "* Configured disks will not be auto-opened!!"
echo "* You will need to open /Volumes and click on your disk!!"
echo ""
echo "* Although it should not happen anything wrong, use at your own risk"
echo ""
echo "* IMPORTANT!!. Be sure that the NTFS device has been safely removed or it won't"
echo "be mounted in write mode. In this case you can connect it again to any windows PC,"
echo "remove safely, and then connect to your MAC"
echo ""
echo "* Also IMPORTANT!!. To avoid problems use SHORT names for the Volume names, "
echo "NO SPACES, and preferably only letters/numbers. Of course no special characters!!"
echo ""
echo "Now you are ready...."
echo "SELECT a NTFS Disk to initialize on this system"
echo "Write quit to exit"
echo ""

select FILENAME in "/Volumes"/*
do
    case "$FILENAME" in
        "$QUIT")
            echo "Exiting."
            break
            ;;
        *)
            echo "You picked "$FILENAME" "
            checkDisk;
            ;;
    esac
done
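For reference, typical usage would be to save the script (the file name here is only an example), run it as root, and then pick the volume from the numbered menu it prints:
sudo bash init_ntfs_rw.sh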
I have an embedded C static library that communicates with a hardware peripheral. It currently does not support multiple hardware peripherals, but I need to interface with a second one. I do not care about code footprint right now, so I want to duplicate that library: one copy for each piece of hardware.
This, of course, will result in symbol collisions. A good method is to use objcopy to add a prefix to the symbols in the object files, so I can get hw1_fun1.o and hw2_fun1.o. This post illustrates it.
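For illustration, the objcopy step described in that post looks roughly like this (the hw2_ prefix and the file names are just examples):
objcopy --prefix-symbols=hw2_ fun1.o hw2_fun1.o   # GNU binutils: prefix every symbol in fun1.o with hw2_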
However, I want to add a prefix to all C functions at the source level, not the object level, because I will need to modify the code a little bit for hw2.
Is there any script, c-preprocessor, tool that can make something like:
./scriptme --prefix=hw2 ~/src/ ~/dest/
I'll be grateful :)
I wrote a simple bash script that does the required job, or sort of. I hope it helps someone one day.
#!/bin/sh
DIR_BIN=bin/ext/lwIP/
DIR_SRC=src/ext/lwIP/
DIR_DST=src/hw2_lwIP/
CMD_NM=mb-nm

[ -d "$DIR_DST" ] || { echo "Destination directory does not exist!"; exit 1; }
cp -r $DIR_SRC/* $DIR_DST/
chmod -R 755 $DIR_DST # cygwin issue with Windows7
sync # file permissions. (Pure MS shit!)
funs=`find $DIR_BIN -name '*.o' | xargs $CMD_NM | grep " R \| T " | awk '{print $3}'`
echo "Found $(echo $funs | wc -w) functions, processing:"
for fun in $funs;
do
    echo " $fun";
    find $DIR_DST -type f -exec sed -i "s/$fun/hw2_$fun/g" {} \;
done;
echo "Done! Now change includes and compile your project ;-)"
I've been trying to figure this one out for a while. I've read through multiple threads, and feel like I'm close, but the script just isn't coming together.
Scenario:
I have a media server and thousands of movie files. Each movie file has various accessory files such as the cover artwork, database info, fanart, and trailer. While everything in the directory has its cover art and database info, some files may or may not have their respective fanart or trailer. For these files I'm trying to get this script working, which will create an empty "dummy" file in place of the file that should be there. Then, when I actually have the time, I can go back, search out just the dummy files, and work to fill in the gaps where I can.
Here is what I have so far.
#!/bin/bash
find . -type f -print0 | while read -d $'\0' movie ;
do
    echo $movie
    moviename=${movie%\.*} #remove the extension from the string
    moviename1=`echo $moviename | sed 's/\ /\\ /'` #add escaped spaces to the string
    echo $moviename1 #echo the string (for debugging)
    if [ ! -f $moviename-fanart* ]; #because the fanart could be .jpg, or .png, etc
    then
        echo "Creating $moviename-fanart.dummy"
        touch "$moviename-fanart.dummy"
    fi
    if [ ! -f $moviename-trailer* ]; #because trailers could be .mp4, .mov, .mkv, .avi, etc
    then
        echo "Creating $moviename-trailer.dummy"
        touch "$moviename-trailer.dummy"
    fi
done
This should be pretty simple, but I think that I'm not getting the proper formatting for the input string going into the test operators.
Any help would be greatly appreciated.
Thanks
Line-by-line analysis:
find . -type f -print0 | while read -d $'\0' movie; do
OK, but with bash 4 you can just use shopt -s globstar to operate recursively on a directory.
moviename=${movie%\.*} #remove the extension from the string
You don't need the backslash.
moviename1=`echo $moviename | sed 's/\ /\\ /'` #add escaped spaces to the string
This line is suspect because if you quote the name, escaped spaces become doubly-escaped. You're confusing the value of the string with the representation you see of it.
if [ ! -f $moviename-fanart* ]; then #because the fanart could be .jpg, or .png, etc
Quote the string or use bash's [[ test keyword. It's a little dangerous to expand a glob inside the test expression, because if it matches multiple results you'll get an error. That said, if you're sure there can be only one match, you can quote everything up to the glob: "$moviename-fanart"*.
touch "$moviename-fanart.dummy"
Here, you quote it. So essentially you're dealing with a different name now.
fi
if [ ! -f $moviename-trailer* ]; then #because trailers could be .mp4, .mov, .mkv, .avi, etc
echo "Creating $moviename-trailer.dummy"
touch "$moviename-trailer.dummy"
fi
Same thing.
done
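Putting those points together, a corrected sketch could look like the following (the movie extensions are only examples, and it assumes bash 4 so that globstar is available):
#!/bin/bash
shopt -s globstar nullglob
for movie in **/*.mkv **/*.avi **/*.mp4; do
    moviename=${movie%.*}                    # strip the extension
    fanart=( "$moviename"-fanart.* )         # nullglob: empty array if nothing matches
    if (( ${#fanart[@]} == 0 )); then
        echo "Creating $moviename-fanart.dummy"
        touch "$moviename-fanart.dummy"
    fi
    trailer=( "$moviename"-trailer.* )
    if (( ${#trailer[@]} == 0 )); then
        echo "Creating $moviename-trailer.dummy"
        touch "$moviename-trailer.dummy"
    fi
done
With nullglob an unmatched pattern expands to nothing, so testing the array length avoids the error you get when a glob inside [ ... ] matches several files, and the quoting keeps names with spaces intact.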
How can I append the following code to the end of numerous PHP files in a directory and its subdirectories:
</div>
<div id="preloader" style="display:none;position: absolute;top: 90px;margin-left: 265px;">
<img src="ajax-loader.gif"/>
</div>
I have tried with:
echo "my text" >> *.php
But the terminal displays the error:
bash : *.php: ambiguous redirect
I usually use tee because I think it looks a little cleaner and it generally fits on one line.
echo "my text" | tee -a *.php
You don't specify the shell; you could try the foreach command. Under tcsh (and I'm sure a very similar construct is available for bash) you can interactively run something like:
foreach i (*.php)
foreach> echo "my text" >> $i
foreach> end
$i will take on the name of each file each time through the loop.
As always, when doing operations on a large number of files, it's probably a good idea to test them in a small directory with sample files to make sure it works as expected.
Oops ... bash is in the error message (I'll tag your question with it). The equivalent bash loop would be
for i in *.php
do
    echo "my text" >> "$i"
done
If you want to cover multiple directories below the one where you are, you can specify
*/*.php
rather than *.php
BashFAQ/056 does a decent job of explaining why what you tried doesn't work. Have a look.
Since you're using bash (according to your error), the for command is your friend.
for filename in *.php; do
echo "text" >> "$filename"
done
If you'd like to pull "text" from a file, you could instead do this:
for filename in *.php; do
    cat /path/to/sourcefile >> "$filename"
done
Now ... you might have files in subdirectories. If so, you could use the find command to find and process them:
find . -name "*.php" -type f -exec sh -c "cat /path/to/sourcefile >> {}" \;
The find command identifies which files to process using conditions like -name and -type; then the -exec option runs basically the same thing I showed you in the previous "for" loop. The final \; tells find that this is the end of the arguments to the -exec option.
You can man find for lots more details about this.
The find command is portable and is generally recommended for this kind of activity especially if you want your solution to be portable to other systems. But since you're currently using bash, you may also be able to handle subdirectories using bash's globstar option:
shopt -s globstar
for filename in **/*.php; do
    cat /path/to/sourcefile >> "$filename"
done
You can man bash and search for "globstar" for more details about this. This option requires bash version 4 or higher.
NOTE: You may have other problems with what you're doing. PHP scripts don't need to end with a ?>, so you might be adding HTML that the script will try to interpret as PHP code.
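If you are unsure, a rough way to spot files whose last bytes do not contain a closing ?> (so the appended HTML would be parsed as PHP code) might be:
for f in *.php; do
    tail -c 20 "$f" | grep -q '?>' || echo "$f: no closing ?> near the end"
done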
You can use sed combined with find. Assume your project tree is
/MyProject/
/MyProject/Page1/file.php
/MyProject/Page2/file.php
etc.
Save the code you want to append in /MyProject/; call it append.txt
From /MyProject/ run:
find . -name "*.php" -print | xargs sed -i '$r append.txt'
Explanation:
find does just what it says: it looks for all .php files, including those in subdirectories
xargs will pass the .php files that have just been found to sed (i.e. run sed on them)
sed will do the appending. '$r append.txt' means go to the last line of the file ($) and read in (paste) whatever is in append.txt there. Don't forget -i, otherwise it will just print out the appended file and not save it.
Source: http://www.grymoire.com/unix/Sed.html#uh-37
You can do this (it works even if there are spaces in your file paths):
#!/bin/bash
# Create a temporary file named /tmp/end_of_my_php.txt
cat << EOF > /tmp/end_of_my_php.txt
</div>
<div id="preloader" style="display:none;position: absolute;top: 90px;margin-left: 265px;">
<img src="ajax-loader.gif"/>
</div>
EOF
find . -type f -name "*.php" | while read the_file
do
    echo "Processing $the_file"
    #cp "$the_file" "${the_file}.bak" # Uncomment if you want to save a backup of your file
    cat /tmp/end_of_my_php.txt >> "$the_file"
done
echo
echo done
PS: You must run the script from the directory you want to browse
Inspired by @Dantastic's answer:
echo "my text" | tee -a file1.txt | tee -a file2.txt
I am trying to write a shell script to check database connectivity. Within my script I am using the command
sqlplus uid/pwd@database-schemaname
to connect to my Oracle database.
Now I want to save the output generated by this command (before it drops to the SQL prompt) in a temp file and then grep / find the string "Connected to" in that file to see if the connectivity is fine or not.
Can anyone please help me capture the output, get out of that prompt, and test whether the connectivity is fine?
Use a script like this:
#!/bin/sh
echo "exit" | sqlplus -L uid/pwd#dbname | grep Connected > /dev/null
if [ $? -eq 0 ]
then
echo "OK"
else
echo "NOT OK"
fi
echo "exit" assures that your program exits immediately (this gets piped to sqlplus).
-L assures that sqlplus won't ask for password if credentials are not ok (which would make it get stuck as well).
(> /dev/null just hides output from grep, which we don't need because the results are accessed via $? in this case)
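If you specifically want the temp file mentioned in the question, the same idea can be spelled out like this (the credentials and the temp file path are placeholders):
#!/bin/sh
TMPFILE=/tmp/sqlplus_check.$$
echo "exit" | sqlplus -L uid/pwd@dbname > "$TMPFILE" 2>&1
if grep -q "Connected to" "$TMPFILE"
then
    echo "OK"
else
    echo "NOT OK"
fi
rm -f "$TMPFILE"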
You can avoid the SQL prompt by doing:
sqlplus uid/pwd@database-schemaname < /dev/null
SQL*Plus exits immediately.
Now just grep the output of the above as:
if sqlplus uid/pwd@database-schemaname < /dev/null | grep 'Connected to'; then
    : # have connectivity to Oracle
else
    : # No connectivity
fi
#! /bin/sh
if echo "exit;" | sqlplus UID/PWD#database-schemaname 2>&1 | grep -q "Connected to"
then echo connected OK
else echo connection FAIL
fi
Not knowing whether the "Connected to" message is put to standard output or standard error, this checks both. "grep -q" instead of "grep ... >/dev/null" assumes Linux.
#!/bin/bash
output=`sqlplus -s "user/pass@POLIGON.TEST " <<EOF
set heading off feedback off verify off
select distinct machine from v\\$session;
exit
EOF
`
echo $output
if [[ $output =~ ERROR ]]; then
    echo "ERROR"
else
    echo "OK"
fi
Here's a good option which does not expose the password on the command line
#!/bin/bash
CONNECT_STRING=<USERNAME>/<PASS>@<SID>
sqlplus -s -L /NOLOG <<EOF
whenever sqlerror exit 1
whenever oserror exit 1
CONNECT $CONNECT_STRING
exit
EOF
SQLPLUS_RC=$?
echo "RC=$SQLPLUS_RC"
[ $SQLPLUS_RC -eq 0 ] && echo "Connected successfully"
[ $SQLPLUS_RC -ne 0 ] && echo "Failed to connect"
exit $SQLPLUS_RC
None of the proposed solutions works for me, as my script is executed on machines in several countries with different locales. I can't simply check for one string, because on another machine that string is translated into a different language. As a solution I'm using SQLcl
https://www.oracle.com/database/technologies/appdev/sqlcl.html
which is compatible with all SQL*Plus scripts and allows you to test the database connectivity like this:
echo "disconnect" | sql -L $DB_CONNECTION_STRING > /dev/null || fail "cannot check connectivity with the database, check your settings"
#!/bin/sh
echo "exit" | sqlplus -S -L uid/pwd#dbname
if [ $? -eq 0 ]
then
echo "OK"
else
echo "NOT OK"
fi
For connection validation -S would be sufficient.
The "silent" mode doesn't prevent terminal output. All it does is:
-S Sets silent mode which suppresses the display of
the SQL*Plus banner, prompts, and echoing of
commands.
If you want to suppress all terminal output, then you'll need to do something like:
sqlplus ... > /dev/null 2>&1
This was my one-liner for docker container to wait until DB is ready:
until sqlplus -s sys/Oracle18@oracledbxe/XE as sysdba <<< "SELECT 13376411 FROM DUAL; exit;" | grep "13376411"; do echo "Could not connect to oracle... sleep for a while"; sleep 3; done
And the same in multiple lines:
until sqlplus -s sys/Oracle18@oracledbxe/XE as sysdba <<< "SELECT 13376411 FROM DUAL; exit;" | grep "13376411";
do
    echo "Could not connect to oracle... sleep for a while";
    sleep 3;
done
So it basically does a select with a magic number and checks that the correct number was actually returned.
I want to programmatically create a SHA1 checksum of audio files (MP3, Ogg Vorbis, Flac).
The requirement is that the checksum should be stable even if the header (e.g. ID3) changes.
Note: The audio files don't have CRCs
This is what I have tried so far:
1) Reading + Hashing all MPEG frames using Perl and MPEG::Audio::Frame
my $sha1 = Digest::SHA1->new;
while (my $frame = MPEG::Audio::Frame->read(\*FH)) {
    $sha1->add($frame->content());
}
2) Decoding + Hashing all MPEG frames using Python and libmad (pymad)
import hashlib
import mad  # pymad

mf = mad.MadFile(path)
sha1 = hashlib.sha1()
while 1:
    buf = mf.read()
    if buf is None:
        break
    sha1.update(buf)
3) Using mp3cat
> mp3cat - - < file.mp3 | sha1sum
However, none of those methods provided a stable checksum. Namely, in some cases the checksum changed after retagging the file with Picard.
Are there any libraries that already provide what I want?
I don't care about the programming language…
Update:
I debugged the case a bit further.
The libmad checksum inconsistency seems to happen in cases where libmad gets some decoding errors, like "Huffman data overrun (0x0238)".
As this happens on many of my MP3 files, I'm not sure whether it really indicates a broken file…
If you are looking for stable hashes for the actual music, you might want to look at libOFA. Your current methods will give you different results because the formats can have embedded tags. Also, if you want two different files with the same song to return the same hash, you need to take into account things like bitrate and sample frequency.
libOFA on the other hand can give you a stable hash that can be used between formats and different encodings. Might be what you want?
I needed tools to quickly check if my MP3/OGG library is still valid.
For MP3 I found mp3md5.py (http://snipplr.com/view/4025/mp3-checksum-in-id3-tag/) which does the job, but I found no simple tool for Ogg Vorbis, so I coded a little bash script to do it for me.
Both tools should tolerate modifications of the Vorbis comment / ID3 tag.
#!/bin/bash
# This bash script appends an MD5SUM to the vorbiscomment and/or verifies it if it exists
# Later modification of the vorbis comment does not alter the MD5SUM
# Julian M.K.

FILE="$1"

if [[ ! -f "$FILE" || ! -r "$FILE" || ! -w "$FILE" ]] ; then
    echo "File $FILE does not exist or is not readable or writable"
    exit 1
fi

OLDCRC=`vorbiscomment "$FILE" | grep ^CRC= | cut -d "=" -f 2`
NEWCRC=`ogginfo "$FILE" | grep "Total data length:" | cut -d ":" -f 2 | md5sum | cut -d " " -f 1`

if [[ "$OLDCRC" == "" ]] ; then
    echo "ADDED $FILE $NEWCRC"
    vorbiscomment -a -t "CRC=$NEWCRC" "$FILE"
    # rewrite CRC to get proper data length, I don't know why this is necessary
    NEWCRC=`ogginfo "$FILE" | grep "Total data length:" | cut -d ":" -f 2 | md5sum | cut -d " " -f 1`
    vorbiscomment -w -t "CRC=$NEWCRC" "$FILE"
elif [[ "$OLDCRC" == "$NEWCRC" ]] ; then
    echo "VERIFIED $FILE"
else
    echo "FAILURE $FILE -- $OLDCRC - $NEWCRC"
fi
There is an easy, stable way to do it: just make a copy of the file, remove all the tags from it (e.g. using mutagen.id3), and take the hash of the resulting file.
The only disadvantage of this method is its performance.
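As a rough sketch of that idea from the shell, assuming mutagen's mid3v2 tool is installed and that the file name is only an example (note this strips ID3 tags only, not e.g. APE tags):
cp song.mp3 /tmp/song.stripped.mp3          # work on a copy, keep the original intact
mid3v2 --delete-all /tmp/song.stripped.mp3  # mutagen's CLI: remove all ID3v1/ID3v2 tags
sha1sum /tmp/song.stripped.mp3              # hash what is left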
Bene, if I were you (and I am in the process of working on something very similar to what you want to do), I would hash the MP3 data block. Extract it to raw data first and write it out to disk, so you know what you are dealing with. Then modify the ID3 tag and hash your data again. Now, if it changes, compare your two sets of raw data and find out WHERE it changed. Chances are you might be over-stepping a boundary somewhere. If I recall, MP3 files start with something like FF F8, or at least the frame does.
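If you take that route, a quick way to see where two raw dumps start to differ (the file names are just examples) is:
cmp -l before.raw after.raw | head   # list the first differing byte offsets and their values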
I'm interested in your findings, as I'm still writing all my code to deal with the finger prints, etc, and haven't gotten to the actual hashing yet.
Update many years later:
See my answer here to a very similar question. It turns out that ffmpeg actually supports doing checksums of the individual streams. To get the md5 hash of only the audio stream:
ffmpeg -i "$filename" -map 0:a -codec copy -f md5 "$filename.md5"
There is also support for other hash formats with the generic -f hash format, or for doing it per frame with -f framemd5.
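Since the question asks for SHA1 specifically: the generic hash muxer takes an algorithm option, so something like the following should work (SHA160 is ffmpeg's name for SHA-1; this assumes a reasonably recent ffmpeg build):
ffmpeg -i "$filename" -map 0:a -codec copy -f hash -hash SHA160 -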
I'm trying to do the same thing. I used MD5 instead of SHA1. I started to export audio checksums using mp3tag (www.mp3tag.de/en/); then made a Perl script similar to yours to do the same thing. Then I removed all tags from my test file, and the audio checksum remained the same.
This is the script:
use MPEG::Audio::Frame;
use Digest::MD5 qw(md5_hex);
use strict;
my $file = 'E:\Music\MP3\Russensoul\01 - 5nizza , Soldat (Russensoul - Russensoul).mp3';
my $mp3tag_audio_md5 = lc '2EDFBD62995A46A45CEEC08C1F303486';
my $md5 = Digest::MD5->new;
open(FILE, $file) or die "Cannot open $file : $!\n";
binmode FILE;
while(my $frame = MPEG::Audio::Frame->read(\*FILE)){
    $md5->add($frame->asbin);
}
print '$md5->hexdigest : ', $md5->hexdigest, "\n",
      'mp3tag_audio_md5 : ', $mp3tag_audio_md5, "\n",
      ;
Is it possible that whatever you use to modify your tags sometimes also modifies mp3 headers?