I need to construct an SDHC card (FAT32) with a directory where I have chosen the short and long filenames independently, e.g. short filename MYDIR but long name "i am a cool name. yeah. check out the awesomeness". According to Wikipedia, there is no mandatory correlation between the two names, so my goal should be possible:
there is no compulsory algorithm for creating the 8.3 name from an LFN
-- http://en.wikipedia.org/wiki/8.3_filename#Overview
I can use whatever system is necessary to do this (Windows, Mac, Linux, a hex editor), but the easier the better. Thanks!
The short file name is automatically and compulsorily constructed from the LFN using the algorithm you mentioned (also detailed in the FAT32 specification). This is done by the file-system driver, at least on Windows and Linux. You really can't change that unless you modify the driver, which is not advisable. If you want to do this for only one directory, you could achieve it by modifying the disk image in a hex editor, being careful not to create duplicate entries with the same name.
Here is what I tried on Linux:
#dd if=/dev/zero of=fatImage bs=1048576 count=256
#mkfs.vfat -F 32 fatImage
#mount -o loop fatImage /mnt
#cd /mnt
#mkdir ThisIsALongDirectoryName
The FAT driver generates a short name for the directory: THISIS~1.
You can use both names to access it.
#cd /mnt/ThisIsALongDirectoryName
#cd /mnt/THISIS~1
Then, after unmounting the partition, I opened the image in a hex editor (Okteta on KDE), searched for the SFN entry THISIS~1, and replaced it with MYNEWDIR. Each 32-byte LFN sub-entry also stores a checksum of the SFN at offset 13, so I had to replace the checksum of THISIS~1 (which is 0xA6) with the checksum of MYNEWDIR (which is 0x6A) in all the LFN sub-entries. After saving the modifications, I remounted the image and was able to access the directory using the old LFN and the new SFN.
#cd /mnt/ThisIsALongDirectoryName
#cd /mnt/MYNEWDIR
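For anyone repeating this, you can compute the checksum instead of fishing it out of a hex editor. The algorithm comes from the FAT specification: an 8-bit rotate-right of the running sum plus the next byte, applied over the raw 11-byte short name (8-character base and 3-character extension, both space-padded, no dot). A minimal sketch in Go; it reproduces the 0x6A value for MYNEWDIR used above:

package main

import "fmt"

// sfnChecksum implements the 8.3 checksum from the FAT specification:
// rotate the running sum right by one bit, then add the next name byte.
func sfnChecksum(name [11]byte) byte {
	var sum byte
	for _, c := range name {
		sum = sum>>1 | sum<<7 // 8-bit rotate right
		sum += c
	}
	return sum
}

func main() {
	// "MYNEWDIR" padded with spaces to 11 bytes -> prints 0x6a
	name := [11]byte{'M', 'Y', 'N', 'E', 'W', 'D', 'I', 'R', ' ', ' ', ' '}
	fmt.Printf("%#02x\n", sfnChecksum(name))
}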
I wouldn't rely on Wikipedia as a technical reference; it's better to consult Microsoft's documentation. Reading up on this, I think there may be a relationship between the two names, so I would not recommend fiddling with these. You are probably better off using a short name.
On my server, I have large files that have been split into smaller (binary) chunks. The chunks are stored with their number: chunk0, chunk1, chunk2...
The number is the position of the chunk sequentially in the large file.
They are stored in a directory that has the same name as the file they make up. (In actuality, the directory has the large file's hash as its name.)
        [ chunk0 (1MB)
        | chunk1 (1MB)
        | chunk2 (1MB)
file1/ -| chunk3 ..
        | chunk4 ..
        | chunk5 ..
        [ chunk6 ..
So file1 would be a 6 MB composite file.
The server is written in Go and so is the client.
I want to provide the user a way of downloading a large file without having to download each chunk separately.
Something along the lines of:
client sends a request to the server API for file1
server provides the full file1 data from the chunks
client downloads the file data to a single file
So the question is: how to do step 2? I would like to reassemble the chunks on the fly, when the user requests a certain file, without creating any new files. Another caveat is that it needs to be as fast as possible, because the application I am working on is oriented around speed.
Something like the functionality of a blob URL would be good, I think? But this project is in Go, and using a browser or JavaScript is not an option.
In general, you can easily concatenate files on the fly (without having to do any on-disk file operations) by writing the contents of multiple io.Readers into a single io.Writer. The buffering behind these abstractions will take care of the rest for you.
The key things you need are:
io.Copy
os.Open, which returns an *os.File (an io.Reader)
The fact that http.ResponseWriter is an io.Writer
So, a general scheme for step 2 could be, in your http.Handler:
Set Content-Type header
Determine file chunk names in whole file
Check all chunks are accessible
For each chunk file in order
Open chunk file
Copy contents of file to w ResponseWriter
Close file
I added a pre-check for file access there because you don't want to be in the middle of sending the file when you detect an error condition; at that point it's too late to set an appropriate error code.
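Here is a minimal sketch of such a handler. The chunkPaths helper and the query-parameter convention are hypothetical; adapt them to however your server maps a request to its chunk directory, and note that a real server must validate the requested name against path traversal:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"path/filepath"
)

// chunkPaths returns the chunk files of one composite file in
// sequential order, following the chunk0, chunk1, ... naming scheme.
func chunkPaths(dir string) ([]string, error) {
	var paths []string
	for i := 0; ; i++ {
		p := filepath.Join(dir, fmt.Sprintf("chunk%d", i))
		if _, err := os.Stat(p); err != nil {
			if os.IsNotExist(err) && i > 0 {
				return paths, nil // ran past the last chunk
			}
			return nil, err // chunk0 missing or a chunk unreadable
		}
		paths = append(paths, p)
	}
}

func serveComposite(w http.ResponseWriter, r *http.Request) {
	dir := r.URL.Query().Get("file") // hypothetical: directory named after the file's hash
	paths, err := chunkPaths(dir)    // pre-check before any bytes are sent
	if err != nil {
		http.Error(w, "file not found", http.StatusNotFound)
		return
	}
	w.Header().Set("Content-Type", "application/octet-stream")
	for _, p := range paths {
		f, err := os.Open(p)
		if err != nil {
			return // headers already sent; abort the connection
		}
		_, err = io.Copy(w, f) // stream this chunk straight to the client
		f.Close()
		if err != nil {
			return
		}
	}
}

func main() {
	http.HandleFunc("/download", serveComposite)
	http.ListenAndServe(":8080", nil)
}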
This operation will be completely IO-bound, either disk or network throughput will be the limiting factor, so a serial approach is likely to be as good as it gets (if you're considering a single server process in a single machine).
As stated in the comments by @Emile Pels, an io.MultiReader allows you to concatenate multiple Readers, so with that you can replace the entire for loop with:
Create a slice of opened files in order
Create io.MultiReader(files...)
io.Copy(w, mreader)
Close each open file
One downside I can think of with that is that it forces you to open all the files and keep them open for the duration of the operation; under high load, large file sizes, and a high chunks-per-file factor, that could lead to your process exceeding its open file descriptor limit.
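A sketch of that variant, reusing the hypothetical chunkPaths helper from the sketch above (same imports):

func serveCompositeMulti(w http.ResponseWriter, r *http.Request) {
	paths, err := chunkPaths(r.URL.Query().Get("file"))
	if err != nil {
		http.Error(w, "file not found", http.StatusNotFound)
		return
	}
	var readers []io.Reader
	var files []*os.File // kept so every chunk can be closed afterwards
	defer func() {
		for _, f := range files {
			f.Close()
		}
	}()
	for _, p := range paths {
		f, err := os.Open(p) // every chunk stays open until the copy finishes
		if err != nil {
			http.Error(w, "file not found", http.StatusNotFound)
			return
		}
		files = append(files, f)
		readers = append(readers, f)
	}
	w.Header().Set("Content-Type", "application/octet-stream")
	io.Copy(w, io.MultiReader(readers...)) // one reader presenting the chunks in sequence
}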
I am rewriting the problem since, as far as I can see, it was not clearly understood. I am implementing my own shell in C, which needs to support all the commands the original one does.
The problem is to execute all existing UNIX bash commands in C without using the execvp() or system() functions, which already let you do that easily.
To do that, I need to search all the required directories, which may contain any kind of UNIX command. I just want to know:
Can I really be sure that I support all possible UNIX commands in any distribution once I have checked all directories in my PATH environment variable? (which is /bin/, /usr/bin/, /usr/local/bin on my machine)
I have also found a function called realpath(), which returns the full path of a given file. But unfortunately, it returns (null) when I attempt to get the directory of the command entered in my own shell.
What else would you suggest to solve this problem? As a last resort, does it make sense to search the whole computer recursively from the root to find the command entered?
If anything is unclear, please let me know and I will clarify. I would be very thankful if you could answer with a piece of example code, and clear the [on hold] tag on the question if you think it is clearly described now.
Thanks in advance!
It is true that a UNIX executable can be absolutely anywhere, but in the context of a homework assignment it doesn't make sense to search the entire filesystem. What your instructor probably wants you to do is implement the functionality of execvp yourself, using execv.
What execvp does is, first, look to see if there is a slash in the command name. If there is, it passes the command and arguments directly to execv; it doesn't search. Otherwise, it iterates over the directories in PATH and checks whether the command is an executable in each one. Crucially, it does NOT scan the contents of each directory; not only would that be very slow, it wouldn't even work under some conditions (such as a directory with --x permissions). Instead, it blindly calls execv with the pathname "$dir/$cmd". If that works, execv doesn't return. If it didn't work, and errno is set to ENOENT, it goes on to try the next directory in the path.
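To make the loop concrete, here is a sketch of that lookup logic. It is written in Go only because the other examples in this document are; the C version is the same loop around execv. In a real shell you would fork first, since a successful exec replaces the calling process:

package main

import (
	"os"
	"strings"
	"syscall"
)

// tryExec mimics execvp: if the command name contains a slash, exec it
// directly; otherwise try "$dir/$cmd" for each directory in PATH.
// On success, syscall.Exec replaces the process image and never returns.
func tryExec(cmd string, argv []string) error {
	env := os.Environ()
	if strings.ContainsRune(cmd, '/') {
		return syscall.Exec(cmd, argv, env) // no search at all
	}
	var err error
	for _, dir := range strings.Split(os.Getenv("PATH"), ":") {
		if dir == "" {
			dir = "." // an empty PATH entry historically means the current directory
		}
		// Blindly attempt the exec; ENOENT just means "not in this
		// directory", so the loop moves on to the next one.
		err = syscall.Exec(dir+"/"+cmd, argv, env)
		if err != syscall.ENOENT {
			return err
		}
	}
	return err
}

func main() {
	tryExec("ls", []string{"ls", "-l"})
}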
First, note that realpath() doesn't search anything; it just determines the absolute path of a given file.
There is no well-defined set of "all possible UNIX commands" in the way you may think. Any executable file can be considered a UNIX command, and executables are not necessarily the files that have the x permission attached. Shell scripts may be executed by a command like sh myscript even if execute permission is not granted on them; only binaries need that permission to be executed natively. So there is no true criterion that can help you, and you may even have files that carry the x permission but are not executables!
A common convention is that executables are located in directories such as /bin, /usr/bin, /usr/local/bin, and the like. Your shell has an environment variable named PATH that contains the list of directories in which to search for a command you type freely on the command line.
Anyway, if you choose a criterion to make an exhaustive search by yourself, say, all files with the x permission, then you can use the find command, as in find some_starting_dir -perm +0111, to get all files that have the x permission somewhere under a directory (recent GNU find spells this -perm /111).
If you want to program it, you may use either the legacy readdir() function or the newer nftw() to do your own directory traversal. You will find many examples of these, even on SO; a sketch follows below.
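For illustration, a traversal with that criterion, written in Go (the document's other examples use Go; in C you would write the same thing with nftw()). It prints regular files with any execute bit set, matching the find command above:

package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

func main() {
	root := os.Args[1] // directory to search, passed on the command line
	// Walk the tree, printing regular files with any execute bit set
	// (the same criterion as find's -perm /111).
	filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return nil // skip unreadable entries instead of aborting
		}
		info, ierr := d.Info()
		if ierr == nil && info.Mode().IsRegular() && info.Mode().Perm()&0o111 != 0 {
			fmt.Println(path)
		}
		return nil
	})
}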
Here I have three ways to get an H264 file; like all forensic scientists, I am very curious about the differences between them:
1.
ffmpeg -i video.mp4 video.h264
2.
ffmpeg -i video.mp4 -vcodec copy -an -f h264 video.h264
3. Using the example "demuxing_decoding.c" provided on the ffmpeg official website:
http://ffmpeg.org/doxygen/trunk/demuxing_decoding_8c-example.html
Obviously, the first one does the transformation (transcoding) and the second one does the demuxing. They render different H264 files which, however, have similar file sizes (in my case, about 24 MB). Surprisingly, the third one, which is also supposed to do the demuxing job, renders an H264 file of 8.4 GB! Why?
What I really wonder is how the interiors of these three methods work. (The third one is already in source code, so it's quite easy to get an insight.) What about the first two commands? What APIs are called when executing them, in what sequence are those APIs called, and so on?
One thing that is also important to me: I have no idea how I can trace the execution routines of the ffmpeg command lines. I want to see what's going on behind the ffmpeg commands at the source-code level. Is it possible?
I appreciate any comment.
I've just got a NAS running ZFS and I'd like to preserve creation times when transferring files into it. Both Linux/ext4 (where the data is now) and ZFS store a creation time or birth time; in the case of ZFS it's even reported by the stat command. But I haven't been able to figure out how I can set the creation time of a file so that it mirrors the creation time in the original file system, unlike an ext4-to-ext4 transfer, where I can feed debugfs a script to set the file creation times.
Is there a tool similar to debugfs for ZFS?
PS. To explain better:
I have a USB drive attached to a Ubuntu 14.04 laptop. It holds a file system where I care about the creation date (birth date) of the individual files. I consult these creation timestamps often using a script based on debugfs, which reports it as crtime.
I want to move the data to a NAS box running ZFS, but the methods I know (scp -p -r, rsync -a, and tar, among others I've tried) preserve the modification time but not the creation time.
If I were moving to another ext4 file system I would solve the problem using the fantastic tool debugfs. Specifically I can make a list of (filename, crtime) pairs on the source fs (file system), then use debugfs -w on the target fs to read a script with lines of the form
set_inode_field filename crtime <value>
I've tested this and it works just fine.
But my target fs is not ext4 but ZFS and although debugfs runs on the target machine, it is entirely useless there. It doesn't even recognize the fs. Another debug tool that lets you alter timestamps by editing an inode directly is fsdb; it too runs on the target machine, but again I can't seem to get it to recognize a ZFS file system.
I'm told by the folks who sold me the NAS box that debugfs and fsdb are not meant for ZFS filesystems, but they haven't been able to come up with an equivalent. So, after much googling and trying out things I finally decided to post a question here today, hoping someone might have the answer.
I'm surprised at how hard this is turning out to be. The question of how to replicate a dataset so all timestamps are identical seems quite natural from an archival point of view.
Indeed, neither fsdb nor debugfs is likely to be suitable for use with ZFS. What you might need to do instead is find an archive format that will preserve the crtime field that presumably is already set for the files on your fileserver. If there is a version of pax or another archiving tool for your system, it may be able to do this (cf. the -pe "preserve everything" flag for pax, which it seems in current versions does not preserve "everything", viz. it does not preserve crtime/birth_time). You will likely have more success finding an archiving application that is "crtime aware" than trying to set the creation times by hacking on the ZFS-based FreeBSD system with what are likely to be rudimentary tools.
You may be able to find more advanced tools on OpenSolaris-based systems like Illumos or SmartOS (e.g. mdb). Whether it would be possible to transfer your data to a ZFS dataset on one of those platforms and then combine the tools they have with, say, dtrace in order to rewrite the crtime fields is more of a theoretical question. If it worked, then you could export the pool and its datasets to FreeBSD; exporting a pool does seem to preserve the crtime time stamps. If you are able to preserve crtime while dumping your ext4 filesystem to a ZFSonLinux dataset on the same host (nb: I have not tested this), you could then use zfs send to transfer the whole filesystem to your NAS.
This coreutils bug report may shed some light on the state of user- and operating-system-level tools on Linux. Arguably, the filesystem-level crtime field of an inode should be difficult to change. While ZFS on FreeBSD "supports" crtime, the state of low-level filesystem debugging tools on FreeBSD might not have kept pace in earlier releases (cf. the zdb manual page). Are you sure you want to "set" (or reset) inode creation times? Or do you want to preserve them after they have been set on a system that already supports them?
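As an aside on the Linux side of this: recent kernels (4.11 and later) expose the birth time to userspace through the statx(2) system call, though only for reading; there is still no interface for setting it. A minimal Go sketch, assuming the golang.org/x/sys/unix package is available:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/sys/unix"
)

func main() {
	var stx unix.Statx_t
	// Request the birth time explicitly; the returned mask tells us
	// whether the filesystem actually provided it.
	if err := unix.Statx(unix.AT_FDCWD, os.Args[1], 0, unix.STATX_BTIME, &stx); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if stx.Mask&unix.STATX_BTIME == 0 {
		fmt.Println("filesystem did not report a birth time")
		return
	}
	fmt.Println("born:", time.Unix(stx.Btime.Sec, int64(stx.Btime.Nsec)))
}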
On a FreeBSD system, if you stat a file stored on a ZFS dataset, you will often notice that the crtime field of the file is set to the same time as the ctime field. This is likely because the application that wrote the file did not have access to the library and kernel functions required to set crtime at the time the file was "born" and its inode entries were created. There are examples of applications and libraries that try to preserve crtime at the application level, such as libarchive(3) (see also archive_entry_atime(3)), and that gracefully handle inode creation if the archive is restored on a filesystem that does not support the crtime field. But that might not be relevant in your case.
As you might imagine, there are a lot of applications that write files to filesystems ... especially with Unix/POSIX systems where "everything is a file". I'm not sure if older applications would need to be modified or recompiled to support those fields, or whether they would pick them up transparently from the host system's C libraries. Applications being used on older FreeBSD releases or on a Linux system without ext4 could be made to run in compatibility mode on an up to date OS, for example, but whether they would properly handle the time fields is a good question.
For me, running this little script as sh birthtime_test confirms that file creation times are "turned on" on my FreeBSD systems (all of which use ZFS post-v28, i.e. with feature flags):
#!/bin/sh
#birthtime_test
uname -r
if [ -f new_born ] ; then rm -f new_born ; fi
touch new_born
sleep 3
touch -a new_born
sleep 3
echo "Hello from new_born at:" >> new_born
echo `date` >> new_born
sleep 3
chmod o+w new_born
stat -f "Name:%t%N
Born:%t%SB
Access:%t%Sa
Modify:%t%Sm
Change:%t%Sc" new_born
cat new_born
Output:
9.2-RELEASE-p10
Name: new_born
Born: May 7 12:38:35 2015
Access: May 7 12:38:38 2015
Modify: May 7 12:38:41 2015
Change: May 7 12:38:44 2015
Hello from new_born at:
Thu May 7 12:38:41 EDT 2015
(NB: the chmod operation "changes" the file but does not "modify" its contents; modifying is what the echo command does by adding content to the file. See the touch manual page for explanations of the -m and -a flags.)
This is the oldest FreeBSD release I have access to right now. I'd be curious to know how far back in the release cycle FreeBSD is able to handle this (on ZFS or UFS2 file systems). I'm pretty sure it has been a feature for quite a while now. There are also OS X and Linux versions of ZFS that it would be useful to know about regarding this feature.
Just one more thing ...
Here is an especially nice feature for simple "forensics". Say we want to send our new_born file back to when time began, back to the leap second that never happened [1], when, in a moment of timeless time, Unix was born ... :-) We can just change the date using touch -d and everyone will think new_born is old and wise, right?
Nope:
~/ % touch -d "1970-01-01T00:00:01" new_born
~/ % stat -f "Name:%t%N
Born:%t%SB
Access:%t%Sa
Modify:%t%Sm
Change:%t%Sc" new_born
Name: new_born
Born: May 7 12:38:35 2015
Access: Jan 1 00:00:01 1970
Modify: Jan 1 00:00:01 1970
Change: May 7 13:29:37 2015
It's always more truthful to actually be as young as you look :-)
Time and Unix - a subject both practical and poetic: after all, what is "change"; and what does it mean to "modify" or "create" something? Thanks for your great post Silvio - I hope it lives on and gathers useful answers.
You can improve and generalize your question if you are more specific about your requirements for preserving, setting, and archiving file timestamp fields. Don't get me wrong: this is a very good question, and it will continue to get upvotes for a long time.
You might take a look at Dylan Leigh's presentation Forensic Timestamp Analysis of ZFS, or even contact Dylan for clues on how to access crtime information.
[1] There was a legend that claimed that in the beginning, seconds since long ago (SSL) was never less than date -u -j -f "%Y-%m-%d:%T" "1970-01-01:00:00:01" "+%s" because of a leap second ...
I am using the following command to create a new file cat15 with the cat command in UNIX:
# cat > cat15
This command creates a new file cat15 in the root directory, and whatever I type after this command is stored in the newly created file. But I am not able to exit from this "editor".
In other words, I am not getting the shell prompt symbol #.
The cat command reads from STDIN if you don't specify a filename, and it continues to do so until it receives an EOF or is killed. You can send an EOF and get your terminal back by typing Ctrl+D.
What people generally do is to either use
touch filename
or
echo -n > filename
to create an empty file. As Charles correctly notes below, echo -n is not always a good idea (though you can usually count on it under "popular" Linux distros); I'd strongly suggest just using touch.
If you just want to create an empty file, regardless of whether one existed or not, you can just use ">" like this:
> cat15
It will clobber anything that already exists by that name.