Apologies for the title; I'm on a Mac and I'm trying to encode a zip file so that its file system is like that of a Windows machine. Is this possible? If so, how?
For example:
I'm currently using 7z on Windows and on Mac to zip up the same files into the same archive. I then use WinMerge, for example, to compare the two zips, and it's clear that they differ, even though they were created with the same (ported) program, just on different systems.
Here's a screenshot: WinMerge of the two zips http://f.cl.ly/items/2B2t2E2X1f2L3q082m3B/Untitled.png
And here are two zips, one encoded on Windows: http://cl.ly/1O06420e0H1M and here's the OSX one: http://cl.ly/2H0X2P2i0f3P
There is no file system in zip files (or to put it another way, zip files have their own, portable, zip filesystem).
There could be character encoding issues if you use non-ASCII file names, but that's another story.
I have a Chrome extension that works with my exe file. I want to deliver just one exe file to my client. I tried converting the zip file to hex, but then I get a string with 25 thousand lines; I don't think that's the right way to do it.
How can I deliver my zip file with my exe?
What you are trying to do is definitely doable; I've seen it done many times.
If you have MinGW installed, you can use the xxd tool, which will do the trick for you.
xxd -i your_zip_filename embedded_zip_data.h
Now you simply add #include "embedded_zip_data.h" to your source code and the archive will be right there in the application's data.
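As a minimal sketch of how the generated header might be used: note that xxd -i derives the array and length symbol names from the input file name, so your_zip_filename and your_zip_filename_len below are placeholders that will differ in your build.

#include <stdio.h>
#include "embedded_zip_data.h"  /* generated by: xxd -i your_zip_filename embedded_zip_data.h */

/* xxd -i defines something like:
 *   unsigned char your_zip_filename[]   = { 0x50, 0x4b, ... };
 *   unsigned int  your_zip_filename_len = ...;
 * The symbol names are derived from the input file name. */

int main(void)
{
    /* Example use: write the embedded archive back out to disk. */
    FILE *out = fopen("bundled.zip", "wb");
    if (out == NULL)
        return 1;
    fwrite(your_zip_filename, 1, your_zip_filename_len, out);
    fclose(out);
    return 0;
}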
I have written a Java program to zip files, and I unzip them with a C program called JUnzip:
https://github.com/jokkebk/JUnzip. I'm able to unzip the resulting file with the 7-Zip file extractor, but when I use the C JUnzip, it doesn't unzip.
However, when I unzip a file that was zipped with a normal file compressor, I'm able to unzip it using the same JUnzip library.
Author of the JUnzip library here, just came across this question. Did I understand correctly that .zip files made by your program unzip correctly with 7zip, but not with JUnzip, whereas JUnzip seems to be able to uncompress .zip files made by others?
Without further info, it's hard to say whether there's a .zip feature you are using that JUnzip does not support, or whether there's a bug in JUnzip. One possible reason is that the library and junzip.exe only support a limited set of compression methods, namely Deflate, which is handled by the zlib library it uses. The code base is rather small, so you could probably add a few debug statements to see where it goes wrong.
You can check out https://codeandlife.com/2014/01/01/unzip-library-for-c/ for some details regarding JUnzip.
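If it helps with the debugging, here is a rough standalone C sketch (not part of JUnzip) that reads the first local file header of a .zip and prints its compression method, so you can see whether the Java side is actually writing Deflate (method 8) or something else such as Stored (method 0). It assumes the archive starts with a local file header, which is the case for typical zips.

#include <stdio.h>

int main(int argc, char **argv)
{
    unsigned char hdr[10];
    FILE *f;

    if (argc < 2)
        return 1;
    f = fopen(argv[1], "rb");
    if (f == NULL || fread(hdr, 1, sizeof hdr, f) != sizeof hdr)
        return 1;
    fclose(f);

    /* A zip normally starts with a local file header: "PK\3\4" */
    if (hdr[0] != 'P' || hdr[1] != 'K' || hdr[2] != 3 || hdr[3] != 4) {
        printf("no local file header at offset 0\n");
        return 1;
    }

    /* Compression method is a little-endian 16-bit field at offset 8 */
    printf("compression method: %d (0 = stored, 8 = deflate)\n",
           hdr[8] | (hdr[9] << 8));
    return 0;
}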
I am a bit confused about .bin files. On Linux we normally work with ELF and .ko files when upgrading a box or copying software onto it. But when upgrading the NAND flash in a router or other products from the big networking vendors, why is a .bin file always preferred? Is it some kind of merged image of all the OS-related files? Is it possible to see the contents of a .bin file, and how do you work with one? Is it something like the contents of a BootROM? How is it prepared, and how do we create and test one? What support does Linux have for this? Are there any historical reasons behind it?
Speaking of routers, those files are usually just snapshots of a router's flash memory, probably compressed and with some headers added. Typical contents are a compressed squashfs image or simply a gzip'ed snapshot of memory.
There is no such thing as a .bin format; it's just a custom array of bytes, and every vendor interprets it in some vendor-specific way. Basically, this extension means "it's not your business what's in the file; our device/software will handle it". You can try to identify (think: reverse-engineer) what's actually in those files by using the file utility, or by looking at them in a hex editor and trying to guess what's going on.
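As a rough illustration of that guessing game, a small C sketch like the one below checks the start of a .bin for a couple of well-known magic values (a tiny subset of what the file utility does). Real firmware images often have a vendor header first, so a miss at offset 0 proves nothing.

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    unsigned char buf[4];
    FILE *f;

    if (argc < 2)
        return 1;
    f = fopen(argv[1], "rb");
    if (f == NULL || fread(buf, 1, sizeof buf, f) != sizeof buf)
        return 1;
    fclose(f);

    if (buf[0] == 0x1f && buf[1] == 0x8b)
        printf("looks like gzip-compressed data\n");
    else if (memcmp(buf, "hsqs", 4) == 0 || memcmp(buf, "sqsh", 4) == 0)
        printf("looks like a squashfs image\n");
    else
        printf("no well-known magic at offset 0; try file or a hex editor\n");
    return 0;
}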
I am working on an OS-independent file manager, and I divide files into groups, usually based on the extension. On Linux, I check whether a file has executable permissions, and if it does, I add it to the executables group.
This works great on Windows or Linux alone, but it doesn't work so well when you combine them. For example, when running it on Linux and exploring a Windows-mounted drive, all the files appear to be executable. I am trying to find a way to ignore those files and not add them to the executables group.
My code (on Linux) uses stat:
#ifndef WINDOWS
stat(ep->d_name, &buf);
....
if(!files_list[i].is_dir && buf.st_mode & 0111)
files_list[i].is_exe=1;
#endif
The first part of the answer is to find out what filesystem the file lives on, using the st_dev field of the stat information for the file. (You can also do this by checking the file path, but then you have to check every path element for symbolic links.)
You can then cross-reference the st_dev field with the mount table in /proc/mounts using getmntent_r(). There's an example of that in a previous answer. The mnt_type field will give you the text of the filesystem type, and you'll need to compare the string with a list of Windows filesystems.
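A minimal sketch of that lookup, assuming a Linux system with /proc/mounts and glibc's getmntent_r(): stat() the file, then stat() each mount point and return the mnt_type of the entry whose device matches. Error handling is kept to a minimum, and the caller would still compare the returned type against a list such as ntfs, vfat, or fuseblk.

#define _GNU_SOURCE         /* for getmntent_r() */
#include <mntent.h>
#include <stdio.h>
#include <sys/stat.h>

/* Returns 0 and copies the filesystem type into 'type' on success. */
static int fs_type_of(const char *path, char *type, size_t typelen)
{
    struct stat target, mp;
    struct mntent ent;
    char buf[4096];
    FILE *mounts;
    int found = -1;

    if (stat(path, &target) != 0)
        return -1;
    mounts = setmntent("/proc/mounts", "r");
    if (mounts == NULL)
        return -1;

    while (getmntent_r(mounts, &ent, buf, sizeof buf) != NULL) {
        /* The mount point whose device matches the file's st_dev
         * is the filesystem the file lives on. */
        if (stat(ent.mnt_dir, &mp) == 0 && mp.st_dev == target.st_dev) {
            snprintf(type, typelen, "%s", ent.mnt_type);
            found = 0;
            break;
        }
    }
    endmntent(mounts);
    return found;
}

int main(int argc, char **argv)
{
    char type[64];
    if (argc > 1 && fs_type_of(argv[1], type, sizeof type) == 0)
        printf("%s is on a %s filesystem\n", argv[1], type);
    return 0;
}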
Once you've found the filesystem, the only way to identify an executable is by heuristics. As other people have suggested, you can look at the file extension for Windows executables, and look at the initial bytes of the file for Linux executables. Don't forget executable scripts with the #! prefix, and you may need to read into a Jar file to find out if it contains an executable static main() method.
If you are browsing Windows files then you need to apply Windows rules for whether or not a file is executable. If the file extension is .EXE, .COM, .BAT, or .CMD then it is executable. If you want a more complete list then you should check MSDN. Note that it is possible to add registry entries on a machine that makes any extension you want to be considered executable, but it is best to ignore that kind of thing when you are browsing a drive from the network.
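Here is one way that check might look in C, as a minimal sketch; the extension list is deliberately short and can be extended (for example with the entries from PATHEXT).

#include <stdio.h>
#include <string.h>
#include <strings.h>    /* strcasecmp */

static int is_windows_executable(const char *name)
{
    static const char *exts[] = { ".exe", ".com", ".bat", ".cmd" };
    const char *dot = strrchr(name, '.');
    size_t i;

    if (dot == NULL)
        return 0;
    for (i = 0; i < sizeof exts / sizeof exts[0]; i++)
        if (strcasecmp(dot, exts[i]) == 0)   /* case-insensitive match */
            return 1;
    return 0;
}

int main(void)
{
    /* Prints: 1 0 */
    printf("%d %d\n", is_windows_executable("SETUP.EXE"),
                      is_windows_executable("readme.txt"));
    return 0;
}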
The fact is that you are fighting an uphill battle. The reason all the files have executable permissions is that the Windows filesystem driver on Linux allows you to specify that as a mount option. This masks whether or not any files are Linux executables, for instance.
However, you could look into the file header for EVERY file and see if it is a Linux ELF executable (just like the Linux file command does).
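A sketch of that header check, covering only the ELF magic and #! scripts (again, just a tiny subset of what file does):

#include <stdio.h>
#include <string.h>

static int looks_like_linux_executable(const char *path)
{
    unsigned char buf[4];
    size_t n;
    FILE *f = fopen(path, "rb");

    if (f == NULL)
        return 0;
    n = fread(buf, 1, sizeof buf, f);
    fclose(f);

    if (n == 4 && memcmp(buf, "\x7f" "ELF", 4) == 0)
        return 1;                       /* ELF binary */
    if (n >= 2 && buf[0] == '#' && buf[1] == '!')
        return 1;                       /* interpreter script */
    return 0;
}

int main(int argc, char **argv)
{
    if (argc > 1)
        printf("%s: %d\n", argv[1], looks_like_linux_executable(argv[1]));
    return 0;
}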
It might be helpful to start by checking all the information about mounted filesystems so that you know what you are dealing with. For instance, do you have a CIFS filesystem mounted that is actually a Linux filesystem served up by SAMBA? If you enumerate every bit of information available about the mounted filesystem plus the complete set of stat info, you can probably identify combinations that act as fingerprints of the different scenarios.
Another option I could imagine is to call the file utility and rely on its output (maybe it's enough to grep for the words executable / script). This utility also exists for, or can be compiled on, Windows (basically it just checks for some magic bytes in the files).
I have a database file with a .DB file extension. I have been googling, and it looks like SQLite. I tried to connect to it using the SQLite and SQLite3 drivers and I am getting the error "File is encrypted or not a database".
So I don't know whether the file is encrypted or simply isn't an SQLite database. Are there any other formats that the .DB extension could be? How do I find out whether the file is encrypted?
I tried to open it in a text editor and it is mostly a mess of characters, with the occasional visible word. I have uploaded the file here for a closer look: http://cl.ly/3k0E01373r3v182a3p1o
Thank you for your hints and ideas on what to do and how to work with this file.
Marco Pontello's TrID is a great way to determine the type of any file.
TrID is simple to use. Just run TrID and point it to the file to be analyzed. The file will be read and compared with the definitions in the database. Results are presented in order of highest probability.
Just download the executable and the latest definitions file into the same directory and then run TrID:
trid.exe "path/to/file.xyz"
It will output a list of possible file types for the file with a confidence rating; for example, TrID will identify an SQLite database file.
There's also a GUI version called TrIDNet.
If you're on a Unix-like platform (Mac OS X, Linux, etc), you could try running file myfile.db to see if that can figure out what type of file it is. The file utility will inspect the beginning of the file, looking for any clues like magic numbers, headers, and so on to determine the type of the file.
Look at the first 30 bytes of the file (open it in Notepad, Notepad++ or another simple text viewer). There's usually some kind of tag or extension name in there.
Both SQLite 2 and SQLite 3 have a very clear message: "SQLite format 3" for SQLite 3 (obviously) and "This file contains an SQLite 2.1 database" for SQLite 2.
Note that encrypted SQLite databases don't have a header like that, since the entire file is encrypted.
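If you want to script the check, here is a quick sketch in C that reads the start of the file and looks for either header string; anything encrypted (or simply not SQLite) just won't match.

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char hdr[64] = { 0 };
    FILE *f;

    if (argc < 2)
        return 1;
    f = fopen(argv[1], "rb");
    if (f == NULL || fread(hdr, 1, sizeof hdr - 1, f) == 0)
        return 1;
    fclose(f);

    if (memcmp(hdr, "SQLite format 3", 15) == 0)
        printf("SQLite 3 database\n");
    else if (strstr(hdr, "SQLite 2") != NULL)
        printf("SQLite 2 database\n");
    else
        printf("no SQLite header found (encrypted, or not SQLite at all)\n");
    return 0;
}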
On a Unix-like system (or Cygwin under Windows), the strings utility will search a file for strings, and print them to stdout. Might help you narrow the field.
There are a lot of programs besides database programs that use a "db" extension, including
ArcView Object Database File (ESRI)
MultiEdit
Netscape
Palm
and so on. Google "file extensions" for some sites that catalog file extensions and the programs that use them.
There's no conclusive way to know, because SQLite encryption covers the entire database file, including the header.
Further, it doesn't make much difference to you, except for the error text you might show a user if you're prompting them for a password.