PCI File System+RDBMS Auditing/Scan - pci-dss

Which open-source tools are available for scanning systems like SQL databases and file systems for cardholder data? So far we've found PANBuster, 7seec, and PANscan, and we're wondering whether there's more out there (preferably open source).

Take a look at bulk_extractor, which can be downloaded from http://afflib.org. It will find credit card numbers, track 2 data, email addresses, and other information, and it will even find them inside compressed and double-compressed files.
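To give a feel for what these scanners do internally, here is a minimal Python sketch of the core technique: a regex for candidate PANs followed by a Luhn checksum filter. This is an illustration only, not how bulk_extractor is implemented; real scanners add context checks to cut false positives.

import re
import sys

# Candidate PANs: 13-19 digits, possibly separated by spaces or dashes.
PAN_RE = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")

def luhn_ok(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_file(path: str) -> None:
    with open(path, "rb") as f:
        text = f.read().decode("latin-1")  # lossless byte-to-char mapping
    for match in PAN_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_ok(digits):
            print(f"{path}: possible PAN ending in ...{digits[-4:]}")

if __name__ == "__main__":
    for p in sys.argv[1:]:
        scan_file(p)

Anything that passes the Luhn check is still only a candidate and needs manual review; the checksum filters out random digit runs, nothing more.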

Related

Where filesystems store their file metadata

So I know that a file is composed of its data and also metadata, which is information about it (usually the name, the type of the file, dates of creation and modification, etc.).
My question is where exactly that information is stored. I know it can be kept inside the file, in the directory, or in a database, but for the Windows, Linux, and Mac OS file systems I can't seem to find this information...
In the case of Windows and Mac, most of this information is proprietary.
For Windows I can say for certain that a close-enough reimplementation of the NTFS file system driver has been written for Linux. You can look into that; there is also some documentation, most of it written by Richard 'Flatcap' Russon (http://www.flatcap.org/ntfs/).
Documentation on the FAT filesystem was made public a long time ago, with the intent of providing ample information for developers and engineers working on flash drives and the like. (http://msdn.microsoft.com/en-us/library/windows/hardware/gg463080.aspx)
Documentation on the ext filesystems used by Linux distributions is easy to find on the web. (ext2: http://www.nongnu.org/ext2-doc/ext2.html)
As for the Mac, OS X uses Apple's own HFS+ format (its volume layout is documented in Apple Technote TN1150); it is proprietary, but not derived from ext.
All these formats have some sort of structure that holds the metadata; the file itself is just a stream of bytes somewhere on the physical drive. Most filesystems have a structure that stores at least the file's location (usually a starting cluster for each fragment of the file) and the file's size. The rest of the metadata is up to each file system to implement.
In the FAT filesystem, for example, there is a table for each directory, and each directory table stores metadata about the files it contains. There is also the file allocation table itself, which holds the cluster chains (the fragment locations) for every file in the filesystem.
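To make that concrete, here is a Python sketch of decoding one classic 32-byte FAT directory entry, following the field offsets in Microsoft's published FAT specification (long-file-name entries and FAT32's high cluster word are ignored for brevity):

import struct

# Classic 32-byte FAT directory entry:
#   bytes 0-10  : 8.3 file name (8 name chars + 3 extension chars)
#   byte  11    : attribute flags (0x10 = directory)
#   bytes 26-27 : starting cluster (low word)
#   bytes 28-31 : file size in bytes
def parse_fat_dirent(entry: bytes):
    if len(entry) != 32 or entry[0] in (0x00, 0xE5):  # free/deleted slot
        return None
    name = entry[0:8].decode("ascii", "replace").rstrip()
    ext = entry[8:11].decode("ascii", "replace").rstrip()
    attr, = struct.unpack_from("<B", entry, 11)
    start_cluster, = struct.unpack_from("<H", entry, 26)
    size, = struct.unpack_from("<I", entry, 28)
    return {
        "name": f"{name}.{ext}" if ext else name,
        "is_directory": bool(attr & 0x10),
        "start_cluster": start_cluster,
        "size": size,
    }

So for FAT, the name, attributes, location, and size all live in the directory entry; only the cluster chain lives in the FAT proper.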
The NTFS filesystem has a big table called the Master File Table (MFT) that holds a metadata record for every file in the filesystem, including the table itself. Each record holds all of the file's metadata, including the location on the physical drive of each of its fragments.
The directory structure, on the other hand, is held as data inside directory file records. NTFS has even more structures that hold information about files, such as the USN Journal and the Volume Bitmap.
To access the metadata kept by the file system you either have to parse the raw volume or use the functions exposed by the operating system's API, and the API generally doesn't give you all the metadata there is. For example, the Windows API gives you functions to iterate through the USN Journal to find information about a particular file, but you can't read a file's MFT attributes directly.
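The portable end of that API route looks like this in Python; os.stat() wraps the platform's native calls and exposes only the common subset of metadata, none of the filesystem-specific structures:

import os
import time

# os.stat() returns size, timestamps, and mode bits on every platform,
# but nothing like NTFS MFT attributes. The path is illustrative.
st = os.stat("example.txt")
print("size (bytes):", st.st_size)
print("modified:    ", time.ctime(st.st_mtime))
print("ctime:       ", time.ctime(st.st_ctime))  # metadata change on Unix, creation on Windows
print("mode bits:   ", oct(st.st_mode))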
Again, I have to stress that even with most of this documentation in hand, you're taking shots in the dark, since these proprietary file systems remain their owners' intellectual property. Some, if not most, of the documentation we have now comes from reverse engineering.
It depends on the filesystem. Take a look at http://lxr.free-electrons.com/source/fs/fat/fat.h for instance.
The Wikipedia comparison of file systems (https://en.wikipedia.org/wiki/Comparison_of_file_systems#Metadata) shows details on which file systems store which information as metadata.

Adding Capability to NFS Server - Compressing/Decompressing Stored/Retrieved Files

I need to build a custom SUSE Linux NFS server that compresses certain files as they are stored to disk and decompresses them as they are read back. This needs to be transparent to the remote users of the file system: if a user saves a 10 MB file named XYZZY.tif to /archiveDirectoryOnNFSServer and then does an ls -l on that mounted directory, they should see a 10 MB file called XYZZY.tif, even though the file actually stored on the NFS server's disk is XYZZY.tif.compressed and is 2 MB in size.
I'm expecting that I need to build this as a driver that sits below the NFS server software stack, but I'm having difficulty finding where to start. Are there existing NFS servers that provide this level of customization through APIs? Will I need to modify the source of an open source NFS server, and if so, is there one that would be easiest to start with, and is it structured modularly enough that this will be straightforward? I'm having difficulty locating relevant content on the internet, and any pointers will be greatly appreciated.
IMO that kind of functionality is absolutely not the NFS server's responsibility (an NFS server should, well, serve files over NFS) but the underlying filesystem's. There isn't that much choice in Linux-land, however; you could start by checking out fusecompress and btrfs.
This post is a bit old, so you may already be aware of some of the options here, but there are a couple of others (both server side).
http://zfsonlinux.org/
The ZFS filesystem has built-in compression. I typically use lzjb, as it is the fastest compression algorithm and does a reasonable job (MySQL databases get 2-4x compression; filesystems holding not-already-compressed data get around 4x). You have a choice of algorithm depending on how much CPU time you wish to spend on compression.
If you want different file types compressed differently, you might consider layering Gluster on top of a set of ZFS filesystems.
Gluster allows you to store certain file types (by extension) on different underlying filesystems.
In this case, you specify each underlying filesystem as a ZFS volume with the particular options you need. For example, .zip and .png files go on an uncompressed filesystem, while write-once, read-many data such as static HTML files might go on a higher compression level: you pay once when it's written, but reads should be really fast, since fewer disk blocks are scanned and decompression is very fast.
ZFS will manage the NFS mounts itself if you use it as your NFS server; you won't want this if you layer Gluster on top.
It's also easy to specify other attributes dynamically per filesystem (atime/noatime, the number of copies if you want redundancy beyond your normal RAID, and you can add SSDs as cache devices to get more performance).
With these solutions you still send the full uncompressed files over the wire, so they don't make up for network performance, but they give you a lot of options if you're trying to speed up disk I/O or get more utilization out of your drives.
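For reference, the ZFS properties mentioned above are set with the stock zfs(8) command line tool (as root, on an existing pool); a small Python wrapper sketch, where the dataset name tank/archive is just an example:

import subprocess

def zfs(*args):
    """Run a zfs(8) command and return its stdout."""
    out = subprocess.run(["zfs", *args], check=True,
                         capture_output=True, text=True)
    return out.stdout.strip()

# Create a compressed dataset and export it over NFS.
zfs("create", "tank/archive")
zfs("set", "compression=lzjb", "tank/archive")
zfs("set", "sharenfs=on", "tank/archive")

# How well is the stored data actually compressing?
print(zfs("get", "-H", "-o", "value", "compressratio", "tank/archive"))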

Is the SD card essential for the "database.db" file in my BlackBerry application?

I am working on a database-dependent BlackBerry application. On the click of a button I show some data on another screen, fetching it from a .db file stored on my SD card. Initially I ship that .db file with my assets.
Now I have seen some user reviews saying they are having problems using the SD card.
My question is: is it possible in my BlackBerry application to use a SQL database (.db file) without using the SD card?
Please let me know if it is possible...!
There are two separate filesystems supported: first, the internal device filesystem; second, the SD memory card filesystem.
The internal device filesystem does not depend on the SD card, and it is possible to create a file there. Note, however, that if your database consumes all the available internal memory, the device starts misbehaving.
Internal memory is an important resource for vital operating system activities, and when this kind of memory runs short, weird things happen: sudden restarts, freezes, and so on.
It's a little bit more complicated than just having write access to the filesystem. Only certain types of internal storage support SQLite; see "BlackBerry SQLite db creation: 'filesystem not ready'".

Legacy DOS system with flat-file data store (ISAM files)

I have a legacy system which used to run on DOS. It is an ERP system for retail stores (fashion). I think it stores its data in flat files.
I have files ending with *.KEY and other files ending with *.D00 (counting up).
I think the .KEY files hold the key information and the D-files hold the data... there are a lot of *.D77 files...
As far as my investigation goes, this is not dBase (.dbf) or FoxPro; it could be proprietary...
The company that wrote it is, of course, out of business, so there is no chance of support or any hints.
When I open these files in vim or other editors I get some binary characters and some text... I tried hex mode, too, but still nothing usable...
Is there any chance I can dump the data out... to CSV, ASCII, or XML?
I am pretty sure this is not a standard format. Can someone point me in a direction: how was data like this stored back in the day, and how could I make it readable?
Any tools, tips or tricks?
// EDIT
After some time I have made some progress and can now post some details which I did not know back then, and which made a good answer impossible.
I assume that the DOS system was written in Visual COBOL and that the files could be B-tree files stored in an ISAM format. The closest thing I can offer is that the format is possibly C-ISAM.
How can I access, view, or modify these files? C#, Java, Ruby... any new-age language would be cool; I am not sure I can handle COBOL... It would be great to have a converter or a viewer tool, preferably open source...
Hope this clarifies my question =)
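Without knowing the exact C-ISAM record layout, a common first step is to treat the .D files as fixed-length records (typical of ISAM data files) and eyeball the text fields. A throwaway exploratory Python sketch along those lines; the record length is a guess you tune per file:

import string
import sys

PRINTABLE = set(string.printable.encode())

def printable_runs(chunk: bytes, min_len: int = 4):
    """Yield runs of printable ASCII found inside a binary record."""
    run = bytearray()
    for b in chunk:
        if b in PRINTABLE and b not in b"\r\n\t\x0b\x0c":
            run.append(b)
        else:
            if len(run) >= min_len:
                yield run.decode("ascii")
            run.clear()
    if len(run) >= min_len:
        yield run.decode("ascii")

def dump(path: str, record_len: int, limit: int = 20):
    """Slice the file into fixed-length records and show their text fields.

    record_len is a guess: try divisors of the file size until the
    fields line up in columns from record to record.
    """
    with open(path, "rb") as f:
        data = f.read()
    print(f"{path}: {len(data)} bytes, "
          f"{len(data) % record_len} bytes left over at {record_len}/record")
    for i in range(min(limit, len(data) // record_len)):
        rec = data[i * record_len:(i + 1) * record_len]
        print(i, "|".join(printable_runs(rec)))

if __name__ == "__main__":
    dump(sys.argv[1], int(sys.argv[2]))

Once the record length is right, the text fields line up in columns and you can start mapping field offsets for a proper CSV export.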
OpenCOBOL has a very active user group. The language itself is free and runs on Linux and Windows, and perhaps Mac OS X. Have a chat with the user group there; they may be able to help.
Peachtree Accounting Software used those file extensions back in 1992.

Signable, streamable, "readable" archive format?

Is there any archive format that offers the following:
be digitally signable with a certificate from a trusted source like VeriSign, to prevent changes to the file (I am not referring to read-only; rather, if the file is changed it should no longer verify as signed, telling the user this is not the original file)
be streamable: able to be opened even when not all of the content has been transferred yet (and not only strictly linearly)
be "readable": able to read the data without extracting it to a temporary folder (AFAIK, if you open a file in a zip archive it is extracted first, and this stays true even for zip-based formats like OOXML; this is not what I want)
be portable: support on at least Windows, Linux, and Mac OS X is a must, or at least planned future support
be free of patents and open source, preferably under a license that allows commercial use (the GPL's share-alike terms complicate use in commercial, closed-source products; BSD, on the other hand, permits it)
Note: though it may come in handy eventually, I cannot think right now of a scenario that would require both point 1 and point 2 simultaneously. Or let's leave it at: being able to check the signature once the whole file has been downloaded is enough.
I am not interested in:
being able to be compressed
being supported on legacy systems
Does any existing archive format fit this description (tar evolutions like DAR and pax come to mind)?
If there is, are there programming libraries available for the above-mentioned OSs?
If not, would it be hard to create such a thing?
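On point 3, it is worth noting that programmatic zip APIs can read a member as a stream without extracting it to disk; it is mostly GUI archivers that extract to a temp folder first. A minimal Python sketch (the archive and entry names are hypothetical):

import zipfile

# Read one member of a zip archive as a stream, without ever
# extracting it to a temporary folder.
with zipfile.ZipFile("container.zip") as archive:
    with archive.open("menu.html") as member:
        total = 0
        for chunk in iter(lambda: member.read(64 * 1024), b""):
            total += len(chunk)   # hand each chunk to a parser/player here
        print(f"streamed {total} bytes without touching a temp folder")

Plain zip is weaker on point 2, though: the central directory lives at the end of the file, so a partially transferred archive cannot be opened reliably.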
Usage scenario:
I want to use this to create a new media container.
Current media containers contain the audio, video and subtitle streams directly.
Matroska, currently the most advanced container, has supplementary features like attachments and menus.
The menu functionality, however, is barely implemented anywhere and very limited.
What I want to create is one level higher.
I want to create a file similar in a way to OOXML.
Also, all of the menuing should be done with web technologies like HTML5 and CSS (the <video> tag, as it is now, allows any kind of codec to be used).
Also, just like the holograms on DVDs that prove authenticity, I want to create a signable file.
Research notes:
Before asking this question I stumbled uppon this:
Whats the best way digitally sign a zip file for download using .Net
While detached signing would be feasible for the individual files contained in this archive, it is not an elegant solution for the archive file itself, and it is not end-user friendly. End users should be able to double-click the file to open it in a media player like VLC and see a message that the file is legit (just like the browser indication of whether a page was transmitted over SSL/HTTPS).
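For completeness, here is a minimal sketch of the detached-signature approach using the third-party cryptography package; the ad hoc Ed25519 key pair stands in for a CA-issued certificate like one from VeriSign:

from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)
from cryptography.exceptions import InvalidSignature

# Publisher side: sign the archive bytes, ship the signature alongside.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

archive_bytes = open("container.zip", "rb").read()
signature = private_key.sign(archive_bytes)

# Player side: verify before showing the "file is legit" message.
try:
    public_key.verify(signature, archive_bytes)
    print("signature valid: original file")
except InvalidSignature:
    print("file was modified: do not trust")

A real deployment would distribute the public key inside an X.509 certificate chained to a trusted CA rather than generating it ad hoc, and would embed the signature in the container rather than shipping a separate file.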
EDIT: clarified point 5
EDIT 2: added a note to clarify point 1 and 2
EDIT 3: added usage scenario
EDIT 4: added research notes section
P.S.: This is my first question on StackOverflow
I doubt that you will find such a format out of the box. I can see how such a solution could be built with the help of our SolFS, but SolFS doesn't have built-in signing (though you can add signing easily).
