Obfuscate HTA files

The heading says it all. I have HTA files and I need to obfuscate them. There are no legal implications, but I have a troublesome customer who can tweak the code and create havoc.

You can compress and obfuscate the JavaScript using one of the many JavaScript compressors. This won't prevent any determined attack on the code, but may prevent casual tweaking.

I'd use Microsoft Script Encoder for this purpose:
http://www.microsoft.com/download/en/details.aspx?DisplayLang=en&id=3375
Use it like this:
"Screnc /e htm test.hta test-encrypted.hta"
Make sure you don't forget to mark the .HTA file with the start-encode marker (everything above the marker is left unencoded). For VBScript the marker is:
'**Start Encode**
and for JScript/JavaScript it is:
//**Start Encode**

Our JavaScript obfuscator renames identifiers, strips comments and formatting, and can encrypt string literals.
As everyone points out, yes, a determined reverse engineer can undo it, but the point is that he really has to make an effort. If the application is big, he has to make a correspondingly big effort.
The link addresses why the Microsoft Script Encoder is a bad idea.

Related

Readable text in disassembled code

Is there any widely used procedure for hiding readable strings? After debugging my code I found a lot of plain text. I could use some simple encryption (a Caesar cipher, etc.), but such a solution would totally slow down my code. Any ideas? Thanks for the help.
No, there is no widely used method for hiding referenced strings.
At some point an accessed string has to be decrypted, and this reveals the key/method, so your decryption becomes just obfuscation. If somebody wants to read all your referenced strings, he could easily write a script to convert them all to readable form.
I can't think of any reason to obfuscate strings like that. They are only visible to someone who analyses your executable, and those people would also be capable of reverse engineering your deobfuscation and applying it to all strings.
If secrecy of strings is vital to the security of your application, you have to rethink that.
Side note: there is no way that deciphering strings in C will slow down your application... unless your application is full of strings and you do something very inefficient in the deciphering. Have you tested this?
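If you still want light obfuscation despite all that, a single-byte XOR pass is about as cheap as it gets. Here is a minimal sketch in C (the key, the encoded bytes and the helper name are all made up for illustration); it only deters a casual run of strings over the binary, nothing more:

    #include <stdio.h>
    #include <string.h>

    /* XOR each byte with a fixed key; running it twice restores the original. */
    static void xor_buf(char *s, size_t len, unsigned char key)
    {
        for (size_t i = 0; i < len; i++)
            s[i] ^= key;
    }

    int main(void)
    {
        /* "secret message" XORed with 0x5A ahead of time (hypothetical data). */
        char hidden[] = { 0x29, 0x3f, 0x39, 0x28, 0x3f, 0x2e, 0x7a,
                          0x37, 0x3f, 0x29, 0x29, 0x3b, 0x3d, 0x3f, 0x00 };

        xor_buf(hidden, strlen(hidden), 0x5A);   /* decode just before use */
        printf("%s\n", hidden);                  /* prints: secret message */

        xor_buf(hidden, strlen(hidden), 0x5A);   /* re-encode when done, if you care */
        return 0;
    }

The decode loop is a handful of instructions per string, which is why the slowdown mentioned in the question is a non-issue; it is also why anyone with a debugger will defeat it in minutes.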

Parsing: library functions, FSM, explode() or lex/yacc?

When I have to parse text (e.g. config files or other rather simple/descriptive languages), there are several solutions that come to my mind:
using library functions, e.g. strtok(), sscanf()
a finite state machine which processes one char at a time, tokenizing and parsing
using the explode() function I once wrote out of pure boredom
using lex/yacc (read: flex/bison) to generate an appropriate parser
I don't like the "library functions" approach. It feels clumsy and awkward. explode(), while it doesn't take much new code, feels even more blown up. And flex/bison often seems like sheer overkill.
I usually implement a FSM, but at the same time I already feel sorry for the poor guy that may have to maintain my code at a later point.
Hence my question:
What is the best way to parse relatively simple text files?
Does it matter at all?
Is there a commonly agreed-upon approach?
I'm going to break the rules a bit and answer your questions out of order.
Is there a commonly agreed-upon approach?
Absolutely not. IMHO the solution you choose should depend on (to name a few) your text, your timeframe, your experience, even your personality. If the text is simple enough to make flex and bison overkill, maybe C is itself overkill. Is it more important to be fast, or robust? Does it need to be maintained, or can it start quick and dirty? Are you a passionate C user, or can you be enticed away with the right language features? &c., &c.
Does it matter at all?
Again, this is something only you can answer. If you're working closely with a team of people, with particular skills and abilities, and the parser is important and needs to be maintained, it sure does matter! If you're writing something "out of pure boredom," I would suggest that it doesn't matter at all, no. :-)
What is the best way to parse relatively simple text files?
Well, I don't know that you're going to like my answer. Maybe first read some of the other fine answers here.
No, really, go ahead. I'll wait.
Ah, you're back and relaxed. Let's ease into things, shall we?
Never write it in 'C' if you can do it in 'awk';
Never do it in 'awk' if 'sed' can handle it;
Never use 'sed' when 'tr' can do the job;
Never invoke 'tr' when 'cat' is sufficient;
Avoid using 'cat' whenever possible.
-- Taylor's Laws of Programming
If you're writing it in C, but C feels like the wrong tool...it really might be the wrong tool. awk or perl will likely do what you're trying to do without all the aggravation. You may even be able to do it with cut or something similar.
On the other hand, if you're writing it in C, you probably have a good reason to write it in C. Maybe your parser is a tiny part of a much larger system, which, for the sake of argument, is embedded, in a refrigerator, on the moon. Or maybe you loooove C. You may even hate awk and perl, heaven forfend.
If you don't hate awk and perl, you may want to embed them into your C program. This is doable, in principle--I've never done it myself. For awk, try libmawk. For perl, there are probably a few ways (TMTOWTDI). You can run perl separately using popen to start it, or you can actually embed a Perl interpreter into your C program--see man perlembed.
Anyhow, as I've said, "the best way to parse" entirely depends on you and your team, the problem space, and your approach to the issue. What I can offer is my opinion.
I'm going to assume that in your C-only solutions (library functions and FSM (considering your explode to essentially be a library function)) you've already done your best at isolating the relevant code, designing the code and files well, and so forth.
Even so, I'm going to recommend lex and yacc.
Library functions feel "clumsy and awkward." A state machine seems unmaintainable. But you say that lex and yacc feel like overkill.
I think you should approach your complaints differently. What you're really doing is specifying a FSM. However, you're also hiring someone to write and maintain it for you, thereby solving most of the maintainability problem. Overkill? Did I mention they'll work for free?
I suspect, but do not know, that the reason lex and yacc originally felt like overkill was that your config / simple files just felt too, well, simple. If I'm right (a big if), you may be able to do most of your work in the lexer. (It's even conceivable that you can do all of your work in the lexer, but I know nothing about your input.) If your input is not only simple but widespread, you may be able to find a lexer/parser combination freely available for what you need.
In short: if you can do this not in C, try something else. If you want C, use lex and yacc--they have a little overhead, but they're a very good solution.
If you can get it to work, I'd go with an FSM, but with a huge assist from Perl-compatible regular expressions. This library is easy to understand, and you ought to be able to trim back sufficient extraneous spaghetti to give your monster that aerodynamic flair to which all flying monsters aspire. That, and plenty of comments in well-structured spaghetti, ought to make your code-maintaining successor comfortable. (And, as I'm sure you know, that code-maintaining successor is you after six months, when you've moved on to something else and the details of this code have slipped your mind.)
My short answer is to use the right tool for the problem. If you have configuration files, use existing standards and formats, e.g. INI files, and parse them using Boost program_options.
If you enter the world of your own languages, use lex/yacc, since they provide you with the required features, but you have to consider the cost of maintaining the grammar and language implementation.
As a result, I would recommend further narrowing your problem scope to find the right tool.
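For a sense of scale, the "library functions" approach from the question really can stay tiny when the input is a flat key = value file. A minimal sketch, assuming a hypothetical config format with one key = value pair per line and # comments (the format and function names are mine, not anything from the question):

    #include <stdio.h>
    #include <string.h>

    /* Parse a flat "key = value" config file, skipping blank lines and # comments. */
    static int parse_config(const char *path)
    {
        FILE *fp = fopen(path, "r");
        if (!fp) {
            perror(path);
            return -1;
        }

        char line[256];
        while (fgets(line, sizeof line, fp)) {
            char key[64], value[128];

            /* Skip comments and blank/whitespace-only lines. */
            if (line[0] == '#' || strspn(line, " \t\r\n") == strlen(line))
                continue;

            /* %63[^ =] reads the key up to a space or '=', %127[^\n] the rest. */
            if (sscanf(line, " %63[^ =] = %127[^\n]", key, value) == 2)
                printf("key '%s' -> value '%s'\n", key, value);
            else
                fprintf(stderr, "skipping malformed line: %s", line);
        }

        fclose(fp);
        return 0;
    }

    int main(int argc, char **argv)
    {
        return argc > 1 ? parse_config(argv[1]) : 0;
    }

Whether this beats a small flex/bison grammar depends mostly on how far the format will grow; the moment you add nesting or quoting, the hand-rolled version starts accumulating exactly the spaghetti the answers above warn about.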

How do I explain my colleagues that filenames should not contain uppercase characters or special characters?

As far as I know, it's best practice to name files like this: file_name.txt - or, if you prefer, file-name.txt.
Now some people seem to like to name their files fileName.txt or FILENAME.TXT or "File Name.txt" - how do I explain to them that it's not a good idea? Why exactly is the aforementioned file naming best practice?
I only vaguely know that some file systems have trouble with uppercase, and that URIs should be lowercase only to avoid confusion (Wikipedia does have uppercase characters in its URLs, though, e.g. http://en.wikipedia.org/wiki/Sinusitis ).
W.
Well, one problem with uppercase letters is that some filesystems (like NTFS as used by Windows) treat them case-insensitively, so filename.txt and FILENAME.TXT are the same file, whereas other filesystems (ext, for example) treat them as two different files.
So, if you have some reference to a file that you called file.txt, and the reference actually points to File.txt, then on NTFS this would be no problem, but if you copy the files to a filesystem like ext, the reference would fail because that filesystem sees no such file as File.txt.
Because of this, it's best practice to always use lowercase letters.
If your colleagues are clueless, then you might be able to convince them that ALL CAPS takes more storage, uses more power, and is more likely to get deleted.
However, if they are as knowledgeable about filenames as you are, there's little you can do to get them to side with your preference.
In this situation, I like to take the absurdist approach, to help my colleagues want a reasonable approach. I suggest you start naming files with CrAzY cAsE. After a few directories of CrAzY cAsE, your ALL CAPS colleagues and your lowercase colleagues will come to you and ask you to stop. You then say: well, we should have a standard naming convention, and I don't care what the result is as long as we can agree on a standard. Then nudge the discussion toward lowercase names, and declare that the binding compromise.
Maximilian has a good point!
It's best practice to avoid the possibility of confusion (dissimilar names treated as identical), but I work in a place where various systems are used, from DOS to Windows to Unix, and I have never been able to convince those users that CAPS LOCK should be avoided.
Since I mostly deal with Unix-like systems, I would dearly love to legislate for lower-case everywhere, but I'm beating my head against a brick wall.
Best Practice is an alien concept to most computer users.
If your colleagues are programmers you might stand a chance.
The argument that all lowercase is the 'best practice' could just as easily be used to justify all CAPS as best practice.
I think it's fair to say that the vast majority of users don't operate in multi-platform environments, or at least not in a manner that's likely to cause them to encounter the issue raised here.
The issue is really only a problem when copying from a case-sensitive environment to a case-insensitive one where you have multiple case variants of a filename within a single directory (somewhat unlikely). The broken-file-reference argument doesn't convince me either, since variation in directory structure is likely to be an equal issue in such situations.
At the end of the day, in a corporate environment there should be a published standard for such things that everyone is at least encouraged to follow; that, for me, is best practice. Those who don't follow the standard have only themselves to blame.
The POSIX standard (IEEE Std 1003.1) defines a character set for portable filenames (it does, however, indicate that case should be preserved). At least it removes spaces and other "special" characters from the set.
The set is, from memory: A-Z a-z 0-9 . _ - (with the additional rule that a hyphen should not be the first character).
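If you would rather enforce this mechanically than argue about it, checking a name against the portable set is a few lines of C. A minimal sketch (the function name is mine, and it only checks characters, not path length limits):

    #include <stdio.h>
    #include <string.h>

    /* Return 1 if the name uses only POSIX portable filename characters
       (A-Z a-z 0-9 . _ -) and does not start with a hyphen, else 0. */
    static int is_portable_filename(const char *name)
    {
        static const char allowed[] =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "abcdefghijklmnopqrstuvwxyz"
            "0123456789._-";

        if (name[0] == '\0' || name[0] == '-')
            return 0;

        return strspn(name, allowed) == strlen(name);
    }

    int main(void)
    {
        const char *examples[] = { "file_name.txt", "File Name.txt", "FILENAME.TXT" };

        for (size_t i = 0; i < 3; i++)
            printf("%-15s -> %s\n", examples[i],
                   is_portable_filename(examples[i]) ? "portable" : "not portable");
        return 0;
    }

Note that FILENAME.TXT passes: the portable set is about which characters are safe, not about case, so the lowercase argument still has to be won separately (or added as an extra check).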

Why should I use a human readable file format?

Why should I use a human readable file format in preference to a binary one? Is there ever a situation when this isn't the case?
EDIT:
I did have this as an explanation when initially posting the question, but it's not so relevant now:
When answering this question I wanted to refer the asker to a standard SO answer on why using a human readable file format is a good idea. Then I searched for one and couldn't find one. So here's the question
It depends
The right answer is: it depends. If you are writing audio/video data, for instance, and you crowbar it into a human-readable format, it won't be very readable! And Word documents are the classic example where people have wished they were human readable, and so more flexible; by moving to XML, MS is going that way.
Much more important than binary or text is a standard or not a standard. If you use a standard format, then chances are you and the next guy won't have to write a parser, and that's a win for everyone.
Following this are some opinionated reasons why you might want to choose one over the other, if you have to write your own format (and parser).
Why use human readable?
The next guy. Consider the maintaining developer looking at your code 30 years or six months from now. Yes, he should have the source code. Yes, he should have the documents and the comments. But he quite likely won't. And having been that guy, and having had to rescue or convert old, extremely valuable data, I'll thank you for making it something I can just look at and understand.
Let me read AND WRITE it with my own tools. If I'm an emacs user I can use that. Or Vim, or notepad or ... Even if you've created great tools or libraries, they might not run on my platform, or even run at all any more. Also, I can then create new data with my tools.
The tax isn't that big - storage is free. Nearly always disc space is free. And if it isn't you'll know. Don't worry about a few angle brackets or commas, usually it won't make that much difference. Premature optimisation is the root of all evil. And if you are really worried just use a standard compression tool, and then you have a small human readable format - anyone can run unzip.
The tax isn't that big - computers are quick. It might be faster to parse binary. Until you need to add an extra column, or data type, or support both legacy and new files. (Though this is mitigated with Protocol Buffers.)
There are a lot of good formats out there. Even if you don't like XML. Try CSV. Or JSON. Or .properties. Or even XML. Lots of tools exist for parsing these already in lots of languages. And it only takes 5mins to write them again if mysteriously all the source code gets lost.
Diffs become easy. When you check in to version control it is much easier to see what has changed. And view it on the Web. Or your iPhone. Binary, you know something has changed, but you rely on the comments to tell you what.
Merges become easy. You still get questions on the web asking how to append one PDF to another. This doesn't happen with Text.
Easier to repair if corrupted. Try and repair a corrupt text document vs. a corrupt zip archive. Enough said.
Every language (and platform) can read or write it. Of course, binary is the native language for computers, so every language will support binary too. But a lot of the classic little tool scripting languages work a lot better with text data. I can't think of a language that works well with binary and not with text (assembler maybe) but not the other way round. And that means your programs can interact with other programs you haven't even thought of, or that were written 30 years before yours. There are reasons Unix was successful.
Why not, and use binary instead?
You might have a lot of data - terabytes maybe. And then a factor of 2 could really matter. But premature optimization is still the root of all evil. How about use a human one now, and convert later? It won't take much time.
Storage might be free but bandwidth isn't (Jon Skeet in comments). If you are throwing files around the network then size can really make a difference. Even bandwidth to and from disc can be a limiting factor.
Really performance intensive code. Binary can be seriously optimised. There is a reason databases don't normally have their own plain text format.
A binary format might be the standard. So use PNG, MP3 or MPEG. It makes the next guy's job easier (for at least the next 10 years).
There are lots of good binary formats out there. Some are global standards for that type of data. Or might be a standard for hardware devices. Some are standard serialization frameworks. A great example is Google Protocol Buffers. Another example: Bencode
Easier to embed binary. Some data already is binary and you need to embed it. This works naturally in binary file formats, but looks ugly and is very inefficient in human readable ones, and usually stops them being human readable.
Deliberate obscurity. Sometimes you don't want it obvious what your data is doing. Encryption is better than accidental security through obscurity, but if you are encrypting you might as well make it binary and be done with it.
Debatable
Easier to parse. People have claimed that both text and binary are easier to parse. Now, clearly the easiest case is when your language or library supports parsing for you, and this is true for some binary and some human-readable formats, so it doesn't really favour either. Binary formats can clearly be chosen so that they are easy to parse, but so can human-readable ones (think CSV or fixed width), so I think this point is moot. Some binary formats can just be dumped into memory and used as is, so this could be said to be the easiest to parse, especially if numbers (not just strings) are involved. However, I think most people would argue that human-readable parsing is easier to debug, as it is easier to see what is going on in the debugger (slightly).
Easier to control. Yes, it is more likely someone will mangle text data in their editor, or will moan when one Unicode format works and another doesn't. With binary data that is less likely. However, people and hardware can still mangle binary data. And you can (and should) specify a text encoding for human-readable data, either flexible or fixed.
At the end of the day, I don't think either can really claim an advantage here.
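To make the "dumped into memory" point above concrete, here is a minimal C sketch contrasting the two styles for one record (the struct, field names and file names are invented; a real binary format would also have to pin down endianness, padding and versioning):

    #include <stdio.h>

    /* A hypothetical record. With a raw binary dump, the struct layout IS the format.
       Error checks are omitted to keep the two code paths easy to compare. */
    struct reading {
        int    sensor_id;
        double value;
    };

    int main(void)
    {
        struct reading r = { 42, 3.5 };

        /* Binary: one write, one read, no parsing - but nothing a human can eyeball. */
        FILE *fb = fopen("reading.bin", "wb");
        fwrite(&r, sizeof r, 1, fb);
        fclose(fb);

        struct reading back;
        fb = fopen("reading.bin", "rb");
        fread(&back, sizeof back, 1, fb);
        fclose(fb);

        /* Text: has to be formatted and parsed, but "cat reading.txt" explains itself. */
        FILE *ft = fopen("reading.txt", "w");
        fprintf(ft, "sensor_id=%d value=%f\n", r.sensor_id, r.value);
        fclose(ft);

        struct reading parsed;
        ft = fopen("reading.txt", "r");
        fscanf(ft, "sensor_id=%d value=%lf", &parsed.sensor_id, &parsed.value);
        fclose(ft);

        printf("binary: %d %f | text: %d %f\n",
               back.sensor_id, back.value, parsed.sensor_id, parsed.value);
        return 0;
    }

The fread version only round-trips safely on a machine with the same struct layout, which is exactly the trade-off this section is pointing at.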
Anything else
Are you sure you really want a file? Have you considered a database? :-)
Credits
A lot of this answer is merging together stuff other people wrote in other answers (you can see them there). And especially big thanks to Jon Skeet for his comments (both here and offline) for suggesting ways it could be improved.
It entirely depends on the situation.
Benefits of a human readable format:
You can read it in its "native" format
You can write it yourself, e.g. for unit tests - or even for real content, depending on what it's for
Probable benefits of a binary format:
Easier to parse (in terms of code)
Faster to parse
More efficient in terms of space
Easier to control (any time you need text in there, you can ensure it's UTF-8 encoded, and length prefixed etc)
Easier to include opaque binary data efficiently (images, etc - with a text format you'd be getting into base64)
Don't forget that you can always implement a binary format but produce tools to convert to/from a human-readable format as well. That's what the Protocol Buffers framework does - it's actually pretty rare IME to need to parse a text version of a protocol buffer, but it's really handy to be able to write it out as text.
EDIT: Just in case this ends up being an accepted answer, you should also bear in mind the point made by starblue: Human readable forms are much better for diffing. I suspect it would be feasible to design a binary format which is appropriate for diffing (and where a human-readable diff could be generated) but out-of-the-box support from existing diff tools will be better for text.
Version control is easier with text formats, because changes can easily be viewed and merged.
Especially MS-Word is giving us grief in this respect.
Open format -- no binary bit juggling
Readability :)
Interchange across platforms
Debugging aid
Easily parsed (and easily converted to any format)
One important point: you write a parser once, but read the output many times. That kind of tilts the balance in favor of HRF.
A major reason is that if someone needs to read the data say, 30 years from now, human readable format can be figured out. Binary is much more difficult.
If you have large data sets that are binary by nature (e.g. images), they obviously can't be stored in any form other than binary. But even then, the metadata could (and should!) be human-readable.
There's something called The Art of Unix Programming.
I won't say it's good or bad, but it's fairly famous. It has a whole chapter called Textuality in which the author asserts that human readable file format are an important part of the Unix way of programming.
They open the possibility to be created/edited with tools other than the original ones. New and better tools can be developed by others, integration into third party applications becomes possible. Think about binary iCal files, for example - would the format have been a success?
Apart from that: human-readable files improve the ability to debug or, for the savvy user, at least to find the reason for an error.
Pros for binary:
fast to parse
generally smaller data
easy to write a parser for
Pros for human readable:
easier to understand while reading - no "field X is set to 4 487 which means that the reactor should be shut down NOW"
if you use something like XML, it is easy to write a tool that will parse any file
I have had to deal with both types. If you are sending data and you want to keep it small binary is good. If you expect people to read it then human readable is good.
Human-readable formats are generally somewhat self-documenting as well. And with binary it is very easy to make mistakes - and hard to spot them.
Editable
Readable (duh!)
Printable
Notepad and vi enabled
Most importantly, their function can be deduced from the content (well, mostly)
Because you are a human, and sooner or later you (or one of your customers) will be able to read the data.
We only use binary format if speed is an issue. And even then debugging is troublesome so we added a human readable equivalent.
Interoperability is the standard argument, i.e. a human-readable form is easier for developers of disparate systems to deal with so therefore confers some advantage.
Personally I think that is not true, and the performance benefits of binary files ought to beat that argument, especially if you publish your protocol. However, the ubiquity of XML/HTTP-based frameworks for machine interactions means that it is easier to adopt.
XML is way over-used.
Just a quick illustration where human-readable document format can be a better choice:
documents used for deploying application in production
We used to have our release notes in Word format, but that release-notes document had to be opened in various environments (Linux, Solaris) on the pre-production and production platforms.
It also had to be parsed in order to extract various data.
In the end, we switched to a wiki-based syntax, still displayed nicely in HTML through a wiki, but still usable as a simple text file in other situations.
As an adjunct to this, there are differing levels of human readability, and all are enhanced by using a good editor or viewer with code coloring, folding or navigation.
For example,
JSON is quite readable even in plaintext
XML has the angle bracket tax but is usable when using a good editor
INI is mostly human readable
CSV can be readable, but is best when loaded into a spreadsheet.
No one has said it, so I will: human-readability is not really a property of a file format (all files are binary, after all), but rather of a file format and viewer app combination.
So-called human-readable formats are all built on top of an additional abstraction layer: one of the existing text encodings. And viewer programs (often also serving as editors) that are capable of rendering these encodings in a form readable by humans are very common.
Text encoding standards are widespread and fairly mature, which means they're unlikely to evolve much in the foreseeable future.
Usually on top of the text encoding layer of the format we find a syntax layer that is reasonably intuitive given target user knowledge and cultural background.
Hence the benefits of "human-readable" formats:
Ubiquity of suitable viewers and editors.
Timelessness (given that cultural conventions won't change much).
Easiness-to-learn, read and modify.
Reliance on the extra abstraction layer makes text encoded files:
Space hungry.
Slower to process.
"Binary" files do not resort to the text-encoding abstraction layer as a base (or common denominator), but they may or may not use some sort of extra abstraction more suitable for their purpose, and hence they can be much better optimised for the specific task at hand, meaning:
Faster processing.
Smaller footprint.
On the other hand:
Viewers and editors are specific to a particular binary format, which makes interoperability harder.
Viewers for any given format are less widespread, because they are more specialised.
Formats might evolve significantly or go out of use over time: their main benefit is being very well suited to a particular task, and as the task or its requirements evolve, so does the format.
Take a moment and think about applications OTHER than web development.
The assumption that:
A) It has a meaning that is "obvious" in text format is false.
Things like control systems for a steel mill, or manufacturing plant don't typically have any advantage in being human readable. The software for those types of environments will typically have routines to display data in a graphically meaningful manner.
B) Outputting it in text is easier. Unnecessary conversions actually require more code and make a system LESS robust. The fact of the matter is, if you are NOT using a language which treats all variables as strings, then human-readable text is an extra conversion. I.e. extra code means more code to be verified and tested, and more opportunities to introduce errors into the application.
C) You have to parse it anyway. In many cases, for the DSP systems I've worked on (i.e. no human-readable interface to start with), data is streamed out of the system in uniformly sized packets. Logging the data for analysis and later processing is simply a matter of pointing to the beginning of a buffer and writing a multiple of the block size to the data-logger system. This allows me to analyse the data "untouched", exactly as the customer's system would see it, where, once again, converting it to a different format could introduce errors. Not only that, but if you only save the "converted" data, you may lose information in the translation that could help you diagnose a problem.
D) Text is a natural format for the data. No hardware I've ever seen uses a "TEXT" interface. (My first job out of college was writing a device driver for a line-scan camera.) The system built on top of it might, but not every system is a "PC".
For web pages, where the information has a "natural" meaning in text format, sure, knock yourself out. For processing source code it's a no-brainer, of course. But in the pervasive computing environments where even your refrigerator and TOOTHBRUSH are going to have a processor built in, not so much. Simply burdening these types of systems with the overhead of adding the ability to process text introduces unnecessary complexity. You're not going to link "printf" into the software for an 8-bit micro that controls a mouse. (And yeah, somebody has to write that software too.)
The world is not a black-and-white place where the only forms of computing that need to be considered are PCs and web servers.
Even on a PC, if I can load the data directly into a data structure using a single OS read call and be done with it without writing serialize and deserialize routines, that's fantastic: check a block's CRC, job done, on to the next problem.
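That "one read call, check a block's CRC" pattern is short enough to sketch. A minimal, hypothetical example in C (the record layout, the file name and the choice of CRC-32 are mine, and endianness/padding concerns are ignored for brevity):

    #include <stdio.h>
    #include <stdint.h>

    /* A hypothetical fixed-size record, stored on disk followed by its CRC-32. */
    struct packet {
        uint32_t sequence;
        float    samples[4];
    };

    /* Plain bitwise CRC-32 (polynomial 0xEDB88320), no lookup table needed. */
    static uint32_t crc32(const void *data, size_t len)
    {
        const unsigned char *p = data;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1));
        }
        return ~crc;
    }

    int main(void)
    {
        struct packet out = { 1, { 0.1f, 0.2f, 0.3f, 0.4f } };
        uint32_t crc = crc32(&out, sizeof out);

        /* Write: the block plus its checksum, no serialization code at all. */
        FILE *fp = fopen("packets.dat", "wb");
        fwrite(&out, sizeof out, 1, fp);
        fwrite(&crc, sizeof crc, 1, fp);
        fclose(fp);

        /* Read: one call straight into the structure, then verify. */
        struct packet in;
        uint32_t stored;
        fp = fopen("packets.dat", "rb");
        fread(&in, sizeof in, 1, fp);
        fread(&stored, sizeof stored, 1, fp);
        fclose(fp);

        printf("CRC %s, sequence %u\n",
               crc32(&in, sizeof in) == stored ? "ok" : "MISMATCH",
               (unsigned)in.sequence);
        return 0;
    }

The same record in a text format would need formatting and parsing code on both ends, which is exactly the extra code this answer objects to.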
Uhm… because human-readable file formats can be read by humans? Seems like a pretty good reason to me.
(Well, for configuration files it’s inevitable that they are read (and edited!) by humans. Files for persistent storage of some sort or the other don’t really need to be read or edited by humans.)
Why should I use a human readable file format in preference to a binary one? Is there ever a situation when this isn't the case?
Yes, compressed volumes (zip, jpeg, mp3, etc) would be suboptimal if they were human readable.
I guess it's not good in most situations, probably. I think the main reason for formats such as JSON and XML is web development, and general use over the web, where you need to be able to process data on the user side and you can't necessarily read binary. A good example of a bad case for a human-readable format would be anything non-textual, such as images, video or audio. I've noticed non-binary formats being used in web development where they don't make sense, and I feel guilty!
Often files become part of your human interface thus they should be human friendly (not programmer only)
The only time that I use a binary stream for files that aren't archives is when I want to conceal things from the casual observer. For instance, if I'm making temporary files that only my application should be editing, I'll use binary.
It's not an attempt to obfuscate; rather, it's just discouraging the user from editing the file by hand (which could break the application).
One instance where this would be a good idea is storing/saving running data about some game, i.e. so you can save your game and continue later. Other scenarios would be intermediate files, but those are typically binary/byte-compiled anyway.
Why should I use a human readable file format in preference to a binary one?
Depends on the content and context, i.e. where the data is coming from and going. If the data is typically written directly by a human, storing it in a format that can be manipulated through a text editor is a good idea. For example, program source code will normally be stored as human readable, with good reason. However, if we are archiving it, or sharing it using a version control system, our storage strategy will change.
The human-readable format is simpler to parse and debug if you have a problem with a field (for example, a field contains a number where the spec says it must be a string), and the human-readable format is also closer to the problem domain.
I prefer the binary format when there is a lot of data AND I'm sure I have the software to parse it :)
When reading Fielding's dissertation about REST, I really liked the concept of "architectural properties"; one that stuck was "visibility". That's what we're talking about here: being able to 'see' the data. Huge benefits when debugging the system.
One aspect that I find missing in the other answers: enforcing semantics.
From the moment you go human readable, you allow the silly Notepad user to create data to be fed into the system. There is no way to guarantee that this data makes sense, and no way to guarantee that the system will respond in a sensible way.
So in cases where you don't need to Notepad-inspect your data, and you want to enforce valid data (e.g. through an API) rather than validating it after the fact, you are better off avoiding human-readable data. If debuggability is an issue (and it most often is), inspection of the data can be done through the API too.
Human readable is not the same as easier to parse by machine.
Take human natural language as an example. :) Machine parsing of human language is still a pending problem to be fully solved.
So I agree with https://stackoverflow.com/a/714111/2727173 which has much deeper insight on this question.

What should I know before poking around an unknown archive file for things?

A game that I play stores all of its data in a .DAT file. There has been some work done by people in examining the file. There are also some existing tools, but I'm not sure about their current state. I think it would be fun to poke around in the data myself, but I've never tried to examine a file, much less anything like this before.
Is there anything I should know about examining a file format for data extraction purposes before I dive headfirst into this?
EDIT: I would like very general tips, as examining file formats seems interesting. I would like to be able to take File X and learn how to approach the problem of learning about it.
You'll definitely want a hex editor before you get too far. It will let you see the raw data as numbers instead of as large empty blocks in whatever font notepad is using (or whatever text editor).
Try opening it in any archive extractors you have (zip, 7z, rar, gz, tar, etc.) to see if it's just a renamed archive format (.PK3 is something like that).
Look for headers of known file formats somewhere within the file, which will help you discover where certain parts of the data are stored (e.g. search for the PNG signature bytes \x89PNG, or the IHDR chunk that follows them, to find any PNG files stored uncompressed within the archive).
If you do find where a certain piece of data is stored, take a note of its location and length, and see if you can find numbers equal to either of those values near the beginning of the file, which usually act as pointers to the actual data.
Sometimes you just have to guess, or intuit, what a certain value means, and if you're wrong, well, keep moving. There's not much you can do about it.
I have found that http://www.wotsit.org is particularly useful for known file type formats, for help finding headers within the .dat file.
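For the header-hunting step, a small scanner saves a lot of eyeballing in the hex editor. A minimal C sketch (the signature table is just a starter set of magic numbers I know offhand; extend it with whatever wotsit.org turns up):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* A few well-known magic numbers to hunt for inside an unknown container. */
    static const struct {
        const char *name;
        const char *sig;
        size_t      len;
    } magics[] = {
        { "ZIP local file header", "\x50\x4B\x03\x04",  4 },
        { "gzip (deflate) stream", "\x1F\x8B\x08",      3 },
        { "PNG image",             "\x89PNG\r\n\x1A\n", 8 },
        { "GIF image",             "GIF8",              4 },
    };

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s file.dat\n", argv[0]);
            return 1;
        }

        FILE *fp = fopen(argv[1], "rb");
        if (!fp) { perror(argv[1]); return 1; }

        /* Slurp the whole file; fine for a sketch, not for multi-gigabyte archives. */
        fseek(fp, 0, SEEK_END);
        long size = ftell(fp);
        fseek(fp, 0, SEEK_SET);

        unsigned char *data = malloc((size_t)size);
        if (!data || fread(data, 1, (size_t)size, fp) != (size_t)size) {
            fprintf(stderr, "read failed\n");
            return 1;
        }
        fclose(fp);

        for (long i = 0; i < size; i++)
            for (size_t m = 0; m < sizeof magics / sizeof magics[0]; m++)
                if (i + (long)magics[m].len <= size &&
                    memcmp(data + i, magics[m].sig, magics[m].len) == 0)
                    printf("0x%08lX  %s\n", (unsigned long)i, magics[m].name);

        free(data);
        return 0;
    }

A hit tells you both what is embedded and where, which is exactly the offset you then go looking for in any index or table of contents near the start of the file.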
Back up the file first. Once you've restricted the amount of damage you can do, just poke around as Ed suggested.
Looking at your rep level, I guess a basic primer on hexadecimal numbers, endianness, representations for various data types, and all that would be a bit superfluous. A good tool that can show the data in hex is of course essential, as is the ability to write quick scripts to test complex assumptions about the data's structure. All of these should be obvious to you, but might perhaps help someone else so I thought I'd mention them.
One of the best ways to attack an unknown file format, when you have some control over the contents, is to take a differential approach. Save a file, make a small and controlled change, and save again. Do a binary compare of the two files to find the difference - preferably using a tool that can detect inserts and deletions. If you're dealing with an encrypted file, a small change will trigger a massive difference. If it's just compressed, the difference will not be localized. And if the file format is trivial, a simple change in state will result in a simple change to the file.
The other thing is to look at some of the common compression techniques, notably zip and gzip, and learn their "signatures". Most of these formats are "self identifying" so when they start decompressing, they can do quick sanity checks that what they're working on is in a format they understand.
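The binary compare in that differential approach needs nothing fancy to get started. A minimal C sketch that reports byte-level differences between two saves (same-size files only, so it will not spot the inserts and deletions mentioned above):

    #include <stdio.h>

    /* Print offsets where two equally sized files differ (first 50 shown). */
    int main(int argc, char **argv)
    {
        if (argc < 3) {
            fprintf(stderr, "usage: %s save1.dat save2.dat\n", argv[0]);
            return 1;
        }

        FILE *a = fopen(argv[1], "rb");
        FILE *b = fopen(argv[2], "rb");
        if (!a || !b) { perror("fopen"); return 1; }

        long offset = 0, diffs = 0;
        int ca, cb;

        while ((ca = fgetc(a)) != EOF && (cb = fgetc(b)) != EOF) {
            if (ca != cb) {
                if (diffs < 50)                 /* don't flood the terminal */
                    printf("0x%08lX: %02X -> %02X\n", (unsigned long)offset, ca, cb);
                diffs++;
            }
            offset++;
        }

        printf("%ld differing byte(s): a handful suggests a simple format,\n"
               "differences everywhere suggest compression or encryption.\n", diffs);

        fclose(a);
        fclose(b);
        return 0;
    }

If the saves change size between runs, fall back to a real diff tool that detects inserts and deletions, as the answer suggests.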
Barring encryption, an archive file format is basically some kind of indexing mechanism (a directory of sorts) plus a way to locate those elements within the archive via pointers in the index.
With the ubiquity of the standard compression algorithms, it's mostly a matter of finding where those blocks start and trying to hunt down the index, or table of contents.
Some formats keep the index all in one spot (like a file system does); others simply precede each element within the archive with its identity information. But in the end, somewhere, there is information about offsets from one block to another, information about data types (for example, if they're storing GIF files, GIFs have a signature as well), etc.
Those are the patterns that you're trying to hunt down within the file.
It would be nice if you could somehow get your hands on two versions of data in the same format. For example, with a game, you might be able to get the initial version off the CD and a newer, patched version. These can really highlight the information you're looking for.
