Analyze EPS metadata - Distinguish between Freehand EPS and Illustrator EPS

I want to analyze a few thousand EPS files and sort them according to the application in which they were created (in my case Freehand or Illustrator).
I know that the information is stored in the file header (%%Creator), but how can I analyze it throughout all the files?
I am looking for a tool that analyzes the files and gives me a txt or csv file...
I already tried the following:
Windows Explorer --> shows lots of properties, but not the %%Creator
Adobe Bridge --> shows file metadata, but only for single files
Adobe Bridge, MetaData Script (see the Bridge section -- Extract Metadata) --> the script should export all metadata, but doesn't do its job for all files...
pdfinfo.exe --> part of Xpdf; I assumed that if it can analyze PDF files, it might also analyze EPS, but it doesn't...
Ghostscript --> I searched Google, but did not find a solution
ImageMagick --> I searched Google, but did not find a solution
ExifTool --> although it seems to be powerful, I could not manage to extract the intended information
I'd be grateful for any help!
Best regards
Hape

Write a simple script; here is a Python 2.x script, for example:
import glob

for fname in glob.glob('*.eps'):
    with open(fname) as fp:
        # Skip anything that doesn't start with a PostScript signature.
        line = fp.readline()
        if '%!PS' not in line:
            continue
        print "-- %s --" % fname
        # Print the DSC header comments (%%...) until the comment block ends.
        line = fp.readline().strip()
        while line.startswith("%"):
            if line.startswith("%%"):
                print line
            line = fp.readline().strip()
Edit: Updated the script to understand Windows-style EPS previews, and relaxed the metadata requirements to better match the spec.
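The preview-aware version isn't reproduced above, so here is a rough sketch of how that handling could look, keeping the Python 2 style of the script. It assumes the standard DOS EPS binary header (the magic bytes C5 D0 D3 C6, followed by a little-endian offset and length of the embedded PostScript section) and only reports the %%Creator line the question actually asks about:

import glob
import struct

DOS_EPS_MAGIC = b'\xc5\xd0\xd3\xc6'

for fname in glob.glob('*.eps'):
    with open(fname, 'rb') as fp:
        head = fp.read(4)
        if head == DOS_EPS_MAGIC:
            # DOS EPS binary header: after the magic bytes come the
            # little-endian offset and length of the PostScript section.
            ps_offset, ps_length = struct.unpack('<II', fp.read(8))
            fp.seek(ps_offset)
            data = fp.read(ps_length)
        else:
            fp.seek(0)
            data = fp.read()
    if not data.startswith(b'%!PS'):
        continue
    for line in data.splitlines():
        if line.startswith(b'%%Creator'):
            print "%s;%s" % (fname, line.strip())
            break

Printing "filename;creator" pairs means the output can be redirected straight into a csv-like file, which is what the question asks for.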

Related

How would I store different types of data in one file

I need to store data in a file in this format
word, audio, jpeg
How would I store all of that in one file? Is it even possible, or would I need to store links to other data files in place of the audio and jpeg? Would I need a custom file format?
1. Your own filetype
As mentioned by @Ken White, you would need to create your own custom file format for this sort of thing, which would then mean writing your own parser. This could be achieved in almost any language you wanted, but since you are planning on using the Word format, maybe C# would be best for you. However, this technique could be quite complicated and take a relatively large amount of time to thoroughly test your file compressor/decompressor, but it may be best depending on your needs.
2. Command line utilities
Another way to go about this would be to use a bash script to combine all of the files into one file, and then decompress it at the other end. For example the steps could involve:
Combine the files using the Windows copy or Linux cat command on the command line
Create a metadata file of your own that says how many files are in this custom file and how much space each one takes up (this could be a short XML or JSON file, for example...)
Use the Linux split command or install a Windows command-line file splitter program (here's just one example) to split the file back into whatever components made it up.
This way you only have to create a really small file type, and let the OS utilities handle the combining of them for you.
Example on Windows:
Copy all of the files in your current directory into one output file called 'file.custom'
copy /b * file.custom
Generate a custom metadata file describing the contents (e.g. get each file's size on disk; a C# example is linked here). This is roughly what I would do in JSON; SO formatting was being awkward, so here's a link (copy-paste it into an editor or online JSON viewer), and a quick sketch of generating such a file is also shown after these steps.
Use a Windows or Linux command-line tool to split each file back out to the exact length (and restore it to the exact name) specified in the JSON metadata file. (More info on splitting files in this post.)
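As a rough sketch of that metadata step (a hypothetical Python version; the same idea works in C#, and the file names and extensions below are placeholders):

import glob
import json
import os

# Record each component file's name and size so the combined blob
# can be split back apart later.
entries = []
for name in sorted(glob.glob('*.docx') + glob.glob('*.wav') + glob.glob('*.jpg')):
    entries.append({'name': name, 'bytes': os.path.getsize(name)})

with open('file.custom.meta.json', 'w') as fp:
    json.dump({'files': entries}, fp, indent=2)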
3. ZIP files
You could always store all of the files in a compressed ZIP file and then just use a ZIP compressor/expander as and when you like to retrieve any number of the file formats stored within; a minimal Python sketch follows the links below.
I found a couple of examples:
Combining multiple files into one ZIP file in only C# .net,
Unzipping ZIP files in C#
Zipping & Unzipping with only windows built-in utilities
Zipping & Unzipping in Linux command line
Good Zipping/Unzipping library in Java
Zipping/Unzipping in Python
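For instance, a rough Python sketch of this option using only the standard-library zipfile module (the file names are placeholders):

import zipfile

# Pack several files of mixed types into one archive.
with zipfile.ZipFile('bundle.zip', 'w', zipfile.ZIP_DEFLATED) as zf:
    for name in ['report.docx', 'clip.wav', 'photo.jpg']:
        zf.write(name)

# Later, pull everything back out.
with zipfile.ZipFile('bundle.zip') as zf:
    zf.extractall('unpacked')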

Visualizing A Directory's File Structure and File Contents

I am interested in creating a tool that can be run from the command line (pref. linux) and which will scour a select group of files (think .gitignore in reverse), parse those files, and create an image that looks like this Conceptboard. It doesn't need to look exactly like that, but I should be able to control graphical effects and layout of the boxed elements. Each boxed element should simply have the file name up top, and the contents of the file below.
My questions are:
What tools are available for this type of project?
Where would I logically start?
How long should I expect this to take me?
edit: graphviz could definitely work, but is it able to show file contents?
Graphviz can't directly show file contents, but I've often coupled it with code in java or python to build the appropriate .dot file, then run graphviz on that .dot file to produce the final graphic, something like this pseudocode:
open .dot text file and add graphviz header information
for each file in directory:
    save filename for future reference
    read and save file contents
    create node in .dot file formatted with filename at top and contents below
for each file in directory:
    parse file to locate links to other files
    if filename has been processed above, add a link line to .dot output file
add graphviz footer to .dot file
run DOT on .dot file
the final .dot file would look something like this, only much longer:
digraph fileStructure {
    node [shape=box, color=black, fontsize=14, fillcolor=white, style=filled]
    1 [label="filename\nfile contents"]
    2 [label="filename\nfile contents"]
    3 [label="filename\nfile contents"]
    1 -> 2
    1 -> 3
}
There are libraries for most major programming languages to simplify the process of creating .dot files and running graphviz, but it's not too hard to do directly.
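As a concrete illustration, here is a rough Python sketch of the pseudocode above, writing the .dot file by hand and then calling dot to render it (the directory name and the .py filter are placeholders, and the "link" detection is just a naive check for one file's name appearing in another):

import os
import subprocess

directory = 'src'
names = sorted(f for f in os.listdir(directory) if f.endswith('.py'))

# Read and save each file's contents up front.
contents = {}
for name in names:
    with open(os.path.join(directory, name)) as fp:
        contents[name] = fp.read()

with open('files.dot', 'w') as dot:
    dot.write('digraph fileStructure {\n')
    dot.write('    node [shape=box, color=black, fontsize=14, fillcolor=white, style=filled]\n')
    # One node per file: filename on top, contents below.
    for i, name in enumerate(names):
        body = contents[name].replace('\\', '\\\\').replace('"', '\\"').replace('\n', '\\l')
        dot.write('    %d [label="%s\\n%s"]\n' % (i, name, body))
    # Add an edge wherever one file mentions another file's name.
    for i, name in enumerate(names):
        for j, other in enumerate(names):
            if i != j and other in contents[name]:
                dot.write('    %d -> %d\n' % (i, j))
    dot.write('}\n')

# Render the graph (requires graphviz to be installed).
subprocess.call(['dot', '-Tpng', 'files.dot', '-o', 'files.png'])

Escaping is the fiddly part: backslashes, quotes and newlines in the file contents all have to be neutralized before they go into a dot label, which the sketch does only minimally.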
How long it would take depends on your skill level, but I wouldn't think this would take more than a few hours to complete.

Easy way to make a test PDF of a certain file size

I want to test the upload file size limit in an application, and it's a pain finding/making various PDFs of certain sizes to test/debug this. Anybody have a better way?
You can write a simple shell script that converts a set of images to PDF: How can I convert a series of images to a PDF from the command line on linux? and do it for 1, 2, 3, ..., all image files in a certain directory.
Creating a directory full of copies of a single image should be simple too; start with one image file of the desired size, e.g. 64KB.
# rough sketch - untested
END=5
for i in $(seq 1 $END); do cp ./image ./image_$i; done
for i in $(seq 1 $END); do convert $(seq -f "./image_%g" 1 $i) mydoc_$i.pdf; done
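If only the byte count matters for your upload test (not the document contents), another rough option is to pad a copy of any small base PDF out to the target size; most PDF readers ignore trailing bytes after the final %%EOF, though a strict validator might complain. A hypothetical Python sketch (base.pdf is a placeholder name):

import os
import shutil

def make_padded_pdf(base, target_size, out):
    # Copy a small base PDF, then append filler bytes until the copy
    # reaches target_size.
    shutil.copyfile(base, out)
    padding = target_size - os.path.getsize(out)
    if padding > 0:
        with open(out, 'ab') as fp:
            fp.write(b'\0' * padding)

# e.g. a 5 MB test file from any existing small PDF
make_padded_pdf('base.pdf', 5 * 1024 * 1024, 'test_5MB.pdf')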
I've found an online tool, however, it seems to not be working correctly since it can only generate 10MB files even though you tell it to make a 50MB file.
https://www.fakefilegenerator.com/generate-file.php

Shell Script to typeset Lilypond files with Textwrangler

I need a shell script which will allow me to typeset Lilypond files from TextWrangler (A Mac App).
So far I have come up with this:
#!/bin/sh
/Applications/LilyPond.app/Contents/Resources/bin/lilypond -o $1
which, of course, doesn't work. (That's why I'm at Stack Overflow.)
When I run that script from the shebang menu in TextWrangler, I get this output:
/Applications/LilyPond.app/Contents/Resources/bin/lilypond: option `--output' requires an argument
What gives?
I'm running Snow Leopard, TextWrangler, and Lilypond.
Help appreciated.
EDIT: Found a way to get the document path in a Unix Script launched by TextWrangler, so I've rewritten this.
There are multiple ways to work with scripts in TextWrangler through the #! menu, and I'm not sure which one you're trying to use. It looks, though, like you're trying to create a Unix Script to convert your LilyPond document.
As your error hints, Unix Scripts unfortunately aren't given any arguments at all, so $1 will be empty. However, it turns out that recent versions of BBEdit/TextWrangler do set some environment variables before running your script (see BBEdit 9.3 Release Notes and scroll down to Changes). In particular, you can use the following environment variable:
BB_DOC_PATH path of the document (not set if doc is unsaved)
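A minimal sketch of such a Unix Script (written here in Python; a plain shell script should work just as well, since the #! line determines the interpreter, and the LilyPond path below is the one from the question):

#!/usr/bin/env python
# Sketch of a TextWrangler Unix Script that runs LilyPond on the
# frontmost saved document. BB_DOC_PATH is set by TextWrangler before
# the script runs.
import os
import subprocess
import sys

LILYPOND = '/Applications/LilyPond.app/Contents/Resources/bin/lilypond'

doc = os.environ.get('BB_DOC_PATH')
if not doc:
    sys.exit('No saved document (BB_DOC_PATH is not set).')

# Write LilyPond's output next to the source file.
os.chdir(os.path.dirname(doc))
sys.exit(subprocess.call([LILYPOND, doc]))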
So, save this script to ~/Library/Application Support/TextWrangler/Unix Support/Unix Scripts and you should be good to go.
Other ways you might be trying to do this that don't work well:
Using a Unix Filter: to do this you would have to select all of your LilyPond code in the document, and it would be saved into a temporary file, which is passed as an argument to your script. OK, so that gets you an input filename, at the cost of some hassle. But then the output of that script (i.e. the LilyPond compiler output) by default replaces whatever you just selected, which is probably not what you want. Wrong tool for the job.
Using #! → Run on a LilyPond file: This involves putting a #! line at the top of your file and having TextWrangler attempt to execute your file as a script, using the #! as a guide to selecting the script interpreter. Unfortunately, the #! line only works with certain scripting languages, and LilyPond (not quite a scripting language) isn't one of them. This is what Peter Hilton is trying to do, and as he notes, you will get LilyPond syntax errors if you try to add a #! line to the top of a LilyPond file. (If you're curious, there is technically a way to get #! → Run to work, which is to embed your LilyPond code inside an executable shell or perl script, using here-document syntax. But this is a gross hack that will quickly become unwieldy.)
There are a few limitations to the script linked above:
It doesn't check to see whether you saved your document before running LilyPond. It would be nice to have TextWrangler automatically save before running LilyPond.
It can't take snippets of text or unsaved documents as input, only saved documents.
You can make more sophisticated solutions that would address these by turning to AppleScript. Two ways of doing this:
Create a script that's specific to TextWrangler and drop it in ~/Library/Application Support/TextWrangler/Scripts. It then shows up in the AppleScript menu (the weird scrolly S), or you can get at it by bringing up Window → Palettes → Scripts. I believe two folks out there have gone down this path and shared their results:
Henk van Voorthuijsen (Lilypond.applescript extracted from MacOS 10.5 Applescript for TextWrangler thread on lilypond-devel, 21-Jul-2008)
Dr Nicola Vitacolonna (LilyPond in TextWrangler – uses TeXShop).
Create a Mac OS Service, which would potentially be a method that would be reusable across just about any text editor. This was how we used to compile Common Music files way back in the NeXT days, so I can testify to its elegance. I don't have a good up-to-date example of this, unfortunately.
Good question. It actually runs Lilypond on my system if you do this:
#!/Applications/LilyPond.app/Contents/Resources/bin/lilypond -o $1
… but fails because # is not a line-comment character so Lilypond tries to parse the line.
Surrounding it with a block comment fails because TextWrangler cannot find the ‘shebang’ line.
%{
#!/Applications/LilyPond.app/Contents/Resources/bin/lilypond -o $1
%}
An alternative is to use Smultron 3, which lets you define commands that you can run with a keyboard shortcut.

Combining files in Notepad++

Is there a Notepad++ plugin out there that automatically combines all currently opened files into a single file?
Update: Yes, I am very aware of copy and paste :) I'm working with lots of files, and I want a solution that makes this step in the process a little bit faster than several dozen copy and pastes.
I'm aware of utilities for combining files, but I want the convenience of combining specifically the files that are currently opened in my text editor.
If there isn't a plugin out there already, I'll write one myself; I was just wondering if it exists already to save me the time of developing one.
I used the following command in DOS prompt to do the merge for me:
for %f in (*.txt) do type "%f" >> output.txt
It is fast and it works. Just ensure that all the files to be merged are in the same directory from which you execute this command.
http://www.scout-soft.com/combine/
Not my app, but this plug-in lets you combine all open tabs into one file.
I installed the Python Script plugin and wrote a simple script:
console.show()
console.clear()
files = notepad.getFiles()
notepad.new()
newfile = notepad.getCurrentFilename()
for i in range(len(files) - 1):
    console.write("Copying text from %s\n" % files[i][0])
    notepad.activateFile(files[i][0])
    text = editor.getText()
    notepad.activateFile(newfile)
    editor.appendText(text)
    editor.appendText("\n")
console.write("Combine Files operation complete.")
It looks at all the files currently opened in Notepad++ and adds them to a new file. Does exactly what I need.
