Getting JPEG Redundant Data - c

I am working on a project related to image compression and I need a way to save the data lost in JPEG compression (like bits per pixel...). I guess I would need to build a custom libjpeg for that. I would appreciate any suggestions/help on the subject (maybe even guidance on which part of the source code to modify).
Thanks in advance!
Edit: To clarify, I am not looking to embed hidden information. I am looking for a method to recover the data lost during JPEG compression. I am also OK with getting the data lost by re-compressing a JPEG image (e.g., from quality 90 to 80).

If you need to embed private data into a JPEG bitstream, you might want to take advantage of APPn markers. There are a few great things about them:
the image will still be readable by, and compatible with, existing software
the format is simple enough that you can leave libjpeg or whatever JPEG library you prefer intact, and add/read the data by modifying the bitstream directly
The JPEG File Interchange Format already uses APP0 and APP1 (you can read the details in the spec), and there are still more markers available, such as APP2, which you can use for your purposes.
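For illustration, here is a minimal sketch in C# of how you might splice an APP2 segment into an existing file right after the SOI marker. The class/method names and error handling are my own, and a stricter implementation would insert the segment after the existing APP0/JFIF segment instead:

using System.IO;

static class JpegAppMarkers
{
    // Sketch only: append a custom APP2 segment right after the SOI marker.
    // Assumes the payload fits in a single segment (at most 65533 bytes).
    public static void InsertApp2(string inputPath, string outputPath, byte[] payload)
    {
        byte[] jpeg = File.ReadAllBytes(inputPath);
        if (jpeg.Length < 2 || jpeg[0] != 0xFF || jpeg[1] != 0xD8)
            throw new InvalidDataException("Not a JPEG file (missing SOI marker).");

        int segmentLength = payload.Length + 2;               // the length field counts itself
        using (var output = new FileStream(outputPath, FileMode.Create))
        {
            output.Write(jpeg, 0, 2);                         // SOI (FF D8)
            output.WriteByte(0xFF);                           // APP2 marker (FF E2)
            output.WriteByte(0xE2);
            output.WriteByte((byte)(segmentLength >> 8));     // 2-byte big-endian length
            output.WriteByte((byte)(segmentLength & 0xFF));
            output.Write(payload, 0, payload.Length);         // your private data
            output.Write(jpeg, 2, jpeg.Length - 2);           // rest of the original file unchanged
        }
    }
}

Readers that care about your data can scan for the FF E2 marker and read the payload back out; everything else will simply skip the unknown segment.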

There are at least four steps where you can lose information in JPEG compression. I don't really know what you're getting at. If you want to measure the lost information, you can just compress, decompress, and compare with the original (see the sketch after the list below).
I guess you want to encode RGB to standard JFIF. You lose information in the color conversion and in the subsampling; after that you have to do the FDCT, which I don't think is exactly reversible, so you lose information in that step as well; and then you have the quantization step. Unless your quantization tables contain all ones, you will lose information there too.
To sum it up:
Color conversion
Subsampling
FDCT/IDCT
Quantization
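If all you need is a measure of what was lost, a rough sketch of the compress/decompress comparison might look like this (how you get the two images back into raw buffers of the same layout is up to you and your JPEG library):

using System;

static class ImageComparison
{
    // Sketch only: given the original and the round-tripped image as raw byte
    // arrays of identical layout (e.g. 24-bit RGB, same width and height),
    // compute the mean squared error and PSNR to quantify the loss.
    public static double Psnr(byte[] original, byte[] roundTripped)
    {
        if (original.Length != roundTripped.Length)
            throw new ArgumentException("Buffers must have the same size and layout.");

        double sumSquaredError = 0;
        for (int i = 0; i < original.Length; i++)
        {
            double diff = original[i] - roundTripped[i];
            sumSquaredError += diff * diff;
        }
        double mse = sumSquaredError / original.Length;
        return mse == 0 ? double.PositiveInfinity : 10.0 * Math.Log10(255.0 * 255.0 / mse);
    }
}

If you want the lost data itself rather than a single number, the per-pixel differences (original minus round-tripped) are exactly that residual.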

Related

Storing custom information in an audio/mp3 file

I want to store some custom information (tags) in an mp3 file (or even better, in any audio file, but mp3 would be a start).
What would be a good way to do that?
ID3? If so, in which ID3 section should it go (it shouldn't be overwritten by other programs)? I thought of the "comment" section, but I think it gets overwritten quite frequently.
Is there an easy way to store information to any audio file?
UPDATE:
I decided to store the information in a custom ID3 tag (with my own name) via the MyID3 library :)
It depends: if you want to use this custom information only for your own collection, you can store it in whatever tag you like. If you hope that your custom information can be read by (nearly) all popular music management tools, I suggest that you store it in the tags defined by the ID3v2.3 standard.
If your custom information doesn't fit into any of these tags (say, something like "eye color of the lead singer"), you can make your own private frame. This is described in section 4.28, "Private frame", with the description:
This frame is used to contain information from a software producer that its program uses and does not fit into the other frames.
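If your tagging library lets you write raw frames, here is a hedged sketch of what an ID3v2.3 PRIV frame's bytes look like (frame ID, 32-bit big-endian size, two flag bytes, then an owner identifier terminated by 0x00 followed by your binary data). Placing the frame into the tag and rewriting the file is left to whatever library you use; the class name below is just illustrative:

using System.IO;
using System.Text;

static class Id3PrivFrame
{
    // Sketch only: serialize an ID3v2.3 "PRIV" frame. The owner identifier is
    // conventionally a URL or email address that identifies your software.
    public static byte[] Build(string ownerId, byte[] privateData)
    {
        byte[] owner = Encoding.ASCII.GetBytes(ownerId);
        int bodyLength = owner.Length + 1 + privateData.Length;   // +1 for the 0x00 terminator

        using (var ms = new MemoryStream())
        {
            ms.Write(Encoding.ASCII.GetBytes("PRIV"), 0, 4);      // frame ID
            ms.WriteByte((byte)(bodyLength >> 24));               // frame size, big-endian (not syncsafe in v2.3)
            ms.WriteByte((byte)(bodyLength >> 16));
            ms.WriteByte((byte)(bodyLength >> 8));
            ms.WriteByte((byte)bodyLength);
            ms.WriteByte(0x00);                                   // frame flags
            ms.WriteByte(0x00);
            ms.Write(owner, 0, owner.Length);                     // owner identifier
            ms.WriteByte(0x00);                                    // terminator
            ms.Write(privateData, 0, privateData.Length);          // your private payload
            return ms.ToArray();
        }
    }
}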

Using video/image files in C

I'm trying to find a way to annotate a video in C with polygons and bounding boxes; however, I'm stuck at a very elementary step.
Assuming I know how to break an .MPEG movie up into multiple JPEG images, how do I manipulate those files in C? The things I'll eventually need to draw are text, points, and lines, but I am having a hard time figuring out how to get started.
If I declare:
FILE* img = fopen("foo.jpeg", "r");
then what could I do with img? Is there a way to access certain pixels in the drawing?
All your code sample does is open a file. You haven't even read any data from it yet.
The simplest way to load an image file is to use a dedicated library, such as SOIL.
If you weren't able to figure that out by yourself, however, I really don't think you will be able to accomplish your project goals: what you want to create is quite advanced, and, as you've already noticed, you got stuck at the most basic step.

How to compare two .mp4 files?

I would like to compare two mp4 files; does somebody have an idea?
Maybe by comparing the video spectra?
Thanks.
I had an idea for this a while back. I never implemented it, but it went something like this:
Get a good video library to do the heavy lifting for you, I like Aforge.NET
Use the library to walk through the video and extract bitmap frames, get a few hundred
Fix the resolution to a single aspect ratio
Reduce the images to something low-res like 16x16 or 64x64, using a nearest-neighbor approach. This will blur the images such that two similar frames reduce to the same small image
Gather a chunk of these images by relative video timestamp and hash them to further reduce the data
Compare said hashes
Again, I never implemented this, so I don't know if it works, but the thing it has going for it is that video is very complex. While comparing any given frame to another directly won't work, because of different formats, resolutions, etc., the odds of a series of reduced hashes matching across two different videos seem very low. Thus, few false positives. It also seems like it could tell you whether one span of video is contained in another. (A sketch of the reduce-and-hash steps follows below.)
If I get around to making something like this I'll circle back here and post about it.
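For what it's worth, a rough sketch of the reduce-and-hash steps (4 through 6 above), assuming your video library has already handed you a frame as a raw 24-bit RGB buffer:

using System.Security.Cryptography;

static class FrameHasher
{
    // Sketch only: nearest-neighbor downscale of a raw 24-bit RGB frame to
    // 16x16, then hash the reduced pixels. Two videos are compared by lining
    // up these hashes by relative timestamp and counting matches.
    public static byte[] ReducedFrameHash(byte[] rgbFrame, int width, int height)
    {
        const int target = 16;
        byte[] reduced = new byte[target * target * 3];
        for (int y = 0; y < target; y++)
        {
            int srcY = y * height / target;                       // nearest-neighbor row
            for (int x = 0; x < target; x++)
            {
                int srcX = x * width / target;                    // nearest-neighbor column
                int src = (srcY * width + srcX) * 3;
                int dst = (y * target + x) * 3;
                reduced[dst] = rgbFrame[src];
                reduced[dst + 1] = rgbFrame[src + 1];
                reduced[dst + 2] = rgbFrame[src + 2];
            }
        }
        using (var md5 = MD5.Create())
            return md5.ComputeHash(reduced);                      // compare these across videos
    }
}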

Silverlight for WP7: Trim an existing media file

WP7 Mango is making it possible to save custom ringtones from apps. That's great and all, but not if your source material is too long (ringtones must be under 40 seconds or so).
I'm hoping it is possible to take an existing audio file (wma, let's say) and trim it by setting a start/end point, so you can export just a part of the audio for ringtone use.
I gather from other SO questions that audio encoding directly in Silverlight is not really feasible. But I don't really want full encoding capabilities, just the ability to trim an existing, already-encoded file. Any pointers?
I was thinking about doing this as well (until I discovered that we have no access to the music already on the phone).
An mp3 should be pretty easy to do by checking the header (see here: http://www.mpgedit.org/mpgedit/mpeg_format/mpeghdr.htm) and then using the bit rate and frame size to calculate the number of bytes to copy using BinaryReader and BinaryWriter.
I haven't looked into wma but after glancing over the specifications it looks like it may be more complicated (specs: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=14995).
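For the mp3 case, a very rough sketch of that idea, assuming a constant-bit-rate file with no ID3v2 tag at the front (a real implementation should parse each frame header and cut on frame boundaries; the class and method names here are just illustrative):

using System;
using System.IO;

static class Mp3Trimmer
{
    // Sketch only: use the bit rate to turn start/length times into byte
    // offsets and copy that span with BinaryReader/BinaryWriter.
    public static void Trim(Stream source, Stream destination,
                            int bitrateKbps, double startSeconds, double lengthSeconds)
    {
        long bytesPerSecond = bitrateKbps * 1000L / 8;
        long bytesToCopy = (long)(lengthSeconds * bytesPerSecond);

        source.Seek((long)(startSeconds * bytesPerSecond), SeekOrigin.Begin);
        using (var reader = new BinaryReader(source))
        using (var writer = new BinaryWriter(destination))
        {
            byte[] buffer = new byte[4096];
            while (bytesToCopy > 0)
            {
                int read = reader.Read(buffer, 0, (int)Math.Min(buffer.Length, bytesToCopy));
                if (read == 0) break;                             // reached end of file
                writer.Write(buffer, 0, read);
                bytesToCopy -= read;
            }
        }
    }
}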

Streaming a webcam from Silverlight 4 (Beta)

The new webcam stuff in Silverlight 4 is darned cool. By exposing it as a brush, it allows scenarios that are way beyond anything that Flash has.
At the same time, accessing the webcam locally seems like it's only half the story. Nobody buys a webcam so they can take pictures of themselves and make funny faces out of them. They buy a webcam because they want other people to see the resulting video stream, i.e., they want to stream that video out to the Internet, a la Skype or any of the dozens of other video chat sites/applications. And so far, I haven't figured out how to do that with Silverlight 4.
It turns out that it's pretty simple to get a hold of the raw (Format32bppArgb formatted) bytestream, as demonstrated here.
But unless we want to transmit that raw bytestream to a server (which would chew up way too much bandwidth), we need to encode it in some fashion. And that's more complicated. MS has implemented several codecs in Silverlight, but so far as I can tell, they're all focused on decoding a video stream, not encoding it. And that's apart from the fact that I can't figure out how to get direct access to, say, the H.264 codec in the first place.
There are a ton of open-source codecs (for instance, in the ffmpeg project here), but they're all written in C, and they don't look easy to port to C#. Unless translating 10000+ lines of code that look like this is your idea of fun :-)
const int b_xy= h->mb2b_xy[left_xy[i]] + 3;
const int b8_xy= h->mb2b8_xy[left_xy[i]] + 1;
*(uint32_t*)h->mv_cache[list][cache_idx ]= *(uint32_t*)s->current_picture.motion_val[list][b_xy + h->b_stride*left_block[0+i*2]];
*(uint32_t*)h->mv_cache[list][cache_idx+8]= *(uint32_t*)s->current_picture.motion_val[list][b_xy + h->b_stride*left_block[1+i*2]];
h->ref_cache[list][cache_idx ]= s->current_picture.ref_index[list][b8_xy + h->b8_stride*(left_block[0+i*2]>>1)];
h->ref_cache[list][cache_idx+8]= s->current_picture.ref_index[list][b8_xy + h->b8_stride*(left_block[1+i*2]>>1)];
The mooncodecs folder within the Mono project (here) has several audio codecs in C# (ADPCM and Ogg Vorbis), and one video codec (Dirac), but they all seem to implement just the decode portion of their respective formats, as do the java implementations from which they were ported.
I found a C# codec for Ogg Theora (csTheora, http://www.wreckedgames.com/forum/index.php?topic=1053.0), but again, it's decode only, as is the jheora codec on which it's based.
Of course, it would presumably be easier to port a codec from Java than from C or C++, but the only java video codecs that I found were decode-only (such as jheora, or jirac).
So I'm kinda back at square one. It looks like our options for hooking up a webcam (or microphone) through Silverlight to the Internet are:
(1) Wait for Microsoft to provide some guidance on this;
(2) Spend the brain cycles porting one of the C or C++ codecs over to Silverlight-compatible C#;
(3) Send the raw, uncompressed bytestream up to a server (or perhaps compressed slightly with something like zlib), and then encode it server-side; or
(4) Wait for someone smarter than me to figure this out and provide a solution.
Does anybody else have any better guidance? Have I missed something that's just blindingly obvious to everyone else? (For instance, does Silverlight 4 somewhere have some classes I've missed that take care of this?)
I just received this response from Jason Clary on my blog:
Saw your post on Mike Taulty's blog about VideoSink/AudioSink in Silverlight 4 beta.
I thought I'd point out that VideoSink's OnSample gives you a single uncompressed 32bpp ARGB frame which can be copied straight into a WriteableBitmap.
With that in hand, grab FJCore, a JPEG codec in C#, and modify it to not output the JFIF header. Then just write the frames out one after the other and you've got yourself a Motion JPEG codec. RFC 2435 explains how to stuff that into RTP packets for RTSP streaming.
Compressing PCM audio to ADPCM is fairly easy, as well, but I haven't found a ready-made implementation as yet. RFC3551 explains how to put either PCM or ADPCM into RTP packets.
It should also be reasonably easy to stuff MJPEG and PCM or ADPCM into an AVI file. MS has some decent docs on AVI's modified RIFF format and both MJPEG and ADPCM are widely supported codecs.
It's a start anyway.
Of course, once you've gone through all that trouble, the next Beta will probably come out with native support for compressing and streaming to WMS with the much better WMV codecs.
Thought I'd post it. It's the best suggestion I've seen so far.
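For reference, here is a minimal sketch of the VideoSink half of that suggestion. The member signatures below are the Silverlight 4 beta API as I understand it (double-check against the current SDK); what you do with each raw frame (hand it to FJCore, packetize it, and so on) is up to you.

using System;
using System.Windows.Media;

public class RawFrameSink : VideoSink
{
    private int width;
    private int height;

    // Raised once per captured frame with the raw frame bytes and dimensions.
    public event Action<byte[], int, int> FrameCaptured;

    protected override void OnCaptureStarted() { }
    protected override void OnCaptureStopped() { }

    protected override void OnFormatChange(VideoFormat videoFormat)
    {
        width = videoFormat.PixelWidth;
        height = videoFormat.PixelHeight;
    }

    protected override void OnSample(long sampleTimeInHundredNanoseconds,
                                     long frameDurationInHundredNanoseconds,
                                     byte[] sampleData)
    {
        // sampleData is one uncompressed frame in the format reported by
        // OnFormatChange; hand it off to your encoder here.
        var handler = FrameCaptured;
        if (handler != null)
            handler(sampleData, width, height);
    }
}

You would then assign your CaptureSource to the sink's CaptureSource property, start the capture source, and have your encoder subscribe to FrameCaptured.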
I thought I'd let interested folks know the approach I actually took. I'm using CSpeex to encode the voice, but I wrote my own block-based video codec to encode the video. It divides each frame up into 16x16 blocks, determines which blocks have sufficiently changed to warrant transmitting, and then Jpeg-encodes the changed blocks using a heavily modified version of FJCore. (FJCore is generally well done, but it needed to be modified to not write the JFIF headers, and to speed up initialization of the various objects.) All of this is being passed up to a proprietary media server using a proprietary protocol roughly based on RTP.
With one stream up and four streams down at 144x176, I'm currently getting 5 frames per second, using a total of 474 Kbps (~82 Kbps / video stream + 32 Kbps / audio), and chewing up about 30% CPU on my dev box. The quality's not great, but it's acceptable for most video chat applications.
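To give a flavor of the block-selection step described above (this is not the actual Alanta code; the sum-of-absolute-differences metric and the threshold are made up for illustration):

using System;

static class BlockDiff
{
    // Sketch only: decide whether a 16x16 block of a 32bpp ARGB frame has
    // changed enough since the previous frame to be worth re-encoding.
    public static bool BlockHasChanged(byte[] current, byte[] previous,
                                       int frameWidth, int blockX, int blockY)
    {
        const int blockSize = 16;
        const int threshold = 10;                                 // average per-byte difference

        long sumOfAbsoluteDifferences = 0;
        for (int y = 0; y < blockSize; y++)
        {
            int rowStart = ((blockY * blockSize + y) * frameWidth + blockX * blockSize) * 4;
            for (int x = 0; x < blockSize * 4; x++)               // 4 bytes per pixel (ARGB)
            {
                int i = rowStart + x;
                sumOfAbsoluteDifferences += Math.Abs(current[i] - previous[i]);
            }
        }
        // Re-encode and transmit the block only if it changed enough;
        // otherwise the receiver keeps the previously transmitted block.
        return sumOfAbsoluteDifferences / (blockSize * blockSize * 4) > threshold;
    }
}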
Since I posted my original question, there have been several attempts to implement a solution. Probably the best is at the SocketCoder website here (and here).
However, because the SocketCoder motion-JPEG-style video codec encodes and transmits the entirety of every frame rather than just the blocks that have changed, my assumption is that its CPU and bandwidth requirements are going to be prohibitive for most applications.
Unfortunately, my own solution is going to have to remain proprietary for the foreseeable future :-(.
Edit 7/3/10: I just got permission to share my modifications to the FJCore library. I've posted the project (without any sample code, unfortunately) here:
http://www.alanta.com/Alanta.Client.Media.Jpeg.zip
A (very rough) example of how to use it:
public void EncodeAsJpeg()
{
    byte[][,] raster = GetSubsampledRaster();
    var image = new Alanta.Client.Media.Jpeg.Image(colorModel, raster);
    EncodedStream = new MemoryStream();
    var encoder = new JpegFrameEncoder(image, MediaConstants.JpegQuality, EncodedStream);
    encoder.Encode();
}

public void DecodeFromJpeg()
{
    EncodedStream.Seek(0, SeekOrigin.Begin);
    var decoder = new JpegFrameDecoder(EncodedStream, height, width, MediaConstants.JpegQuality);
    var raster = decoder.Decode();
}
Most of my changes are around the two new classes JpegFrameEncoder (instead of JpegEncoder) and JpegFrameDecoder (instead of JpegDecoder). Basically, the JpegFrameEncoder writes the encoded frame without any JFIF headers, and the JpegFrameDecoder decodes the frame without expecting any JFIF headers to tell it what values to use (it assumes you'll share the values in some other, out-of-band manner). It also instantiates whatever objects it needs just once (as "static"), so that you can instantiate the JpegFrameEncoder and JpegFrameDecoder quickly, with minimal overhead. The pre-existing JpegEncoder and JpegDecoder classes should work pretty much the same as they always have, though I've only done a very little bit of testing to confirm that.
There are lots of things I'd like to improve about it (I don't like the static objects -- they should be instantiated and passed in separately), but it works well enough for our purposes at the moment. Hopefully it's helpful for someone else. I'll see if I can improve the code/documentation/sample code/etc. if I have time.
I'll add one other comment. I just heard today from a Microsoft contact that Microsoft is not planning to add any support for upstream audio and video encoding/streaming to Silverlight, so option #1 appears to be off the table, at least for right now. My guess is that figuring out support for this will be the community's responsibility, i.e., up to you and me.
Stop-Gap?
Would it be possible to use the Windows Media Encoder as a compression method for the raw video Silverlight provides? After capture to ISO Storage, encode w/ WME and send to the server via the WebClient. Two big issues are:
Requires a user to install the encoder
WME will no longer be supported
It seems like that might be a stop-gap solution until something better comes along. I haven't worked w/ WME before though so I don't know how feasible this would be. Thoughts?
Have you tried the new Expression 4 Encoders?
http://www.microsoft.com/expression/products/EncoderPro_Overview.aspx
