Getting width and height from jpeg image file - c

I wrote this function which, given a filename (a JPEG file), should print its size in pixels, width and height. According to the tutorial that I'm reading:
//0xFFC0 is the "Start of frame" marker which contains the file size
//The structure of the 0xFFC0 block is quite simple [0xFFC0][ushort length][uchar precision][ushort x][ushort y]
So, I wrote this struct
#pragma pack(1)
struct imagesize {
    unsigned short len; /* 2 bytes */
    unsigned char c;    /* 1 byte  */
    unsigned short x;   /* 2 bytes */
    unsigned short y;   /* 2 bytes */
}; /* sizeof(struct imagesize) == 7 */
#pragma pack()
and then:
#define SOF 0xC0 /* start of frame */

void jpeg_test(const char *filename)
{
    FILE *fh;
    unsigned char buf[4];
    unsigned char b;

    fh = fopen(filename, "rb");
    if (fh == NULL)
        fprintf(stderr, "cannot open '%s' file\n", filename);

    while (!feof(fh)) {
        b = fgetc(fh);
        if (b == SOF) {
            struct imagesize img;
#if 1
            ungetc(b, fh);
            fread(&img, 1, sizeof(struct imagesize), fh);
#else
            fread(buf, 1, sizeof(buf), fh);
            int w = (buf[0] << 8) + buf[1];
            int h = (buf[2] << 8) + buf[3];
            img.x = w;
            img.y = h;
#endif
            printf("%dx%d\n", img.x, img.y);
            break;
        }
    }
    fclose(fh);
}
But I'm getting 520x537 instead of 700x537, which is the real size.
Can someone point out and explain where I'm wrong?

A JPEG file consists of a number of sections. Each section starts with 0xff, followed by a 1-byte section identifier, followed by the number of data bytes in the section (as 2 bytes), followed by the data bytes. The sequence 0xffc0, or any other 0xff-- two-byte sequence, inside the data byte sequence has no significance and does not mark the start of a section.
As an exception, the very first section does not contain any data or length.
You have to read each section header in turn, parse the length, then skip corresponding number of bytes before starting to read next section. You cannot just search for 0xffc0, let alone just 0xc0, without regard to the section structure.
Source.
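To make that concrete, here is a minimal sketch of walking the marker structure instead of scanning for a lone 0xC0 byte (untested, no error handling, handles only baseline/progressive SOF markers):

#include <stdio.h>

/* Sketch: walk the JPEG segment structure and print the dimensions from
 * the first SOF0/SOF2 marker, skipping every other segment by its
 * declared length. */
void jpeg_size(const char *filename)
{
    FILE *fh = fopen(filename, "rb");
    if (fh == NULL)
        return;

    if (fgetc(fh) != 0xFF || fgetc(fh) != 0xD8) {   /* SOI: the very first section, no length */
        fclose(fh);
        return;
    }

    for (;;) {
        int marker = fgetc(fh);
        if (marker != 0xFF)
            break;                                   /* lost sync or end of file */
        while (marker == 0xFF)
            marker = fgetc(fh);                      /* skip fill bytes */

        int len = (fgetc(fh) << 8) | fgetc(fh);      /* big-endian length, includes these 2 bytes */

        if (marker == 0xC0 || marker == 0xC2) {      /* SOF0 (baseline) or SOF2 (progressive) */
            fgetc(fh);                               /* precision */
            int h = (fgetc(fh) << 8) | fgetc(fh);
            int w = (fgetc(fh) << 8) | fgetc(fh);
            printf("%dx%d\n", w, h);
            break;
        }
        fseek(fh, len - 2, SEEK_CUR);                /* skip the rest of this segment's data */
    }
    fclose(fh);
}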

There are several issues to consider, depending on how "universal" you want your program to be. First, I recommend using libjpeg. A good JPEG parser can be a bit gory, and this library does a lot of the heavy lifting for you.
Next, to clarify n.m.'s statement, you have no guarantee that the first 0xFFC0 pair is the SOF of interest. I've found that modern digital cameras like to load up the JPEG header with a number of APP0 and APP1 blocks, which can mean that the first SOF marker you encounter during a sequential read may actually be the image thumbnail. This thumbnail is usually stored in JPEG format (as far as I have observed, anyway) and is thus equipped with its own SOF marker. Some cameras and/or image editing software can include an image preview that is larger than a thumbnail (but smaller than the actual image). This preview image is usually JPEG and again has its own SOF marker. It's not unusual for the image's SOF marker to be the last one.
Most (all?) modern digital cameras also encode the image attributes in the EXIF tags. Depending upon your application requirements, this might be the most straightforward, unambiguous way to obtain the image size. The EXIF standard document will tell you all you need to know about writing an EXIF parser. (libExif is available, but it never fit my applications.) Regardless of whether you roll your own EXIF parser or rely on a library, there are some good tools for inspecting EXIF data. jhead is a very good tool, and I've also had good luck with ExifTool.
Lastly, pay attention to endianness. SOF and other standard JPEG markers are big-endian, but EXIF markers may vary.

As you mention, the spec states that the marker is 0xFFC0, but it seems that you only ever look for a single byte, with the code if (b == SOF).
If you open the file up with a hex editor and search for 0xFFC0, you'll find the marker. Now, as long as the first 0xC0 in the file is the marker, your code will work. If it's not, though, you get all sorts of undefined behaviour.
I'd be inclined to read the whole file first. It's a JPEG, right, how big could it be? (Though this matters if you're on an embedded system.) Then I'd just step through it looking for the first char of my marker. When found, I'd use a memcmp to see if the next 3 bytes matched the rest of the sig.
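A rough sketch of that idea, assuming the file fits comfortably in memory and with error handling omitted (note this still has the false-positive risk described in the other answer, since an 0xFF 0xC0 pair can occur inside other segments):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: read the whole file, then scan for the 0xFF 0xC0 signature. */
void find_sof(const char *filename)
{
    FILE *fh = fopen(filename, "rb");
    if (fh == NULL)
        return;
    fseek(fh, 0, SEEK_END);
    long size = ftell(fh);
    fseek(fh, 0, SEEK_SET);

    unsigned char *data = malloc(size);
    fread(data, 1, size, fh);
    fclose(fh);

    static const unsigned char sig[2] = { 0xFF, 0xC0 };
    for (long i = 0; i + 9 <= size; i++) {
        if (data[i] == sig[0] && memcmp(&data[i], sig, sizeof sig) == 0) {
            /* layout after the marker: length(2), precision(1), height(2), width(2) */
            int h = (data[i + 5] << 8) | data[i + 6];
            int w = (data[i + 7] << 8) | data[i + 8];
            printf("%dx%d\n", w, h);
            break;
        }
    }
    free(data);
}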

Related

Decoding TIFF LZW codes not yet in the dictionary

I made a decoder for LZW-compressed TIFF images, and all the parts work; it can decode large images at various bit depths, with or without horizontal prediction, except in one case. While it decodes files written by most programs (like Photoshop and Krita, with various encoding options) fine, there's something very strange about the files created by ImageMagick's convert: it produces LZW codes that aren't yet in the dictionary, and I don't know how to handle them.
Most of the time the 9 to 12-bit code in the LZW stream that isn't yet in the dictionary is the next one that my decoding algorithm will try to put in the dictionary (which I'm not sure should be a problem although my algorithm fails on an image that contains such cases), but at times it can even be hundreds of codes into the future. In one case the first code after the clear code (256) is 364, which seems quite impossible given that the clear code clears my dictionary of all codes 258 and above, in another case the code is 501 when my dictionary only goes up to 317!
I have no idea how to deal with it, but it seems that I'm the only one with this problem, the decoders in other programs load such images fine. So how do they do it?
Here's the core of my decoding algorithm. Obviously, given how much code is involved, I can't provide complete compilable code in a compact manner, but since this is a matter of algorithmic logic this should be enough. It closely follows the algorithm described in the official TIFF specification (page 61); in fact, most of the spec's pseudo code is in the comments.
void tiff_lzw_decode(uint8_t *coded, buffer_t *dec)
{
    buffer_t word = {0}, outstring = {0};
    size_t coded_pos;                   // position in bits
    int i, new_index, code, maxcode, bpc;
    buffer_t *dict = {0};
    size_t dict_as = 0;

    bpc = 9;                            // starts with 9 bits per code, increases later
    tiff_lzw_calc_maxcode(bpc, &maxcode);
    new_index = 258;                    // index at which new dict entries begin
    coded_pos = 0;                      // bit position
    lzw_dict_init(&dict, &dict_as);

    while ((code = get_bits_in_stream(coded, coded_pos, bpc)) != 257)  // while ((Code = GetNextCode()) != EoiCode)
    {
        coded_pos += bpc;

        if (code >= new_index)
            printf("Out of range code %d (new_index %d)\n", code, new_index);

        if (code == 256)                // if (Code == ClearCode)
        {
            lzw_dict_init(&dict, &dict_as);                  // InitializeTable();
            bpc = 9;
            tiff_lzw_calc_maxcode(bpc, &maxcode);
            new_index = 258;

            code = get_bits_in_stream(coded, coded_pos, bpc); // Code = GetNextCode();
            coded_pos += bpc;

            if (code == 257)            // if (Code == EoiCode)
                break;

            append_buf(dec, &dict[code]);                    // WriteString(StringFromCode(Code));

            clear_buf(&word);
            append_buf(&word, &dict[code]);                  // OldCode = Code;
        }
        else if (code < 4096)
        {
            if (dict[code].len)         // if (IsInTable(Code))
            {
                append_buf(dec, &dict[code]);                // WriteString(StringFromCode(Code));

                lzw_add_to_dict(&dict, &dict_as, new_index, 0, word.buf, word.len, &bpc);
                lzw_add_to_dict(&dict, &dict_as, new_index, 1, dict[code].buf, 1, &bpc);  // AddStringToTable
                new_index++;
                tiff_lzw_calc_bpc(new_index, &bpc, &maxcode);

                clear_buf(&word);
                append_buf(&word, &dict[code]);              // OldCode = Code;
            }
            else
            {
                clear_buf(&outstring);
                append_buf(&outstring, &word);
                bufwrite(&outstring, word.buf, 1);           // OutString = StringFromCode(OldCode) + FirstChar(StringFromCode(OldCode));

                append_buf(dec, &outstring);                 // WriteString(OutString);

                lzw_add_to_dict(&dict, &dict_as, new_index, 0, outstring.buf, outstring.len, &bpc);  // AddStringToTable
                new_index++;
                tiff_lzw_calc_bpc(new_index, &bpc, &maxcode);

                clear_buf(&word);
                append_buf(&word, &dict[code]);              // OldCode = Code;
            }
        }
    }

    free_buf(&word);
    free_buf(&outstring);

    for (i = 0; i < dict_as; i++)
        free_buf(&dict[i]);
    free(dict);
}
As for the results my code produces in such situations, it's quite clear from the output that only those few codes are badly decoded; everything before and after is properly decoded, but obviously in most cases the rest of the image after one of these mystery future codes is ruined, because the remaining decoded bytes are shifted by a few places. That means that my reading of the 9 to 12-bit code stream is correct, so I really am seeing a 364 code right after a 256 dictionary-clearing code.
Edit: Here's an example file that contains such weird codes. I've also found a small TIFF LZW loading library that suffers from the same problem: it crashes where my loader finds the first weird code in this image (code 3073 when the dictionary only goes up to 2051). The good thing is that since it's a small library you can test it with the following code:
#include "loadtiff.h"
#include "loadtiff.c"
void loadtiff_test(char *path)
{
int width, height, format;
floadtiff(fopen(path, "rb"), &width, &height, &format);
}
And if anyone insists on diving into my code (which should be unnecessary, and it's a big library) here's where to start.
The bogus codes come from trying to decode more than we're supposed to. The problem is that an LZW strip may sometimes not end with an End-of-Information (257) code, so the decoding loop has to stop once a certain number of decoded bytes have been output. That number of bytes per strip is determined by the TIFF tags: ROWSPERSTRIP * IMAGEWIDTH * BITSPERSAMPLE / 8, and, if PLANARCONFIG is 1 (which means interleaved channels as opposed to planar), multiplied by SAMPLESPERPIXEL. So on top of stopping the decoding loop when a code 257 is encountered, the loop must also be stopped after that count of decoded bytes has been reached.
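As a rough illustration of that stopping condition (the helper name and PlanarConfiguration handling are mine; the last strip of the image needs the remaining row count instead of RowsPerStrip):

#include <stddef.h>
#include <stdint.h>

/* Expected number of decoded bytes for one full strip, per the formula above. */
size_t lzw_strip_expected_bytes(uint32_t image_width, uint32_t rows_per_strip,
                                uint32_t bits_per_sample, uint32_t samples_per_pixel,
                                int planar_config)
{
    size_t bytes = (size_t)rows_per_strip * image_width * bits_per_sample / 8;
    if (planar_config == 1)            /* interleaved (chunky) channels */
        bytes *= samples_per_pixel;
    return bytes;
}

The decoding loop then also breaks once the number of bytes written to dec reaches this count (assuming the output buffer tracks its length like the other buffer_t objects), in addition to breaking on the EOI code 257.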

Simple way to write uncompressed JPEG or PNG image?

I have a raw image in memory, organized as an array of 32-bit RGB values. I'd like to write that out as quickly as possible to an image file so that I can free up the memory. Is there a way to do the following to write either an uncompressed JPEG, PNG or TIFF image? Or perhaps I should say, what image formats are compatible with this approach to writing raw pixel data? Note that the top-left pixel is in the first 4 bytes of the pixel data.
void write_image(uint32_t *pixels, int width, int height) {
    FILE *file = fopen("file.jpg", "wb");
    write_header(file, width, height);
    fwrite(pixels, 1, width * height * 4, file); // write raw pixel data
    write_end(file);
    fclose(file);
}
There seem to be two different issues or motivations on your part.
First, there is the desire to write an image in some uncompressed format to (presumably) gain speed. PNG and JPEG are compressed formats, though you can instruct the encoder (at least in some PNG implementations) to use the "no compress" setting.
However: a) there are few scenarios in which that "optimization" would make a critical difference; the normal compressors are quite quick.
b) Even when encoding with some compression_level=0 setting, you are still encoding the image in a particular format (typically with a header, to start with). Which leads us to the second motivation.
Second, it seems that you want to avoid not exactly (only) the compression, but the encoding. That is, you want to write the pixels in your unencoded ("raw") format. In that case, of course, you cannot write a PNG or JPEG image. You can use your own or some standard RAW format, or the quasi-raw BMP format. But you still need to take care of how the pixels are organized in memory (for example, one byte per channel? RGB? BGR? RGBA?) and perhaps some other issues (for example, BMP requires that the bytes per line be a multiple of 4).
Uncompressed JPEG and PNG are non-trivial, and the results would likely have portability issues. Your simplest option might be TGA:
TGA was designed to store rasterized images that could be quickly loaded for display into frame buffers. It has a very simple 18-byte header, then raw pixel data. It's streamable, meaning image data can be written as generated, provided it's rasterized. There is no requirement for length/offset/CRC tables. The biggest limitation is samples must be in rasterized BGR order with an optional fourth channel (usually alpha but can be anything). The width/height fields in the header must be 16-bit little-endian. TGA format isn't as widely supported as GIF/JPEG/PNG, but you should find some viewers capable of rendering it.
For BGR data, the header would probably be (hexadecimal values):
0x00 0x00 0x02 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
(width%256) (width/256) (height%256) (height/256) 0x18 0x20
For BGRA data, the header would probably be:
0x00 0x00 0x02 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
(width%256) (width/256) (height%256) (height/256) 0x20 0x28
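As a rough sketch of that header in code (untested; assumes the pixel buffer is already in BGRA order, top row first, and omits error handling):

#include <stdio.h>
#include <stdint.h>

/* Sketch: write 32-bit BGRA pixels as an uncompressed TGA file. */
void write_tga_bgra(const char *path, const uint32_t *pixels, int width, int height)
{
    unsigned char header[18] = {0};
    header[2]  = 2;                        /* image type 2: uncompressed true-color */
    header[12] = width & 0xFF;             /* width, little-endian                  */
    header[13] = (width >> 8) & 0xFF;
    header[14] = height & 0xFF;            /* height, little-endian                 */
    header[15] = (height >> 8) & 0xFF;
    header[16] = 32;                       /* bits per pixel                        */
    header[17] = 0x28;                     /* 8 alpha bits, origin at top-left      */

    FILE *file = fopen(path, "wb");
    fwrite(header, 1, sizeof header, file);
    fwrite(pixels, 4, (size_t)width * height, file);  /* raw BGRA data, no footer required */
    fclose(file);
}

The 24-bit BGR variant from the first header above would use 0x18 and 0x20 in the last two bytes and write 3 bytes per pixel instead.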
If your data is not in BGR/BGRA order and you can't or don't want to convert it, my second recommendation would be TIFF:
TIFF design is focused on storing rasterized scans of documents and uses a more complex TTLV-style header. It is not as streamable and requires offset tables, but you can get around this with brute-force precalculation and by defining the entire image as a single strip. Once you get the header done, then you can stream the raw pixel data in rasterized RGB or RGBA order as it's generated. TIFF can also support either big-endian or little-endian and up to 16-bits per sample. It is often the output format of choice for scanners, so viewer/editor support is fairly common.
The first four bytes are a magic number (byte order and TIFF file version), followed by a 4-byte offset of the image file directory (IFD). Usually this offset is just 8 bytes into the file to place the IFD immediately next. The IFD is a count value (N) followed by a table of N 12-byte entries in the form of tag, type, count, and value/offset. The rest of the details are too much to describe here but you can read the specification for more info.
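To make that layout concrete, here is a minimal sketch of such a single-strip writer (untested, little-endian only, 8-bit RGB assumed, no resolution tags; the helper names are mine and this is not a validated TIFF writer):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

static void put16(FILE *f, uint16_t v) { fputc(v & 0xFF, f); fputc(v >> 8, f); }
static void put32(FILE *f, uint32_t v) { put16(f, v & 0xFFFF); put16(f, v >> 16); }

/* One 12-byte IFD entry: tag, field type (3 = SHORT, 4 = LONG), count, value/offset. */
static void put_entry(FILE *f, uint16_t tag, uint16_t type, uint32_t count, uint32_t value)
{
    put16(f, tag); put16(f, type); put32(f, count); put32(f, value);
}

/* Sketch: write 8-bit RGB pixels as a single-strip, little-endian, uncompressed TIFF. */
void write_tiff_rgb(const char *path, const uint8_t *rgb, uint32_t width, uint32_t height)
{
    const uint32_t ifd_offset  = 8;                                   /* IFD follows the 8-byte header */
    const uint32_t n_entries   = 9;
    const uint32_t bps_offset  = ifd_offset + 2 + n_entries * 12 + 4; /* BitsPerSample array location  */
    const uint32_t data_offset = bps_offset + 6;                      /* pixel data right after it     */

    FILE *f = fopen(path, "wb");
    fputc('I', f); fputc('I', f);                   /* little-endian byte order          */
    put16(f, 42);                                   /* TIFF magic number                 */
    put32(f, ifd_offset);

    put16(f, n_entries);
    put_entry(f, 256, 4, 1, width);                 /* ImageWidth                        */
    put_entry(f, 257, 4, 1, height);                /* ImageLength                       */
    put_entry(f, 258, 3, 3, bps_offset);            /* BitsPerSample = 8,8,8 (see array) */
    put_entry(f, 259, 3, 1, 1);                     /* Compression = none                */
    put_entry(f, 262, 3, 1, 2);                     /* PhotometricInterpretation = RGB   */
    put_entry(f, 273, 4, 1, data_offset);           /* StripOffsets (single strip)       */
    put_entry(f, 277, 3, 1, 3);                     /* SamplesPerPixel                   */
    put_entry(f, 278, 4, 1, height);                /* RowsPerStrip = whole image        */
    put_entry(f, 279, 4, 1, width * height * 3);    /* StripByteCounts                   */
    put32(f, 0);                                    /* offset of next IFD: none          */

    put16(f, 8); put16(f, 8); put16(f, 8);          /* the BitsPerSample array itself    */
    fwrite(rgb, 1, (size_t)width * height * 3, f);  /* raw pixel data, top row first     */
    fclose(f);
}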
TGA format is a good option for this. It is just a simple header followed by your data. 24 or 32 bits.
https://en.wikipedia.org/wiki/Truevision_TGA
I'd recommend svpng for debugging purposes.
https://github.com/miloyip/svpng
A minimalistic C function for saving RGB/RGBA image into uncompressed PNG.
Features
RGB or RGBA color format
Single function
32 lines of ANSI C code
No dependency
Customizable output stream (default with C file descriptor)
Example
void test_rgba(void) {
    unsigned char rgba[256 * 256 * 4], *p = rgba;
    unsigned x, y;
    FILE* fp = fopen("rgba.png", "wb");
    for (y = 0; y < 256; y++)
        for (x = 0; x < 256; x++) {
            *p++ = (unsigned char)x;             /* R */
            *p++ = (unsigned char)y;             /* G */
            *p++ = 128;                          /* B */
            *p++ = (unsigned char)((x + y) / 2); /* A */
        }
    svpng(fp, 256, 256, rgba, 1);
    fclose(fp);
}

gnu FORTRAN unformatted file record markers stored as 64-bit width?

I have a legacy code and some unformatted data files that it reads, and it worked with gnu-4.1.2. I don't have access to the method that originally generated these data files. When I compile this code with a newer gnu compiler (gnu-4.7.2) and attempt to load the old data files on a different computer, it is having difficulty reading them. I start by opening the file and reading in the first record which consists of three 32-bit integers:
open(unit, file='data.bin', form='unformatted', status='old')
read(unit) x,y,z
I am expecting these three integers here to describe x,y,z spans so that next it can load a 3D matrix of float values with those same dimensions. However, instead it's loading a 0 for the first value, then the next two are offset.
Expecting:
x=26, y=127, z=97 (1A, 7F, 61 in hex)
Loaded:
x=0, y=26, z=127 (0, 1A, 7F in hex)
When I checked the data file in a hex editor, I think I figured out what was happening.
The first record marker in this case has a value of 12 (0C in hex), since it's reading three integers at 4 bytes each. This marker is stored both before and after the record. However, I notice that the 32 bits immediately after each record marker are 00000000. So either the record markers are treated as 64-bit integers (little-endian) or there is 32-bit zero padding after each record marker. Either way, the code generated with the new compiler is reading the record markers as 32-bit integers and not expecting any padding. This effectively corrupts the data being read in.
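In other words, based on the values above, the start of the file presumably looks like this in a hex editor (little-endian, markers as 64-bit integers):

0C 00 00 00 00 00 00 00   record marker: length 12 as a 64-bit little-endian integer
1A 00 00 00               x = 26
7F 00 00 00               y = 127
61 00 00 00               z = 97
0C 00 00 00 00 00 00 00   trailing record marker, again 64 bits wide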
Is there an easy way to fix this non-portable issue? The old and new hardware are both 64-bit architectures, and so is the executable I compiled. If I use the older compiler version again, will it solve the problem, or is it hardware dependent? I'd prefer to use the newer compilers because they are more efficient, and I really don't want to edit the source code to open all the files with access='stream' and manually read a trailing zero integer after each record marker, both before and after each record.
P.S. I could probably write a C++ program to alter the data files and remove this zero padding if there is no easier alternative.
See the -frecord-marker= option in the gfortran manual. With -frecord-marker=8 you can read the old style unformatted sequential files produced by older versions of gfortran.
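For example (assuming the reading program is in legacy_reader.f90; the flag changes how the compiled program interprets record markers, it does not modify the data files):

gfortran -frecord-marker=8 legacy_reader.f90 -o legacy_reader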
Seeing as Fortran doesn't standardize this, I opted to convert the data files to a new format that uses 32-bit wide record lengths instead of 64-bit wide ones. In case anyone needs to do this in the future, I've included some Visual C++ code here that worked for me and should be easily modifiable to C or another language. I have also uploaded a Windows executable (fortrec.zip) here.
CFile OldFortFile, OutFile;
const int BUFLEN = 1024 * 20;
char pbuf[BUFLEN];
int i, iIn, iRecLen, iRecLen2, iLen, iRead, iError = 0;
CString strInDir = "C:\\folder\\";
CString strIn = "file.dat";
CString strOutDir = "C:\\folder\\fortnew\\";
CString strOut;

system("mkdir \"" + strOutDir + "\""); // create a subdir to hold the output files
strOut = strOutDir + strIn;
strIn = strInDir + strIn;

if (OldFortFile.Open(strIn, CFile::modeRead | CFile::typeBinary)) {
    if (OutFile.Open(strOut, CFile::modeCreate | CFile::modeWrite | CFile::typeBinary)) {
        while (true) {
            iRead = OldFortFile.Read(&iRecLen, sizeof(iRecLen)); // read the record's length marker
            if (iRead < sizeof(iRecLen)) // end of file reached
                break;
            OutFile.Write(&iRecLen, sizeof(iRecLen)); // write the record's length marker
            OldFortFile.Read(&iIn, sizeof(iIn));
            if (iIn != 0) { // this is the padding we need to ignore, ensure it's always zero
                // Padding not found
                iError++;
                break;
            }
            i = iRecLen;
            while (i > 0) { // copy the record's raw data
                iLen = (i > BUFLEN) ? BUFLEN : i;
                OldFortFile.Read(&pbuf[0], iLen);
                OutFile.Write(&pbuf[0], iLen);
                i -= iLen;
            }
            if (i != 0) { // Buffer length mismatch
                iError++;
                break;
            }
            OldFortFile.Read(&iRecLen2, sizeof(iRecLen2));
            if (iRecLen != iRecLen2) { // ensure we have reached the end of the record properly
                // Record length mismatch
                iError++;
                break;
            }
            OutFile.Write(&iRecLen2, sizeof(iRecLen2));
            OldFortFile.Read(&iIn, sizeof(iIn));
            if (iIn != 0) { // this is the padding we need to ignore, ensure it's always zero
                // Padding not found
                iError++;
                break;
            }
        }
        OutFile.Close();
        OldFortFile.Close();
    }
    else { // Could not create the output file.
        OldFortFile.Close();
        return;
    }
}
else { // Could not open the input file
}
if (iError == 0)
    ; // File successfully converted
else
    ; // Encountered error

Loading an 8bpp grayscale BMP in C

I can't make sense of the BMP format, I know its supposed to be simple, but somehow I'm missing something. I thought it was 2 headers followed by the actual bytes defining the image, but the numbers do not add up.
For instance, I'm simply trying to load this BMP file into memory (640x480, 8bpp, grayscale) and just write it back to a different file. From what I understand, there are two different headers, BITMAPFILEHEADER and BITMAPINFOHEADER. The BITMAPFILEHEADER is 14 bytes, and the BITMAPINFOHEADER is 40 bytes (this one depends on the BMP; how I can tell is another story). Anyhow, the BITMAPFILEHEADER, through its member bfOffBits, says that the bitmap bits start at offset 1078. This means that there are 1024 (1078 - (40 + 14)) other bytes, carrying more information. What are those bytes, and how do I read them? This is the problem. Or is there a more correct way to load a BMP and write it to disk?
For reference, here is the code I used (I'm doing all of this under Windows, btw).
#include <windows.h>
#include <iostream>
#include <stdio.h>
HANDLE hfile;
DWORD written;
BITMAPFILEHEADER bfh;
BITMAPINFOHEADER bih;
int main()
hfile = CreateFile("image.bmp",GENERIC_READ,FILE_SHARE_READ,NULL,OPEN_EXISTING,FILE_ATTRIBUTE_NORMAL,NULL);
ReadFile(hfile,&bfh,sizeof(bfh),&written,NULL);
ReadFile(hfile,&bih,sizeof(bih),&written,NULL);
int imagesize = bih.biWidth * bih.biHeight;
image = (unsigned char*) malloc(imagesize);
ReadFile(hfile,image,imagesize*sizeof(char),&written,NULL);
CloseHandle(hfile);
I'm then doing the exact opposite to write to a file,
hfile = CreateFile("imageout.bmp",GENERIC_WRITE,FILE_SHARE_WRITE,NULL,CREATE_ALWAYS,FILE_ATTRIBUTE_NORMAL,NULL);
WriteFile(hfile,&bfh,sizeof(bfh),&written,NULL);
WriteFile(hfile,&bih,sizeof(bih),&written,NULL);
WriteFile(hfile,image,imagesize*sizeof(char),&written,NULL);
CloseHandle(hfile);
Edit --- Solved
Ok so I finally got it right, it wasn't really complicated after all. As Viktor pointed out, these 1024 bytes represent the color palette.
I added the following to my code:
RGBQUAD palette[256];
// [...] previous declarations [...] int main() [...] then read two headers
ReadFile(hfile,palette,sizeof(palette),&written,NULL);
And then when I write back I added the following,
WriteFile(hfile,palette,sizeof(palette),&written,NULL);
"What are those bytes, and how do I read them, this is the problem."
Those bytes are Palette (or ColorTable in .BMP format terms), as Retired Ninja mentioned in the comment. Basically, it is a table which specifies what color to use for each 8bpp value encountered in the bitmap data.
For greyscale the palette is trivial (I'm not talking about color models and RGB -> greyscale conversion):
for(int i = 0 ; i < 256 ; i++)
{
Palette[i].R = i;
Palette[i].G = i;
Palette[i].B = i;
}
However, there's some padding in the ColorTable's entries, so it takes 4 * 256 bytes and not the 256 * 3 you expected. The fourth component in a ColorTable entry (the RGBQUAD struct) is not an "alpha channel"; it is just something "reserved". See the MSDN page on RGBQUAD (MSDN, RGBQUAD).
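With the actual RGBQUAD field names from <windows.h>, that greyscale palette looks something like this (a sketch; the reserved byte is simply zeroed):

RGBQUAD palette[256];
for (int i = 0; i < 256; i++)
{
    palette[i].rgbRed      = (BYTE)i;
    palette[i].rgbGreen    = (BYTE)i;
    palette[i].rgbBlue     = (BYTE)i;
    palette[i].rgbReserved = 0;
}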
The detailed format description can be found on the wikipedia page:Wiki, bmp format
There's also this linked question on SO with RGBQUAD structure: Writing BMP image in pure c/c++ without other libraries
As Viktor says in his answer, those bytes are the palette. As for how you should read them, take a look at this header-only bitmap class. In particular, look at the references to ColorTable for how it treats the palette depending on the type of BMP it was given.

Jpeglib code gives garbled output, even the bundled example code?

I'm on Ubuntu Intrepid and I'm using jpeglib62 6b-14. I was working on some code, which only gave a black screen with some garbled output at the top when I tried to run it. After a few hours of debugging I got it down to pretty much the JPEG base, so I took the example code, wrote a little piece of code around it and the output was exactly the same.
I'm convinced jpeglib is used in a lot more places on this system and it's simply the version from the repositories so I'm hesitant to say that this is a bug in jpeglib or the Ubuntu packaging.
I put the example code below (most comments stripped). The input JPEG file is an uncompressed 640x480 file with 3 channels, so it should be 921600 bytes (and it is). The output image is JFIF and around 9000 bytes.
If you could help me with even a hint, I'd be very grateful.
Thanks!
#include <stdio.h>
#include <stdlib.h>
#include "jpeglib.h"
#include <setjmp.h>
int main ()
{
// read data
FILE *input = fopen("input.jpg", "rb");
JSAMPLE *image_buffer = (JSAMPLE*) malloc(sizeof(JSAMPLE) * 640 * 480 * 3);
if(input == NULL or image_buffer == NULL)
exit(1);
fread(image_buffer, 640 * 3, 480, input);
// initialise jpeg library
struct jpeg_compress_struct cinfo;
struct jpeg_error_mgr jerr;
cinfo.err = jpeg_std_error(&jerr);
jpeg_create_compress(&cinfo);
// write to foo.jpg
FILE *outfile = fopen("foo.jpg", "wb");
if (outfile == NULL)
exit(1);
jpeg_stdio_dest(&cinfo, outfile);
// setup library
cinfo.image_width = 640;
cinfo.image_height = 480;
cinfo.input_components = 3; // 3 components (R, G, B)
cinfo.in_color_space = JCS_RGB; // RGB
jpeg_set_defaults(&cinfo); // set defaults
// start compressing
int row_stride = 640 * 3; // number of characters in a row
JSAMPROW row_pointer[1]; // pointer to the current row data
jpeg_start_compress(&cinfo, TRUE); // start compressing to jpeg
while (cinfo.next_scanline < cinfo.image_height) {
row_pointer[0] = & image_buffer[cinfo.next_scanline * row_stride];
(void) jpeg_write_scanlines(&cinfo, row_pointer, 1);
}
jpeg_finish_compress(&cinfo);
// clean up
fclose(outfile);
jpeg_destroy_compress(&cinfo);
}
You're reading a JPEG file into memory (without decompressing it) and writing out that buffer as if it were uncompressed, that's why you're getting garbage. You need to decompress the image first before you can feed it into the JPEG compressor.
In other words, the JPEG compressor assumes that its input is raw pixels.
You can convert your input image into raw RGB using ImageMagick:
convert input.jpg rgb:input.raw
It should be exactly 921600 bytes in size.
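If you do want to feed an actual JPEG file in, a rough sketch of decompressing it first with libjpeg (error handling omitted) looks like this; the resulting buffer is what the compression code above expects:

struct jpeg_decompress_struct dinfo;
struct jpeg_error_mgr derr;
dinfo.err = jpeg_std_error(&derr);
jpeg_create_decompress(&dinfo);

jpeg_stdio_src(&dinfo, input);          // 'input' is the FILE* opened earlier
jpeg_read_header(&dinfo, TRUE);
jpeg_start_decompress(&dinfo);

int row_stride = dinfo.output_width * dinfo.output_components;
JSAMPLE *image_buffer = (JSAMPLE*) malloc((size_t)row_stride * dinfo.output_height);

while (dinfo.output_scanline < dinfo.output_height) {
    JSAMPROW row = &image_buffer[dinfo.output_scanline * row_stride];
    jpeg_read_scanlines(&dinfo, &row, 1);
}

jpeg_finish_decompress(&dinfo);
jpeg_destroy_decompress(&dinfo);
// image_buffer now holds raw RGB rows, ready for the compression loop above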
EDIT: Your question is misleading when you state that your input JPEG file is uncompressed. Anyway, I compiled your code and it works fine; it compresses the image correctly. If you can upload the file you're using as input, it might be possible to debug further. If not, I suggest you test your program using an image created from a known JPEG using ImageMagick:
convert some_image_that_is_really_a_jpg.jpg -resize 640x480! rgb:input.jpg
You are reading the input file into memory compressed and then you are recompressing it before writing it to file. You need to decompress the image_buffer before compressing it again. Or alternatively, instead of reading in a JPEG, read a .raw image.
What exactly do you mean by "The input JPEG file is an uncompressed"? Jpegs are all compressed.
In your code, it seems that in the loop you give one row of pixels to libjpeg and ask it to compress it. It doesn't work that way. libjpeg has to have at least 8 rows to start compression (sometimes even more, depending on parameters). So it's best to leave libjpeg to control the input buffer and don't do its job for it.
I suggest you read how cjpeg.c does its job. The easiest way I think is to put your data in a raw type known by libjpeg (say, BMP), and use libjpeg to read the BMP image into its internal representation and compress from there.
