Even though a question of this nature sounds very similar, I am having problems converting a JPG image to YUV in C (without using OpenCV).
This is how I understand the problem so far:
Identify the structure of the JPG and YUV file formats, i.e. what each byte in the file actually contains. This is what I think the JPG format looks like.
With the above structure I tried to read a JPG file and to decipher its 18th and 19th bytes. I cast them to both char and int, but I don't get any meaningful values for the width and height of the image.
Once I have read these values, I should be able to convert the image from JPG to YUV. I was looking at this resource.
Finally, construct the YUV image and write it to a (.yuv) file.
Kindly help me by pointing me to appropriate resources. I will keep updating my progress on this post. Thanks in advance.
Usually the image is already stored in YUV (or, to be more precise, YCbCr).
When reading the file, the JPEG reader usually converts YUV to RGB. Converting back will reduce quality somewhat.
With libjpeg-turbo (http://libjpeg-turbo.virtualgl.org/) you can read the JPEG without color conversion. Check https://github.com/libjpeg-turbo/libjpeg-turbo/blob/master/turbojpeg.h -
it has the tjDecompressToYUV function, which skips the color-conversion step and gives you the three planes, stored one after another, in a single output buffer.
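For illustration, a minimal sketch (untested) of that TurboJPEG path; the file names are placeholders and error handling is omitted:
#include <stdio.h>
#include <stdlib.h>
#include <turbojpeg.h>

int main(void)
{
    /* Read the whole JPEG file into memory. */
    FILE *in = fopen("input.jpg", "rb");
    if (!in) return 1;
    fseek(in, 0, SEEK_END);
    long jpeg_size = ftell(in);
    fseek(in, 0, SEEK_SET);
    unsigned char *jpeg_buf = malloc(jpeg_size);
    fread(jpeg_buf, 1, jpeg_size, in);
    fclose(in);

    tjhandle tj = tjInitDecompress();
    int width, height, subsamp;
    /* Parse the header to learn dimensions and chroma subsampling. */
    tjDecompressHeader2(tj, jpeg_buf, jpeg_size, &width, &height, &subsamp);

    /* One buffer holding the Y, U and V planes back to back. */
    unsigned long yuv_size = tjBufSizeYUV(width, height, subsamp);
    unsigned char *yuv_buf = malloc(yuv_size);
    tjDecompressToYUV(tj, jpeg_buf, jpeg_size, yuv_buf, 0);

    FILE *out = fopen("out.yuv", "wb");
    fwrite(yuv_buf, 1, yuv_size, out);
    fclose(out);

    tjDestroy(tj);
    free(jpeg_buf);
    free(yuv_buf);
    return 0;
}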
Not sure what you have against OpenCV; maybe ImageMagick is acceptable to you? It is installed on most Linux distros and is available for OSX and Windows. It has C bindings, and also a command-line version that I am showing here. So you can create a test image like this:
# Create test image
convert -size 100x100 \
\( xc:red xc:lime xc:blue +append \) \
\( xc:cyan xc:magenta xc:yellow +append \) \
-append image.jpg
Now convert to YUV and write to 3 separate files:
convert image.jpg -colorspace yuv -separate bands.jpg
bands-0.jpg (Y)
bands-1.jpg (U)
bands-2.jpg (V)
Or, closer to what you ask, write all three bands YUV into a binary file:
convert image.jpg -colorspace yuv rgb:yuv.bin
Based on https://en.wikipedia.org/wiki/YUV#Y.27UV444_to_RGB888_conversion
Decoding a JPEG in pure C without libraries is involved, but the following code is a reasonably straightforward implementation:
https://bitbucket.org/Halicery/firerainbow-progressive-jpeg-decoder/src
Assuming you have the JPEG decoded to RGB using the above or a library (using a library is likely easier):
typedef unsigned char byte; /* C has no built-in byte type */
int width = (width of the image);
int height = (height of the image);
byte *mydata = (pointer to rgb pixels);
byte *cursor;
size_t byte_count = (length of the pixels, i.e. width x height x 3);
size_t n;
for (cursor = mydata, n = 0; n < byte_count; cursor += 3, n += 3)
{
    int red = cursor[0], green = cursor[1], blue = cursor[2];
    int y = 0.299 * red + 0.587 * green + 0.114 * blue;
    int u = -0.147 * red - 0.289 * green + 0.436 * blue;
    int v = 0.615 * red - 0.515 * green - 0.100 * blue;
    /* Note: u and v are signed; many raw .yuv consumers expect them
       stored with a +128 offset. */
    cursor[0] = y, cursor[1] = u, cursor[2] = v;
}
// At this point, the entire image has been converted to yuv ...
And write that to file ...
FILE *fout = fopen("myfile.yuv", "wb");
if (fout) {
    fwrite(mydata, 1, byte_count, fout);
    fclose(fout);
}
Related
I am trying to crop an image captured by an ESP32-CAM. The image is in JPG format, and as it is stored as a single-dimensional array I tried to rearrange the elements in the array, but no changes occurred.
I have cropped the image in RGB565, but I am struggling to understand the single-dimensional array (image buffer).
camera_config_t config;
config.ledc_channel = LEDC_CHANNEL_0;
config.ledc_timer = LEDC_TIMER_0;
config.pin_d0 = Y2_GPIO_NUM;
config.pin_d1 = Y3_GPIO_NUM;
config.pin_d2 = Y4_GPIO_NUM;
config.pin_d3 = Y5_GPIO_NUM;
config.pin_d4 = Y6_GPIO_NUM;
config.pin_d5 = Y7_GPIO_NUM;
config.pin_d6 = Y8_GPIO_NUM;
config.pin_d7 = Y9_GPIO_NUM;
config.pin_xclk = XCLK_GPIO_NUM;
config.pin_pclk = PCLK_GPIO_NUM;
config.pin_vsync = VSYNC_GPIO_NUM;
config.pin_href = HREF_GPIO_NUM;
config.pin_sscb_sda = SIOD_GPIO_NUM;
config.pin_sscb_scl = SIOC_GPIO_NUM;
config.pin_pwdn = PWDN_GPIO_NUM;
config.pin_reset = RESET_GPIO_NUM;
config.xclk_freq_hz = 20000000;
config.pixel_format = PIXFORMAT_RGB565;
config.frame_size = FRAMESIZE_SVGA;
// config.jpeg_quality = 10;
config.fb_count = 2;
esp_err_t result = esp_camera_init(&config);
if (result != ESP_OK) {
return false;
}
camera_fb_t * fb = NULL;
fb = esp_camera_fb_get();
if (!fb)
{
Serial.println("Camera capture failed");
}
The fb buffer is a single-dimensional array; I want to extract each individual RGB value.
JPG is a compressed format, meaning that its rows and columns do not correspond to what you would see by displaying a 1:1 grid on the screen. You need to convert it to a plain RGB (or equivalent) format and then crop that.
JPEG achieves compression by splitting the image into YCbCr components, applying a mathematical transform and then quantizing the result. For additional information I refer to this page.
Luckily you can follow this tutorial to do the inverse JPEG transformation on an Arduino (tip: don't count on doing this in real time, unless your time constraints are very relaxed).
The idea is to use a library that converts the JPEG image into an array of data:
Using the library is fairly simple: we give it the JPEG file, and the library will start generating arrays of pixels – so called Minimum Coded Units, or MCUs for short. The MCU is a block of 16 by 8 pixels. The functions in the library will return the color value for each pixel as 16-bit color value. The upper 5 bits are the red value, the middle 6 are green and the lower 5 are blue. Now we can send these values by any sort of communication channel we like.
For your use case you won't send the data through the communication channel, but rather store it in a local array by pushing the blocks into adjacent tiles, then do the crop, as in the sketch below.
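A minimal sketch of that tiling-and-cropping idea, assuming a hypothetical read_next_mcu() helper standing in for whatever the decoder library provides (the names, the image size and the 16x8 MCU geometry are assumptions taken from the tutorial quote above):
#include <stdint.h>
#include <string.h>

#define IMG_W 320            /* full decoded image size (assumed) */
#define IMG_H 240
#define MCU_W 16             /* MCU block size from the tutorial */
#define MCU_H 8

/* Hypothetical: fills 'block' with the next MCU and reports its
   top-left position in the image; returns 0 when no MCUs are left. */
extern int read_next_mcu(uint16_t block[MCU_H][MCU_W], int *x0, int *y0);

static uint16_t frame[IMG_H][IMG_W];   /* full reassembled frame */

void crop(uint16_t *out, int cx, int cy, int cw, int ch)
{
    uint16_t block[MCU_H][MCU_W];
    int x0, y0;

    /* 1. Tile the MCUs into a full-size frame buffer. */
    while (read_next_mcu(block, &x0, &y0))
        for (int r = 0; r < MCU_H && y0 + r < IMG_H; r++)
            for (int c = 0; c < MCU_W && x0 + c < IMG_W; c++)
                frame[y0 + r][x0 + c] = block[r][c];

    /* 2. Copy out the crop window starting at (cx, cy), size cw x ch. */
    for (int r = 0; r < ch; r++)
        memcpy(out + r * cw, &frame[cy + r][cx], cw * sizeof(uint16_t));
}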
That depends on what kind of hardware (camera and board) you are using.
I'm basing this on the OV2640 camera module because it's the one I've been working with. It delivers the image to the frame buffer already encoded, so I'm guessing this might be what you are facing.
Trying to crop the image after it has been encoded can be tricky, but you might be able to instruct the camera chip to only deliver a certain part of the sensor output in the first place using a window function.
The easiest way to access this setting is to define a small helper function:
void setWindow(int resolution, int xOffset, int yOffset, int xLength, int yLength) {
    sensor_t * s = esp_camera_sensor_get();
    resolution = 0; // force the full 1600 x 1200 readout as the basis for the window
    s->set_res_raw(s, resolution, 0, 0, 0, xOffset, yOffset, xLength, yLength, xLength, yLength, true, true);
}
/*
 * resolution = 0  ->  1600 x 1200
 * resolution = 1  ->  800 x 600
 * resolution = 2  ->  400 x 296
 */
where (xOffset,yOffset) is the origin of the window in pixels and (xLength,yLength) is the size of the window in pixels. Be aware that changing the resolution will effectively overwrite these settings. Otherwise this works great for me, although for some reason only if the aspect ratio of 4:3 is preserved in the window size.
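For example, a hedged usage sketch (the offsets are just one plausible choice), cropping a centered 800x600 window out of the full 1600x1200 sensor area:
// Center an 800 x 600 window (4:3, matching the note above) in the
// 1600 x 1200 sensor area: (1600-800)/2 = 400, (1200-600)/2 = 300.
setWindow(0, 400, 300, 800, 600);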
Looking at the output format table for the ESP32 Camera Driver one can see that most output formats are non-JPEG. If you can handle a RAW format instead (it will be slower to save/transfer, and be MUCH larger) then that would allow you to more easily crop the image by making a copy with a couple of loops. JPEG is compressed and not easily cropped. The page linked also mentions this:
Using YUV or RGB puts a lot of strain on the chip because writing to PSRAM is not particularly fast. The result is that image data might be missing. This is particularly true if WiFi is enabled. If you need RGB data, it is recommended that JPEG is captured and then turned into RGB using fmt2rgb888 or fmt2bmp/frame2bmp
If you are using PIXFORMAT_RGB565 (which means each pixel value will be kept in TWO bytes, and the image is not jpeg compressed) and FRAMESIZE_SVGA (800x600 pixels), you should be able to access the framebuffer as a two-dimensional array if you want:
uint16_t *buffer = (uint16_t *) fb->buf;  // fb->buf is a uint8_t pointer, so cast it
uint16_t pxl = buffer[row * 800 + column]; // 800 is the SVGA width
// pxl now contains 5 R-bits, 6 G-bits, 5 B-bits
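And if you then want the individual channels, a short sketch of unpacking RGB565 (assuming the usual bit layout; some cameras deliver the two bytes swapped, in which case swap them first):
// Unpack RGB565 (rrrrrggg gggbbbbb) and scale back to 8 bits per channel.
uint8_t r = (pxl >> 11) & 0x1F;  r = (r << 3) | (r >> 2);
uint8_t g = (pxl >> 5)  & 0x3F;  g = (g << 2) | (g >> 4);
uint8_t b =  pxl        & 0x1F;  b = (b << 3) | (b >> 2);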
I'm trying to write a PNG file from an image that can be grayscale (8 bits * 1 component) or RGB (8 bits * 3 components) with libpng in C.
I read the manual and wrote this piece of code that doesn't work :-/
/* writing the image */
png_byte *row_pointers[img->height];
int h;
for (h = 0 ; h < img->height ; h++)
{
row_pointers[h] = img->data+h*img->width*image_components;
}
png_write_image(png_ptr, row_pointers);
Nothing is written into the image, and I don't understand why.
img->data points to the image data (interleaved in the case of the RGB format).
The documentation says you are supposed to call png_write_end; see the "Finishing a sequential write" section in http://www.libpng.org/pub/png/libpng-1.2.5-manual.html.
There are plenty of examples out there (e.g. http://zarb.org/~gc/html/libpng.html).
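For illustration, a minimal sketch of the complete sequential-write sequence (untested; the img fields are flattened into parameters and error handling is reduced to the setjmp idiom):
#include <png.h>
#include <stdio.h>

/* Write an 8-bit image; components is 1 (gray) or 3 (RGB). */
int write_png(const char *path, unsigned char *data,
              int width, int height, int components)
{
    FILE *fp = fopen(path, "wb");
    if (!fp) return -1;

    png_structp png_ptr = png_create_write_struct(PNG_LIBPNG_VER_STRING,
                                                  NULL, NULL, NULL);
    png_infop info_ptr = png_create_info_struct(png_ptr);
    if (setjmp(png_jmpbuf(png_ptr))) {       /* libpng error handling */
        png_destroy_write_struct(&png_ptr, &info_ptr);
        fclose(fp);
        return -1;
    }

    png_init_io(png_ptr, fp);
    png_set_IHDR(png_ptr, info_ptr, width, height, 8,
                 components == 1 ? PNG_COLOR_TYPE_GRAY : PNG_COLOR_TYPE_RGB,
                 PNG_INTERLACE_NONE, PNG_COMPRESSION_TYPE_DEFAULT,
                 PNG_FILTER_TYPE_DEFAULT);
    png_write_info(png_ptr, info_ptr);       /* the header must go out first */

    png_byte *row_pointers[height];
    for (int h = 0; h < height; h++)
        row_pointers[h] = data + h * width * components;
    png_write_image(png_ptr, row_pointers);

    png_write_end(png_ptr, info_ptr);        /* the step the question missed */
    png_destroy_write_struct(&png_ptr, &info_ptr);
    fclose(fp);
    return 0;
}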
I am trying to use the V4L2 API to capture images and put them into an OpenCV Mat. The problem is my webcam only captures in YUYV (YUY2), so I need to convert to RGB24 first. Here is the complete V4L2 code that I am using.
I was able to get objects in the picture to be recognizable, but it is all pink and green, and it is stretched horizontally and distorted. I have tried many different conversion formulas and I have had the same basic pink/green distorted image. The formula used for this picture is from http://paulbourke.net/dataformats/yuv/. I am using the Shotwell photo viewer on Linux to view the .raw image; I couldn't get GIMP to open it. I am not that knowledgeable about image formats, but I am assuming there has to be some kind of header, yet the Shotwell photo viewer seemed to work. Could this possibly be the reason for the incorrect image?
I am not sure if V4L2 is returning a signed or unsigned byte image, which is pointed to by p. But if this were the problem, wouldn't my image just be off-color? It seems the geometry is distorted too. I believe I took care of the casting to and from floating point properly.
Could someone help me understand
how to find out the underlying type contained in the void *p variable
the proper formula for converting from YUYV to RGB24, including explanations of which types to use
whether saving the image with no format (headers) and viewing it with Shotwell could be the problem
whether there is an easy way to save an RGB24 image properly
general debugging tips
Thanks
static unsigned char *bgr_image;
static void process_image(void *p, int size)
{
frame_number++;
char filename[15];
sprintf(filename, "frame-%d.raw", frame_number);
FILE *fp=fopen(filename,"wb");
int i;
float y1, y2, u, v;
char * bgr_p = bgr_image;
unsigned char * p_tmp = (unsigned char *) p;
for (i=0; i < size; i+=4) {
y1 = p_tmp[i];
u = p_tmp[i+1];
y2 = p_tmp[i+2];
v = p_tmp[i+3];
bgr_p[0] = (y1 + 1.371*(u - 128.0));
bgr_p[1] = (y1 - 0.698*(u - 128.0) - 0.336*(v - 128.0));
bgr_p[2] = (y1 + 1.732*(v - 128.0));
bgr_p[3] = (y2 + 1.371*(v - 128.0));
bgr_p[4] = (y2 - 0.698*(v - 128.0) - 0.336*(u - 128.0));
bgr_p[5] = (y2 + 1.732*(u - 128.0));
bgr_p+=6;
}
fwrite(bgr_image, size, 1, fp);
fflush(fp);
fclose(fp);
}
First, you must understand what type of YUV422 you are working with.
PIX_FMT_YUYV422, ///< packed YUV 4:2:2, 16bpp, Y0 Cb Y1 Cr
PIX_FMT_UYVY422, ///< packed YUV 4:2:2, 16bpp, Cb Y0 Cr Y1
Try replacing y1, u, y2, and v accordingly (a sketch of the two orders follows below), but you may not be dealing with YUV422 at all; the picture could be in a planar rather than the packed format you are expecting.
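A minimal sketch of what "replacing accordingly" means for the two packed orders, using the variable names from the code in the question:
/* YUYV (Y0 Cb Y1 Cr): */
y1 = p_tmp[i];   u = p_tmp[i+1];   y2 = p_tmp[i+2];   v = p_tmp[i+3];

/* UYVY (Cb Y0 Cr Y1): */
u = p_tmp[i];    y1 = p_tmp[i+1];  v = p_tmp[i+2];    y2 = p_tmp[i+3];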
I think it's better for you to download IrfanView, which can open raw YUV files; try picking the correct values until the image decodes correctly, to find out what type of data you have.
Do not try to re-invent the wheel. Lots of people have written colorspace converters, and chances are high that your implementation (even if it works) is not the "optimal" one (e.g. slower than necessary).
The canonical way to deal with V4L2 devices of any colorspace is to use the libv4l library, which will transparently convert the camera's native colorspace to one of BGR24, RGB24 and YUV420 (if you desire that, which I think is true).
As for saving the image, again use what is already there. Personally, I would use ImageMagick to save a frame in a "proper" format that can be read by any image viewer (PNG or TIFF, if quality matters).
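For illustration, a minimal sketch of the libv4l route (device path, frame size and output file name are assumptions; link with -lv4l2). The library converts whatever the camera delivers into the BGR24 you ask for:
#include <fcntl.h>
#include <libv4l2.h>
#include <linux/videodev2.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    int fd = v4l2_open("/dev/video0", O_RDWR);
    if (fd < 0) return 1;

    /* Ask for BGR24; libv4l converts from the camera's native format. */
    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 640;
    fmt.fmt.pix.height = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_BGR24;
    if (v4l2_ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) return 1;

    size_t size = fmt.fmt.pix.sizeimage;
    unsigned char *frame = malloc(size);

    /* libv4l emulates plain read() on most devices. */
    if (v4l2_read(fd, frame, size) > 0) {
        FILE *out = fopen("frame.bgr", "wb");
        fwrite(frame, 1, size, out);
        fclose(out);
    }

    free(frame);
    v4l2_close(fd);
    return 0;
}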
I can't make sense of the BMP format. I know it's supposed to be simple, but somehow I'm missing something. I thought it was 2 headers followed by the actual bytes defining the image, but the numbers do not add up.
For instance, I'm simply trying to load this BMP file into memory (640x480, 8bpp, grayscale) and just write it back to a different file. From what I understand, there are two different headers: BITMAPFILEHEADER and BITMAPINFOHEADER. The BITMAPFILEHEADER is 14 bytes, and the BITMAPINFOHEADER is 40 bytes (this one depends on the BMP; how I can tell is another story). Anyhow, the BITMAPFILEHEADER, through its member bfOffBits, says that the bitmap bits start at offset 1078. This means that there are 1024 (1078 - (40+14)) other bytes carrying more information. What are those bytes, and how do I read them? That is the problem. Or is there a more correct way to load a BMP and write it to disk?
For reference, here is the code I used (I'm doing all of this under Windows, btw).
#include <windows.h>
#include <iostream>
#include <stdio.h>
HANDLE hfile;
DWORD written;
BITMAPFILEHEADER bfh;
BITMAPINFOHEADER bih;
unsigned char *image; // buffer for the pixel data (declaration was missing)
int main()
{
hfile = CreateFile("image.bmp",GENERIC_READ,FILE_SHARE_READ,NULL,OPEN_EXISTING,FILE_ATTRIBUTE_NORMAL,NULL);
ReadFile(hfile,&bfh,sizeof(bfh),&written,NULL);
ReadFile(hfile,&bih,sizeof(bih),&written,NULL);
int imagesize = bih.biWidth * bih.biHeight;
image = (unsigned char*) malloc(imagesize);
ReadFile(hfile,image,imagesize*sizeof(char),&written,NULL);
CloseHandle(hfile);
I'm then doing the exact opposite to write to a file,
hfile = CreateFile("imageout.bmp",GENERIC_WRITE,FILE_SHARE_WRITE,NULL,CREATE_ALWAYS,FILE_ATTRIBUTE_NORMAL,NULL);
WriteFile(hfile,&bfh,sizeof(bfh),&written,NULL);
WriteFile(hfile,&bih,sizeof(bih),&written,NULL);
WriteFile(hfile,image,imagesize*sizeof(char),&written,NULL);
CloseHandle(hfile);
Edit --- Solved
Ok so I finally got it right, it wasn't really complicated after all. As Viktor pointed out, these 1024 bytes represent the color palette.
I added the following to my code:
RGBQUAD palette[256];
// [...] previous declarations [...] then, inside main(), after reading the two headers:
ReadFile(hfile,palette,sizeof(palette),&written,NULL);
And then when I write back I added the following,
WriteFile(hfile,palette,sizeof(palette),&written,NULL);
"What are those bytes, and how do I read them, this is the problem."
Those bytes are Palette (or ColorTable in .BMP format terms), as Retired Ninja mentioned in the comment. Basically, it is a table which specifies what color to use for each 8bpp value encountered in the bitmap data.
For greyscale the palette is trivial (I'm not talking about color models and RGB -> greyscale conversion):
for (int i = 0; i < 256; i++)
{
    palette[i].rgbRed   = i;
    palette[i].rgbGreen = i;
    palette[i].rgbBlue  = i;
    palette[i].rgbReserved = 0;
}
However, there's some padding in the ColorTable's entries, so it takes 4 * 256 bytes and not the 256 * 3 you counted on. The fourth component in the ColorTable's entry (the RGBQUAD struct) is not an "alpha channel"; it is just something "reserved". See the MSDN page on RGBQUAD (MSDN, RGBQUAD).
The detailed format description can be found on the Wikipedia page: Wiki, BMP format.
There's also this linked question on SO with RGBQUAD structure: Writing BMP image in pure c/c++ without other libraries
As Viktor says in his answer, those bytes are the palette. As for how you should read them, take a look at this header-only bitmap class. In particular, look at the references to ColorTable for how it treats the palette depending on the type of BMP it was given.
I'm on Ubuntu Intrepid and I'm using jpeglib62 6b-14. I was working on some code, which only gave a black screen with some garbled output at the top when I tried to run it. After a few hours of debugging I got it down to pretty much the JPEG base, so I took the example code, wrote a little piece of code around it and the output was exactly the same.
I'm convinced jpeglib is used in a lot more places on this system and it's simply the version from the repositories so I'm hesitant to say that this is a bug in jpeglib or the Ubuntu packaging.
I put the example code below (most comments stripped). The input JPEG file is an uncompressed 640x480 file with 3 channels, so it should be 921600 bytes (and it is). The output image is JFIF and around 9000 bytes.
If you could help me with even a hint, I'd be very grateful.
Thanks!
#include <stdio.h>
#include <stdlib.h>
#include "jpeglib.h"
#include <setjmp.h>
int main ()
{
// read data
FILE *input = fopen("input.jpg", "rb");
JSAMPLE *image_buffer = (JSAMPLE*) malloc(sizeof(JSAMPLE) * 640 * 480 * 3);
if (input == NULL || image_buffer == NULL)
    exit(1);
fread(image_buffer, 640 * 3, 480, input);
fclose(input);
// initialise jpeg library
struct jpeg_compress_struct cinfo;
struct jpeg_error_mgr jerr;
cinfo.err = jpeg_std_error(&jerr);
jpeg_create_compress(&cinfo);
// write to foo.jpg
FILE *outfile = fopen("foo.jpg", "wb");
if (outfile == NULL)
exit(1);
jpeg_stdio_dest(&cinfo, outfile);
// setup library
cinfo.image_width = 640;
cinfo.image_height = 480;
cinfo.input_components = 3; // 3 components (R, G, B)
cinfo.in_color_space = JCS_RGB; // RGB
jpeg_set_defaults(&cinfo); // set defaults
// start compressing
int row_stride = 640 * 3; // number of characters in a row
JSAMPROW row_pointer[1]; // pointer to the current row data
jpeg_start_compress(&cinfo, TRUE); // start compressing to jpeg
while (cinfo.next_scanline < cinfo.image_height) {
row_pointer[0] = & image_buffer[cinfo.next_scanline * row_stride];
(void) jpeg_write_scanlines(&cinfo, row_pointer, 1);
}
jpeg_finish_compress(&cinfo);
// clean up
fclose(outfile);
jpeg_destroy_compress(&cinfo);
free(image_buffer);
return 0;
}
You're reading a JPEG file into memory (without decompressing it) and writing out that buffer as if it were uncompressed, that's why you're getting garbage. You need to decompress the image first before you can feed it into the JPEG compressor.
In other words, the JPEG compressor assumes that its input is raw pixels.
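For illustration, a minimal sketch (no error handling) of running the file through libjpeg's decompressor first, so that the compressor receives raw pixels:
// Decompress input.jpg to a raw RGB buffer with libjpeg (sketch only).
struct jpeg_decompress_struct dinfo;
struct jpeg_error_mgr derr;
dinfo.err = jpeg_std_error(&derr);
jpeg_create_decompress(&dinfo);

FILE *input = fopen("input.jpg", "rb");
jpeg_stdio_src(&dinfo, input);
jpeg_read_header(&dinfo, TRUE);
jpeg_start_decompress(&dinfo);

int row_stride = dinfo.output_width * dinfo.output_components;
JSAMPLE *pixels = malloc(dinfo.output_height * row_stride);
while (dinfo.output_scanline < dinfo.output_height) {
    JSAMPROW row = pixels + dinfo.output_scanline * row_stride;
    jpeg_read_scanlines(&dinfo, &row, 1);
}
jpeg_finish_decompress(&dinfo);
jpeg_destroy_decompress(&dinfo);
fclose(input);
// 'pixels' now holds raw RGB suitable for the compression code in the question.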
Alternatively, you can convert your input image into raw RGB using ImageMagick:
convert input.jpg rgb:input.raw
It should be exactly 921600 bytes in size.
EDIT: Your question is misleading when you state that your input JPEG file is uncompressed. Anyway, I compiled your code and it works fine; it compresses the image correctly. If you can upload the file you're using as input, it might be possible to debug further. If not, I suggest you test your program using an image created from a known JPEG using ImageMagick:
convert some_image_that_is_really_a_jpg.jpg -resize 640x480! rgb:input.jpg
You are reading the input file into memory compressed, and then you are recompressing it before writing it to file. You need to decompress the image_buffer before compressing it again. Alternatively, instead of reading in a JPEG, read a .raw image.
What exactly do you mean by "The input JPEG file is an uncompressed"? Jpegs are all compressed.
In your code, it seems that in the loop you give one row of pixels at a time to libjpeg and ask it to compress it. Note that libjpeg buffers rows internally and compresses in blocks of 8 (or more, depending on parameters) rows, so it's best to let libjpeg control the input buffer rather than doing its job for it.
I suggest you read how cjpeg.c does its job. The easiest way, I think, is to put your data in a raw format known by libjpeg (say, BMP) and use libjpeg's sample code to read the BMP image into its internal representation and compress from there.