Using OpenCV Mat images with Intel IPP?

I've recently started using Intel Integrated Performance Primitives (IPP) for image processing. For those who haven't heard of IPP, think of it as the analogue of MKL for image processing instead of linear algebra.
I've already implemented a somewhat complicated vision system in OpenCV, and I'd like to swap out some of the OpenCV routines (e.g. convolution and FFT) for faster IPP routines. My OpenCV code always uses the cv::Mat image data structure. However, based on the IPP code samples, it seems that IPP prefers the CIppiImage data structure.
My system does several image transformations in OpenCV, then I want to do a couple of things in IPP, then do more work in OpenCV. Here's a naive way to make OpenCV and IPP play nicely together:
cv::Mat = load original image
use OpenCV to do some work on cv::Mat
write cv::Mat to file
CIppiImage = read cv::Mat from file //for IPP
use IPP to do some work on CIppiImage
write CIppiImage to file
cv::Mat = read CIppiImage from file
use OpenCV to do more work on cv::Mat
write final image to file
However, this is kind of tedious, and reading/writing files probably adds to the overall execution time.
I'm trying to make it more seamless to alternate between OpenCV and IPP in an image processing program. Here are a couple of things that could solve the problem:
Is there a one-liner that would convert a cv::Mat to CIppiImage and vice versa?
I am pretty familiar with the cv::Mat implementation details, but I don't know much about CIppiImage. Do cv::Mat and CIppiImage have the same data layout? If so, could I do something similar to the following cast? CIppiImage cimg = (CIppiImage)(&myMat.data[0])?

There's a clean way to pass OpenCV data into an IPP function.
If we have an OpenCV Mat, we can cast &Mat.data[0] to a const Ipp<type>*. For example, if we're dealing with 8-bit unsigned char (8u) data, we can plug (const Ipp8u*)&img.data[0] into an IPP function. Here's an example using the ippiFilter function with the typical Lena image:
Mat img = imread("./Lena.pgm", IMREAD_GRAYSCALE); //load as 8U_C1
Mat outImg = img.clone(); //allocate space for convolution results
int step = (int)img.step; //pitch in bytes (equals img.cols for a continuous 8u C1 Mat)
const Ipp32s kernel[9] = {-1, 0, 1, -1, 0, 1, -1, 0, 1};
IppiSize kernelSize = {3, 3};
IppiSize dstRoiSize = {img.cols - kernelSize.width + 1, img.rows - kernelSize.height + 1};
IppiPoint anchor = {2, 2};
int divisor = 1;
IppStatus status = ippiFilter_8u_C1R((const Ipp8u*)&img.data[0], step,
                                     (Ipp8u*)&outImg.data[0], step, dstRoiSize,
                                     kernel, kernelSize, anchor, divisor);
When I write outImg (from the above code) to a file, it gives the expected result. This matches the result I got when I ran the Nvidia version, nppiFilter, with the same parameters.
I mentioned a structure called CIppiImage in the original question. CIppiImage is just a simple wrapper for an array.
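For reference, a minimal sketch of what such a wrapper amounts to (the type and field names here are illustrative, not the actual class from the IPP samples):
#include <ipp.h>

/* Illustrative only, not the real CIppiImage: the point is that it boils
 * down to a pointer plus geometry, which is why a cv::Mat's data pointer
 * can be handed to IPP directly. */
typedef struct {
    Ipp8u *data; /* pixel data, row-major */
    int width;   /* pixels per row */
    int height;  /* number of rows */
    int step;    /* bytes per row (pitch) */
} SimpleIppImage;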

Related

Is there any way to crop a JPG image captured by an ESP cam?

I am trying to crop an image captured by an ESP cam. The image is in JPG format and I would like to crop it. Since the image is stored as a single-dimensional array, I tried to rearrange the elements in the array, but no changes occurred.
I have cropped the image in RGB565, but I am struggling to understand the single-dimensional array (image buffer).
camera_config_t config;
config.ledc_channel = LEDC_CHANNEL_0;
config.ledc_timer = LEDC_TIMER_0;
config.pin_d0 = Y2_GPIO_NUM;
config.pin_d1 = Y3_GPIO_NUM;
config.pin_d2 = Y4_GPIO_NUM;
config.pin_d3 = Y5_GPIO_NUM;
config.pin_d4 = Y6_GPIO_NUM;
config.pin_d5 = Y7_GPIO_NUM;
config.pin_d6 = Y8_GPIO_NUM;
config.pin_d7 = Y9_GPIO_NUM;
config.pin_xclk = XCLK_GPIO_NUM;
config.pin_pclk = PCLK_GPIO_NUM;
config.pin_vsync = VSYNC_GPIO_NUM;
config.pin_href = HREF_GPIO_NUM;
config.pin_sscb_sda = SIOD_GPIO_NUM;
config.pin_sscb_scl = SIOC_GPIO_NUM;
config.pin_pwdn = PWDN_GPIO_NUM;
config.pin_reset = RESET_GPIO_NUM;
config.xclk_freq_hz = 20000000;
config.pixel_format = PIXFORMAT_RGB565;
config.frame_size = FRAMESIZE_SVGA;
// config.jpeg_quality = 10;
config.fb_count = 2;
esp_err_t result = esp_camera_init(&config);
if (result != ESP_OK) {
    return false;
}
camera_fb_t *fb = esp_camera_fb_get();
if (!fb) {
    Serial.println("Camera capture failed");
}
The fb buffer is a single-dimensional array; I want to extract each individual RGB value.
JPG is a compressed format, meaning that the bytes in the buffer do not correspond 1:1 to the grid of pixels you would see on the screen. You need to convert it to a plain RGB (or equivalent) format first and then copy the region you want.
JPG achieves compression by splitting the image into YCbCr components, applying a mathematical transformation and then filtering the result. For additional information I refer to this page.
Luckily you can follow this tutorial to do the inverse JPEG transformation on an Arduino (tip: forget about doing this in real time, unless your time constraints are very relaxed).
The idea is to use a library that converts the JPEG image into an array of data:
Using the library is fairly simple: we give it the JPEG file, and the library will start generating arrays of pixels, so-called Minimum Coded Units, or MCUs for short. An MCU is a block of 16 by 8 pixels. The functions in the library will return the color value for each pixel as a 16-bit color value. The upper 5 bits are the red value, the middle 6 are green and the lower 5 are blue. Now we can send these values by any sort of communication channel we like.
For your use case you won't send the data through the communication channel, but rather store it in a local array by pushing the blocks into adjacent tiles, then do the crop.
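A rough sketch of that idea, assuming the decoder hands you one 16x8 MCU at a time as RGB565 pixels along with the block's top-left frame coordinates (the function and parameter names are illustrative, not a particular library's API):
#include <stdint.h>

/* Sketch: copy one decoded 16x8 MCU into its place in a full-frame
 * RGB565 buffer; (mcuX, mcuY) is the block's top-left frame position. */
void placeMcu(uint16_t *frame, int frameWidth,
              const uint16_t *mcu, int mcuX, int mcuY)
{
    for (int row = 0; row < 8; row++) {
        for (int col = 0; col < 16; col++) {
            frame[(mcuY + row) * frameWidth + (mcuX + col)] = mcu[row * 16 + col];
        }
    }
}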
That depends on what kind of hardware (camera and board) you are using.
I'm basing this on the OV2640 camera module because it's the one I've been working with. It delivers the image to the frame buffer already encoded, so I'm guessing this might be what you are facing.
Trying to crop the image after it has been encoded can be tricky, but you might be able to instruct the camera chip to only deliver a certain part of the sensor output in the first place using a window function.
The easiest way to use this setting is to define a helper function:
void setWindow(int resolution, int xOffset, int yOffset, int xLength, int yLength) {
    sensor_t *s = esp_camera_sensor_get();
    s->set_res_raw(s, resolution, 0, 0, 0, xOffset, yOffset, xLength, yLength, xLength, yLength, true, true);
}
/*
 * resolution = 0 -> 1600 x 1200
 * resolution = 1 -> 800 x 600
 * resolution = 2 -> 400 x 296
 */
where (xOffset,yOffset) is the origin of the window in pixels and (xLength,yLength) is the size of the window in pixels. Be aware that changing the resolution will effectively overwrite these settings. Otherwise this works great for me, although for some reason only if the aspect ratio of 4:3 is preserved in the window size.
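For example, to restrict the sensor output to an 800x600 window (4:3, per the note above) starting at (400, 300), with the offsets being arbitrary example values:
// Example: call after esp_camera_init() has succeeded; 0 selects the
// 1600x1200 mode and the window values are illustrative.
setWindow(0, 400, 300, 800, 600);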
Looking at the output format table for the ESP32 Camera Driver, one can see that most output formats are non-JPEG. If you can handle a RAW format instead (it will be slower to save/transfer, and MUCH larger), that would allow you to crop the image easily by making a copy with a couple of loops. JPEG is compressed and not easily cropped. The page linked also mentions this:
Using YUV or RGB puts a lot of strain on the chip because writing to PSRAM is not particularly fast. The result is that image data might be missing. This is particularly true if WiFi is enabled. If you need RGB data, it is recommended that JPEG is captured and then turned into RGB using fmt2rgb888 or fmt2bmp/frame2bmp
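Following that advice, here is a minimal sketch using the driver's fmt2rgb888() converter from img_converters.h; the 800x600 sizing assumes the SVGA frame size from the question, and error handling is reduced to a bool:
#include "esp_camera.h"
#include "img_converters.h"

// Sketch: capture one frame and expand it to RGB888 (3 bytes per pixel).
// rgb must point at 800 * 600 * 3 bytes (e.g. allocated in PSRAM).
bool captureRgb888(uint8_t *rgb)
{
    camera_fb_t *fb = esp_camera_fb_get();
    if (!fb) {
        return false;
    }
    bool ok = fmt2rgb888(fb->buf, fb->len, fb->format, rgb);
    esp_camera_fb_return(fb);
    return ok;
}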
If you are using PIXFORMAT_RGB565 (which means each pixel value is kept in TWO bytes, and the image is not JPEG-compressed) and FRAMESIZE_SVGA (800x600 pixels), you should be able to access the framebuffer as a two-dimensional array if you want:
uint16_t *buffer = (uint16_t *)fb->buf; // fb->buf is uint8_t*, so cast it
uint16_t pxl = buffer[row * 800 + column]; // 800 is the SVGA width
// pxl now contains 5 R-bits, 6 G-bits, 5 B-bits
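A crop is then just a copy with two loops; here is a minimal sketch with example window values, allocating the copy in PSRAM via heap_caps_malloc:
// Sketch: crop a w x h window whose top-left corner is (x0, y0) out of
// the 800x600 RGB565 framebuffer. The window values are examples.
uint16_t *src = (uint16_t *)fb->buf;
int x0 = 100, y0 = 50, w = 320, h = 240;
uint16_t *cropped = (uint16_t *)heap_caps_malloc(w * h * sizeof(uint16_t), MALLOC_CAP_SPIRAM);
for (int row = 0; row < h; row++) {
    for (int col = 0; col < w; col++) {
        cropped[row * w + col] = src[(y0 + row) * 800 + (x0 + col)];
    }
}
// Unpacking one RGB565 pixel into its channels:
uint16_t pxl = cropped[0];
uint8_t r = (pxl >> 11) & 0x1F; // upper 5 bits
uint8_t g = (pxl >> 5) & 0x3F;  // middle 6 bits
uint8_t b = pxl & 0x1F;         // lower 5 bits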

How to use KissFFT with audio?

I have an array of 2048 samples of an audio file at 44.1 kHz and want to transform it into a spectrum for an LED effect. I don't know too much about the inner workings of the FFT, but I tried it using KissFFT:
kiss_fft_cpx *cpx_in = malloc(FRAMES * sizeof(kiss_fft_cpx));
kiss_fft_cpx *cpx_out = malloc(FRAMES * sizeof(kiss_fft_cpx));
kiss_fft_cfg cfg = kiss_fft_alloc( FRAMES , 0 ,0,0 );
for (int j = 0; j < FRAMES; j++) {
    float x = (alsa_buffer[(fft_last_index + j + BUFFER_OVERSIZE * FRAMES) % (BUFFER_OVERSIZE * FRAMES)] - offset);
    cpx_in[j] = (kiss_fft_cpx){.r = x, .i = x};
}
kiss_fft(cfg, cpx_in, cpx_out);
My output seems really off. When I play a simple sine wave, there are multiple outputs with values way above zero. Also, it generally seems like the first entries are much higher. Do I have to weight the outputs?
I also don't understand how I have to treat the complex numbers. I'm currently feeding my input values into both the real and imaginary parts, and for the output I take the absolute value; is that right?
Also, spectrum analyzers for audio usually have logarithmic scaling, so I tried that. The problem is that the FFT output, as far as I know, isn't logarithmic: the first band might be, say, 0-100 Hz, but optimally my first LED should only cover up to about 60 Hz (a fraction of the first output's band), while the last LED would be, say, 8 kHz to 10 kHz, which would in that case span 20 FFT outputs.
Is there any way to make the output logarithmic? How do I limit the spectrum to 20 kHz (or know what the bands of the output are in general), and is there anything else to look out for when working with audio signals?
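For what it's worth, here is a minimal sketch of the conventional real-input setup, assuming kiss_fft_scalar is float (the default): the imaginary input part stays zero, only the first FRAMES/2 bins are meaningful, and bin j is centered near j * 44100 / FRAMES (about 21.5 Hz per bin here):
#include <math.h>
#include <stdlib.h>
#include "kiss_fft.h"

#define FRAMES 2048

/* Sketch: magnitude spectrum of FRAMES real samples. */
void spectrum(const float *samples, float *mags /* FRAMES/2 entries */)
{
    kiss_fft_cpx *in  = (kiss_fft_cpx *)malloc(FRAMES * sizeof(kiss_fft_cpx));
    kiss_fft_cpx *out = (kiss_fft_cpx *)malloc(FRAMES * sizeof(kiss_fft_cpx));
    kiss_fft_cfg cfg  = kiss_fft_alloc(FRAMES, 0, NULL, NULL);
    for (int j = 0; j < FRAMES; j++) {
        in[j].r = samples[j]; /* real part: the audio sample */
        in[j].i = 0.0f;       /* imaginary part stays zero for real input */
    }
    kiss_fft(cfg, in, out);
    for (int j = 0; j < FRAMES / 2; j++) /* bins above FRAMES/2 mirror these */
        mags[j] = sqrtf(out[j].r * out[j].r + out[j].i * out[j].i);
    free(in);
    free(out);
    kiss_fft_free(cfg);
}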

FFTW output differs from MATLAB with the same input dataset

I am developing an application that should analyze data coming from an A/D stage and find the frequency peaks in a defined frequency range (0-10kHz).
We are using the FFTW3 library, version 3.3.6, running on 64-bit Slackware Linux (GCC version 5.3.0). As you can see in the piece of code included, we run the FFTW plan and get the result in the complex vector result[]. We have verified the operations using MATLAB: we ran the FFT in MATLAB (which claims to use the same library) with exactly the same input datasets (the complex signal[] as in the source code). We observe some differences between the FFTW (Linux ANSI C) and MATLAB runs. Each plot is done using MATLAB. In particular, we would like to understand (regarding the mag[] array):
Why is the noise floor so different?
After the main peak (at more or less 3 kHz) we observe a negative peak in the Linux result, while MATLAB correctly shows a secondary peak, as in the input signal.
In these examples, we do not perform any output normalization, neither in Linux nor in MATLAB. The two plots show the magnitude of the FFT results (not converted to dB).
The correct result is the MATLAB one. Does someone have any suggestion about these differences? And how can we produce results closer to MATLAB with the FFTW library?
Below is the relevant piece of C source code.
//
// Part of source code:
//
// rup[] is filled with unsigned char data coming from an A/D conversion stage (8 bit depth)
// Sampling Frequency is 45.454 KHz
// Frequency Range: 0 - 10.0 KHz
//
#define CONVCOST 0.00787401574803149606 /* = 1/127, scales the 8-bit samples */
#define REAL 0 /* fftw_complex is double[2]: [0] = real, [1] = imaginary */
#define IMAG 1
double mag[4096];
unsigned char rup[4096];
int i;
fftw_complex signal[1024];
fftw_complex result[1024];
...
fftw_plan plan = fftw_plan_dft_1d(1024, signal, result, FFTW_FORWARD, FFTW_ESTIMATE);
for (i = 0; i < 1024; i++)
{
    signal[i][REAL] = (double)rup[i] * CONVCOST;
    signal[i][IMAG] = 0.0;
}
fftw_execute(plan);
for (i = 0; i < 512; ++i)
{
    mag[i] = sqrt(result[i][REAL] * result[i][REAL] + result[i][IMAG] * result[i][IMAG]);
}
fftw_destroy_plan(plan);
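For reference, here is a small sketch of how the mag[] bins map onto frequencies for these parameters (Fs = 45.454 kHz, N = 1024) and how the magnitudes can be converted to dB for plotting; the small epsilon only guards against log10(0):
#include <math.h>

/* Sketch: with Fs = 45454 Hz and N = 1024, each bin is Fs/N (about 44.4 Hz)
 * wide, so the 0-10 kHz range of interest covers roughly the first 225 bins. */
void bins_to_db(const double mag[512], double freq[512], double db[512])
{
    const double fs = 45454.0; /* sampling frequency, Hz */
    const int n = 1024;        /* FFT length */
    int i;
    for (i = 0; i < 512; i++) {
        freq[i] = (double)i * fs / n;           /* bin center frequency */
        db[i]   = 20.0 * log10(mag[i] + 1e-12); /* magnitude in dB */
    }
}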

Converting from YUY2 to RGB24 from V4L2 API

I am trying to use the V4L2 API to capture images and put them into an OpenCV Mat. The problem is that my webcam only captures in YUYV (YUY2), so I need to convert to RGB24 first. Here is the complete V4L2 code that I am using.
I was able to get objects in the picture to be recognizable, but it is all pink and green, and it is stretched horizontally and distorted. I have tried many different conversion formulas and I have had the same basic pink/green distorted image. The formula used for this picture is from http://paulbourke.net/dataformats/yuv/. I am using the Shotwell photo viewer on Linux to view the .raw image; I couldn't get GIMP to open it. I am not that knowledgeable about image formats, but I am assuming there has to be some kind of header, yet the Shotwell photo viewer seemed to work. Could this possibly be the reason for the incorrect image?
I am not sure if V4L2 returns a signed or unsigned byte image in the buffer pointed to by p. But if this were the problem, wouldn't my image just be off-color? It seems the geometry is distorted too. I believe I took care of the casting to and from floating point properly.
Could someone help me understand:
how to find out the underlying type contained in the void *p variable
the proper formula for converting from YUYV to RGB24, including explanations of which types to use
could saving the image with no format (headers) and viewing it with Shotwell be the problem?
is there an easy way to save an RGB24 image properly?
general debugging tips
Thanks
static unsigned char *bgr_image;

static void process_image(void *p, int size)
{
    frame_number++;
    char filename[20];
    sprintf(filename, "frame-%d.raw", frame_number);
    FILE *fp = fopen(filename, "wb");

    int i;
    float y1, y2, u, v;
    unsigned char *bgr_p = bgr_image; /* match the buffer's type */
    unsigned char *p_tmp = (unsigned char *)p;
    for (i = 0; i < size; i += 4) {
        y1 = p_tmp[i];
        u = p_tmp[i + 1];
        y2 = p_tmp[i + 2];
        v = p_tmp[i + 3];
        bgr_p[0] = (y1 + 1.371 * (u - 128.0));
        bgr_p[1] = (y1 - 0.698 * (u - 128.0) - 0.336 * (v - 128.0));
        bgr_p[2] = (y1 + 1.732 * (v - 128.0));
        bgr_p[3] = (y2 + 1.371 * (v - 128.0));
        bgr_p[4] = (y2 - 0.698 * (v - 128.0) - 0.336 * (u - 128.0));
        bgr_p[5] = (y2 + 1.732 * (u - 128.0));
        bgr_p += 6;
    }
    fwrite(bgr_image, size * 3 / 2, 1, fp); /* 4 input bytes become 6 output bytes */
    fflush(fp);
    fclose(fp);
}
First, you must understand what type of YUV422 you are working with.
PIX_FMT_YUYV422, ///< packed YUV 4:2:2, 16bpp, Y0 Cb Y1 Cr
PIX_FMT_UYVY422, ///< packed YUV 4:2:2, 16bpp, Cb Y0 Cr Y1
Try replacing y1, u, y2, and v accordingly. But you may not be dealing with YUV422 at all; the picture could be in a planar format instead of the packed format you are expecting.
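For comparison, here is one common mapping for packed YUYV (BT.601-style full-range coefficients, with clamping). The exact coefficients vary between sources, so treat this as a sketch rather than the one true formula:
/* Sketch: convert one YUYV macropixel (Y0 U Y1 V) into two RGB pixels. */
static unsigned char clamp255(float v)
{
    return (unsigned char)(v < 0.0f ? 0.0f : (v > 255.0f ? 255.0f : v));
}

static void yuyv_to_rgb(const unsigned char *in, unsigned char *out)
{
    float y0 = in[0], u = in[1] - 128.0f, y1 = in[2], v = in[3] - 128.0f;
    out[0] = clamp255(y0 + 1.402f * v);              /* R0 */
    out[1] = clamp255(y0 - 0.344f * u - 0.714f * v); /* G0 */
    out[2] = clamp255(y0 + 1.772f * u);              /* B0 */
    out[3] = clamp255(y1 + 1.402f * v);              /* R1 */
    out[4] = clamp255(y1 - 0.344f * u - 0.714f * v); /* G1 */
    out[5] = clamp255(y1 + 1.772f * u);              /* B1 */
}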
I think it's better for you to download IrfanView, which can open raw YUV files, and try picking the correct parameters until you get a correctly decoded image, to find out what type of data you have.
Do not try to re-invent the wheel. Lots of people have written colorspace converters, and chances are high that your implementation (even if it works) is not the "optimal" one (e.g. it is slower than necessary).
The canonical way to deal with V4L2 devices of any colorspace is to use the libv4l library, which will transparently convert the camera's native colorspace to one of BGR24, RGB24 and YUV420 (if you desire that, which I think is true).
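A rough sketch of that route (v4l2_open, v4l2_ioctl and v4l2_read are the libv4l2 wrapper calls; the device path and dimensions are example values, and error handling is omitted):
#include <fcntl.h>
#include <string.h>
#include <libv4l2.h>
#include <linux/videodev2.h>

/* Sketch: ask libv4l2 for RGB24; the library transparently converts from
 * the camera's native format (e.g. YUYV) behind v4l2_read(). */
int main(void)
{
    int fd = v4l2_open("/dev/video0", O_RDWR);
    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 640;
    fmt.fmt.pix.height = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB24;
    v4l2_ioctl(fd, VIDIOC_S_FMT, &fmt);

    static unsigned char frame[640 * 480 * 3];
    v4l2_read(fd, frame, sizeof(frame)); /* one converted RGB24 frame */
    v4l2_close(fd);
    return 0;
}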
As for saving the image, again use what is already there. Personally, I would use ImageMagick to save a frame in a "proper" format that can be read by any image viewer (PNG or TIFF, if quality matters).

How do I convert a G.726 ADPCM signal into a PCM signal?

I usually look to SoX or Windows' built-in audio libraries for this stuff, but it appears that neither has a G.726 codec.
So I have a sequence of bytes that I know are encoded as G.726, although the bit rate and whether it is mu-law or A-law are not known at this time (experimentation will determine those parameters), and I need to decode them into a normal PCM signal.
So I downloaded the reference implementation from the ITU-T (ITU-T Recommendation G.191), but I'm kind of confused about how to use the G726_decode function. According to the documentation, inp_buf and out_buf need to have the same length smpno, and both buffers are 16-bit buffers. This seems to me like a step is missing; otherwise no compression is accomplished by using G.726. According to the Wikipedia page on G.726, sample size depends on bit rate (from 2 to 5 bits). Am I supposed to split the bytes into samples myself? If I assume maximum compression (2-bit samples), then each byte will produce 4 samples.
Example:
char b = /* read the code byte from input */
short inp[4], output[4];
inp[0] = b & 0x0003;
inp[1] = (b & 0x000C) >> 2; /* parentheses needed: >> binds tighter than & */
inp[2] = (b & 0x0030) >> 4;
inp[3] = (b & 0x00C0) >> 6;
G726_state state;
memset(&state, 0, sizeof(G726_state));
G726_decode(inp, output, 4, "u", 2, 1, &state);
/* output now contains 4 PCM samples */
Or am I missing something completely?
It looks like ffmpeg actually isn't able to do this, as I thought it surely would be... however, while I was googling I did find this post on the ffmpeg mailing list, which offers a solution.
Basically, there is a separate program called g72x++ which seems to be able to decode the audio to raw PCM for you.
