Getting a segmentation fault at a certain point - C

I tried to apply a box blur to an image (without a kernel matrix, just iterating over the 9 neighboring pixels), but I always get a segmentation fault after I reach the 408th pixel of the image (on the 1st row). I don't know what could cause it, because debugging with printf() didn't show any meaningful results.
void blur(int height, int width, RGBTRIPLE image[height][width])
{
    BYTE totalRed, totalGreen, totalBlue;
    totalRed = totalGreen = totalBlue = 0;
    for (int i = 1; i < height - 1; i++)
    {
        for (int j = 1; j < width - 1; j++)
        {
            for (int h = -1; h <= 1; h++)
            {
                for (int w = -1; w <= 1; w++)
                {
                    totalRed += image[i + h][j + w].rgbtRed;
                    totalGreen += image[i + h][j + w].rgbtGreen;
                    totalBlue += image[i + h][j + w].rgbtBlue;
                }
            }
            image[j][i].rgbtRed = round((totalRed / 9));
            image[j][i].rgbtGreen = round((totalGreen / 9));
            image[j][i].rgbtBlue = round((totalBlue / 9));
        }
    }
    return;
}
EDIT
I fixed the issue, thanks to everyone who answered me.

The problem is that you transposed the index values when storing the updated value: image[j][i].rgbtRed = round((totalRed / 9)) should be
image[i][j].rgbtRed = round((totalRed / 9));
image[i][j].rgbtGreen = round((totalGreen / 9));
image[i][j].rgbtBlue = round((totalBlue / 9));
Note, however, that you overwrite the pixels in row i that will be used for blurring the next row, which is incorrect. Also note that you should handle the boundary rows and columns as special cases. More work is needed on the algorithm.

I would suggest you post a minimal "working" example that we could compile and reproduce the results with, on something like Compiler Explorer.
As @Fe2O3 commented on the original post, you have i and j flipped in these assignments:
image[j][i].rgbtRed = round((totalRed / 9));
image[j][i].rgbtGreen = round((totalGreen / 9));
image[j][i].rgbtBlue = round((totalBlue / 9));
This can cause problems whenever the image is not square.
Additionally, you're using a byte-sized variable to store the sum of 9 byte values, where the maximum possible sum is 9 * 255 = 2295, so it overflows. I'd highly recommend upgrading the type of totalRed/Green/Blue to at least 16 bits.
Finally, as @Some Programmer Dude suggested, there's nothing to round: in C, dividing two integers does not produce a float/double. The value is truncated, so the result behaves like
if (x > 0) {
    floor(x)
} else if (x < 0) {
    ceil(x)
} else {
    crash_and_burn()
}
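Putting both answers together, here is a minimal sketch of a corrected version (assuming the CS50-style BYTE/RGBTRIPLE definitions from the question): the result is written back as image[i][j], the sums use int and are reset for every pixel, plain integer division replaces round(), and the averages are read from an untouched copy so already-blurred pixels don't feed into later averages:

void blur(int height, int width, RGBTRIPLE image[height][width])
{
    RGBTRIPLE copy[height][width];     // untouched source pixels
    memcpy(copy, image, sizeof(copy)); // needs <string.h>

    for (int i = 1; i < height - 1; i++)
    {
        for (int j = 1; j < width - 1; j++)
        {
            int totalRed = 0, totalGreen = 0, totalBlue = 0; // wide enough for 9 * 255
            for (int h = -1; h <= 1; h++)
            {
                for (int w = -1; w <= 1; w++)
                {
                    totalRed += copy[i + h][j + w].rgbtRed;  // read the copy, not image
                    totalGreen += copy[i + h][j + w].rgbtGreen;
                    totalBlue += copy[i + h][j + w].rgbtBlue;
                }
            }
            image[i][j].rgbtRed = totalRed / 9;   // [i][j], truncating integer division
            image[i][j].rgbtGreen = totalGreen / 9;
            image[i][j].rgbtBlue = totalBlue / 9;
        }
    }
}

Note that for large images the VLA copy can overflow the stack, and the boundary rows and columns are still left unblurred, as the first answer points out.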

Related

Blur filter in C results in only a slightly changed image

I am trying to make a blur filter in C that takes the neighboring pixels of the main pixel, takes the average of the RGB values, and stores it in a temp array, then changes the image using the temp array values. It seems correct, but it is not working as intended: the output is only a very slightly blurred image. I really don't see my mistake and would be very thankful if someone helped; sorry if I did something horrible, I started learning C last week.
I checked this post
Blurring an Image in c pixel by pixel - special cases
but I did not see where I went wrong.
I'm working with this data struct:
typedef struct
{
    BYTE rgbtBlue;
    BYTE rgbtGreen;
    BYTE rgbtRed;
} RGBTRIPLE;
void blur(int height, int width, RGBTRIPLE image[height][width])
{
    // ints to use later
    int j;
    int p;
    RGBTRIPLE temp[height][width];
    for (int n = 0; n < height; n++) // loop to check every pixel
    {
        for (int k = 0; k < width; k++)
        {
            int widx = 3;
            int hghtx = 3;
            // conditionals for border cases
            int y = 0;
            if (n == 0)
            {
                p = 0;
                hghtx = 2;
            }
            if (n == height - 1)
            {
                p = -1;
                hghtx = 2;
            }
            if (k == 0)
            {
                j = 0;
                widx = 2;
            }
            if (k == width - 1)
            {
                j = -1;
                widx = 2;
            }
            for (int u = 0; u < hghtx; u++) // matrix of pixels around the main pixel using the conditionals gathered before
                for (int i = 0; i < widx; i++)
                    if (y == 1) // takes the average of color and stores it in the RGB temp
                    {
                        temp[n][k].rgbtGreen = temp[n][k].rgbtGreen + image[n + p + u][k + j + i].rgbtGreen / (hghtx * widx);
                        temp[n][k].rgbtRed = temp[n][k].rgbtRed + image[n + p + u][k + j + i].rgbtRed / (hghtx * widx);
                        temp[n][k].rgbtBlue = temp[n][k].rgbtBlue + image[n + p + u][k + j + i].rgbtBlue / (hghtx * widx);
                    }
                    else // get first value of temp
                    {
                        temp[n][k].rgbtGreen = (image[n + p + u][k + j + i].rgbtGreen) / (hghtx * widx);
                        temp[n][k].rgbtRed = (image[n + p + u][k + j + i].rgbtRed) / (hghtx * widx);
                        temp[n][k].rgbtBlue = (image[n + p + u][k + j + i].rgbtBlue) / (hghtx * widx);
                        y++;
                    }
        }
    }
    // changes the original image to the blurred one
    for (int n = 0; n < height; n++)
        for (int k = 0; k < width; k++)
            image[n][k] = temp[n][k];
}
I think it's a combination of things.
If the code worked the way you expect, you would still be doing a blur over just 3x3 pixels, and that can be barely noticeable, especially on large images (I'm pretty sure it will be unnoticeable on a 4000x3000 pixel image).
There are some problems with the code.
As @Fe2O3 says, at the end of the first row, widx will change to 2 and stay 2 for the rest of the image.
You are reading from temp[][] without initializing it. If you compile in release mode (not debug), temp[][] will contain random data rather than the zeros you probably expect (as @WeatherWane pointed out).
The way you calculate the average of the pixels is weird. If you use a 3x3 pixel matrix, each pixel value should be divided by 9 in the final sum. But you divide the first pixel by 2 eight times (in effect doing /256), the second one seven times (so it's pixel/128), and so on, until the last one is divided by 2 only once. So the result is mostly the value of the bottom-right pixel.
Also, since your RGB values are just bytes, you may want to divide them first and only then add them, because otherwise you'll get overflows with wild results.
Try using a debugger to see the values you are actually calculating. It can be quite an eye-opener :)
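For contrast, here is a minimal sketch of the usual structure for this kind of average (my own illustration, reusing the height/width/image/temp names from your code): sum the neighbours that actually exist into ints, count them, and divide once at the end:

for (int n = 0; n < height; n++)
{
    for (int k = 0; k < width; k++)
    {
        int red = 0, green = 0, blue = 0, count = 0; // ints, so no byte overflow
        for (int dn = -1; dn <= 1; dn++)
        {
            for (int dk = -1; dk <= 1; dk++)
            {
                int r = n + dn, c = k + dk;
                if (r < 0 || r >= height || c < 0 || c >= width)
                    continue; // skip neighbours outside the image
                red += image[r][c].rgbtRed;
                green += image[r][c].rgbtGreen;
                blue += image[r][c].rgbtBlue;
                count++;
            }
        }
        temp[n][k].rgbtRed = red / count;   // divide once, at the end
        temp[n][k].rgbtGreen = green / count;
        temp[n][k].rgbtBlue = blue / count;
    }
}

This also writes every temp[n][k] explicitly, so nothing depends on the array's uninitialized contents.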

Nested Loops in OpenACC

I'm trying to parallelize a nested for loop with OpenACC. I don't understand why my code isn't working correctly. The following is the relevant part of my code:
int edgedetect_laplace(int height,                      // I: image height
                       int width,                       // I: image width
                       gray_t image[height][width],     // I: input image
                       gray_t new_image[height][width]) // O: output image
{
    // just for reproducible checksums...
    for (int i = 0; i < height; i++)
    {
        for (int j = 0; j < width; j++)
        {
            new_image[i][j] = 0;
        }
    }
    #pragma acc data copyin(image[:height][:width]) copyout(new_image[:height][:width])
    {
        #pragma acc parallel
        {
            #pragma acc loop
            for (int i = 1; i < height - 1; i++)
            {
                #pragma acc loop
                for (int j = 1; j < width - 1; ++j)
                {
                    // apply laplace operator
                    unsigned int val = 4 * image[i][j] - image[i - 1][j] - image[i + 1][j] - image[i][j - 1] - image[i][j + 1];
                    /* store calculated value (map to correct range) */
                    new_image[i][j] = min(val, GRAY_MAX);
                }
            }
        }
    }
    printf("time Laplace edge detection: %.6f s\n", t1 - t0); // t0/t1 set by timing code not shown
    unsigned long cs = checksum(height, width, new_image);
    if (cs != REFERENCE_CHECKSUM_LAPLACE)
        printf("\t error checksum Laplace: expected %lu, seen %lu\n", REFERENCE_CHECKSUM_LAPLACE, cs);
    else
        printf("checksum Laplace OK : %lu\n", cs);
    return 0;
}
I have run the program sequentially and calculated the checksum to test whether my parallelized version runs correctly. However, it does not (I get a different checksum), and I don't see why.
It might be because you're assigning values only to the interior of new_image but copying out the whole array. Since the array is not initialized on the device, the halo elements will contain garbage values when copied back. Try using copy instead of copyout so new_image starts initialized on the device, or copy out only the interior elements.
If that's not the issue, please provide a minimal reproducible example.
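A minimal sketch of the suggested change (same variables as the code above); with copy, the device buffer starts from the zero-initialized host contents of new_image, so the untouched halo elements come back as zeros:

#pragma acc data copyin(image[:height][:width]) copy(new_image[:height][:width])

The cost is one extra host-to-device transfer of new_image; copying out only the interior subarray would avoid that, at the price of a more complicated data clause.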

How can I properly implement zero cross triggering for digital oscilloscope in C?

So I'm building a simple oscilloscope in C. It reads audio data from the output buffer (and resets the buffer write counter when called, so the buffer is refreshed). I tried implementing simple zero-cross triggering, since most of the time users will see simple (sine, pulse, saw, triangle) waves, but the best result I got with the code below is a wave that jumps back and forth for half of its cycle. What is wrong?
The signal that is fed in goes from -32768 to 32767, so zero is where it should be.
If you didn't understand what I mean, you can see the video: click
Update: removed the code unrelated to triggering so the whole function is easier to understand.
extern Mused mused;

void update_oscillscope_view(GfxDomain *dest, const SDL_Rect* area)
{
    if (mused.output_buffer_counter >= OSC_SIZE * 12) {
        mused.output_buffer_counter = 0;
    }
    for (int x = 0; x < area->h * 0.5; x++) {
        // drawing a black rect so bevel is hidden when it is under oscilloscope
        gfx_line(domain,
            area->x, area->y + 2 * x,
            area->x + area->w - 1, area->y + 2 * x,
            colors[COLOR_WAVETABLE_BACKGROUND]);
    }
    Sint32 sample, last_sample, scaled_sample;
    for (int i = 0; i < 2048; i++) {
        if (mused.output_buffer[i] < 0 && mused.output_buffer[i - 1] > 0) {
            // here comes the part with triggering
            if (i < OSC_SIZE * 2) {
                for (int x = i; x < area->w + i; ++x) {
                    last_sample = scaled_sample;
                    sample = (mused.output_buffer[2 * x] + mused.output_buffer[2 * x + 1]) / 2;
                    if (sample > OSC_MAX_CLAMP) { sample = OSC_MAX_CLAMP; }
                    if (sample < -OSC_MAX_CLAMP) { sample = -OSC_MAX_CLAMP; }
                    if (last_sample > OSC_MAX_CLAMP) { last_sample = OSC_MAX_CLAMP; }
                    if (last_sample < -OSC_MAX_CLAMP) { last_sample = -OSC_MAX_CLAMP; }
                    scaled_sample = (sample * OSC_SIZE) / 32768;
                    if (x != i) {
                        gfx_line(domain,
                            area->x + x - i - 1, area->h / 2 + area->y + last_sample,
                            area->x + x - i, area->h / 2 + area->y + scaled_sample,
                            colors[COLOR_WAVETABLE_SAMPLE]);
                    }
                }
            }
            return;
        }
    }
}
During debugging, I simplified the code until it started working. Thanks, Clifford.
I find a trigger index i (let's say it is array index 300). I had modified the drawing so that the oscilloscope drew lines from [(2 * i) + offset] to [(2 * i + 1) + offset], and thus an incorrect picture was formed.
I used (2 * i) because I wanted long waves to fit into the oscilloscope. I replaced it with drawing from [i + offset] to [i + 1 + offset], and that solved the problem.
Afterwards, I implemented "horizontal scale 0.5x" properly.
The output waveform still jumps a little, but overall it stays in place.
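For reference, here is a minimal sketch of the trigger search on its own (my own illustration with hypothetical buf/len names, not the project's API): find the first falling zero crossing and start drawing from that index:

/* Return the index of the first falling zero crossing in a signed
   16-bit sample buffer, or 0 if none is found (hypothetical helper). */
int find_trigger(const Sint16 *buf, int len)
{
    for (int i = 1; i < len; i++) {
        if (buf[i - 1] > 0 && buf[i] <= 0) { /* previous above zero, current at or below */
            return i;
        }
    }
    return 0; /* no crossing: fall back to the start of the buffer */
}

Starting the search at i = 1 also avoids the out-of-bounds read of output_buffer[i - 1] that the loop in the question performs on its first iteration.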

Wrong result for non square image

I am trying to implement a dark (not exactly) emboss filter. My problem is that when I use it on the SQUARE 512x512 Lena image, the result is good.
But when I use it on an image with a rectangular shape, e.g. 1280x720, the result is all messed up. Why is that? The format of the images is RGB.
GOOD result with Lena 512x512 (original):
WRONG result with 1280x720 image (original not same size just for comparison):
For a 24-bit image, if the width of the image is 682, then its rows need padding, because 682 * 3 is not a multiple of 4. Try changing the image width to 680 and try again.
To compute the row padding, use the following formula:
int pad = (4 - (WIDTH * 3) % 4) % 4; // padding bytes at the end of each row
int row_size = WIDTH * 3 + pad;      // actual bytes per scanline in the file
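A minimal sketch of how that padding is typically consumed when loading the pixel data (assuming an open FILE *fp positioned at the start of the pixel array, and that Pixel is three packed bytes as in the code below):

// Read HEIGHT scanlines of WIDTH 3-byte pixels, skipping the padding
// bytes that follow each row in the file.
for (int row = 0; row < HEIGHT; row++) {
    fread(frame_buffer[row], 3, WIDTH, fp); // WIDTH pixels, 3 bytes each
    fseek(fp, pad, SEEK_CUR);               // skip the row padding
}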
Also change the loop conditions to fb_j < HEIGHT - 1 - FILTER_HEIGHT and fb_i < WIDTH - 1 - FILTER_WIDTH to avoid a buffer overflow.
The bitmap is scanned from top to bottom. It worked fine when I switched the dimensions as follows (but I loaded the bitmap differently):
// Pixel frame_buffer[WIDTH][HEIGHT];
// Pixel temp_buffer[WIDTH][HEIGHT];
Pixel frame_buffer[HEIGHT][WIDTH];
Pixel temp_buffer[HEIGHT][WIDTH];
...
for (int fb_j = 1; fb_j < HEIGHT - 1 - FILTER_HEIGHT; fb_j++) {
    for (int fb_i = 1; fb_i < WIDTH - 1 - FILTER_WIDTH; fb_i++) {
        float r = 0, g = 0, b = 0;
        for (int ker_i = 0; ker_i < FILTER_WIDTH; ker_i++) {
            for (int ker_j = 0; ker_j < FILTER_HEIGHT; ker_j++) {
                r += ((float)(frame_buffer[fb_j + ker_j][fb_i + ker_i].r / 255.0) * emboss_kernel[ker_j][ker_i]);
                g += ((float)(frame_buffer[fb_j + ker_j][fb_i + ker_i].g / 255.0) * emboss_kernel[ker_j][ker_i]);
                b += ((float)(frame_buffer[fb_j + ker_j][fb_i + ker_i].b / 255.0) * emboss_kernel[ker_j][ker_i]);
            }
        }
        if (r > 1.0) r = 1.0;
        else if (r < 0) r = 0;
        if (g > 1.0) g = 1.0;
        else if (g < 0) g = 0;
        if (b > 1.0) b = 1.0;
        else if (b < 0) b = 0;
        // Output buffer which will be rendered after convolution
        temp_buffer[fb_j][fb_i].r = (GLubyte)(r * 255.0);
        temp_buffer[fb_j][fb_i].g = (GLubyte)(g * 255.0);
        temp_buffer[fb_j][fb_i].b = (GLubyte)(b * 255.0);
    }
}
Also try running a direct copy for testing. Example:
temp_buffer[fb_j][fb_i].r = frame_buffer[fb_j][fb_i].r;
temp_buffer[fb_j][fb_i].g = frame_buffer[fb_j][fb_i].g;
temp_buffer[fb_j][fb_i].b = frame_buffer[fb_j][fb_i].b;

Applying blurring algorithm multiple times

I implemented a blurring algorithm and it works: the result is a blurred image. But if I apply the algorithm to my image multiple times, the image remains unchanged; the extra passes (beyond the first) have no effect.
for (f = 0; f < 100; f++) {
    for (y = 0; y < image->h; y++) {
        for (x = 0; x < image->w; x++) {
            int SUM = 0;
            /* ... neighbourhood summation elided ... */
            imageBlur->pixels[y * imageBlur->w + x] = SUM / 9;
        }
    }
}
It doesn't matter whether f is 1 or 500; the result is the same as a one-pass blur.
Each pass you are reading the same source image again, without having replaced it with the blurred one (imageBlur).
You need to copy the result back somehow, something like
image->pixels[y * imageBlur->w + x] = SUM / 9;
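Alternatively, a minimal sketch of the per-pass structure (assuming SDL-style surfaces and a hypothetical blur_once() that performs a single pass from src to dst): swap the two buffers after each pass so the next pass reads the previous pass's output.

SDL_Surface *src = image, *dst = imageBlur;
for (int f = 0; f < 100; f++) {
    blur_once(src, dst);    // hypothetical single-pass blur: reads src, writes dst
    SDL_Surface *tmp = src; // swap so the next pass reads this pass's output
    src = dst;
    dst = tmp;
}
// after the loop, src points at the final blurred image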
