Super-sampling anti-aliasing for a curve via libpng - c

I tried to smooth a curve using a super-sampling anti-aliasing technique, adding transparency to neighboring pixels. Here is the code in C:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <png.h>

#define WIDTH 512
#define HEIGHT 512

int main()
{
    // Image buffer
    unsigned char image[HEIGHT][WIDTH][4];
    for (int i = 0; i < HEIGHT; i++)
    {
        for (int j = 0; j < WIDTH; j++)
        {
            image[i][j][0] = 255;
            image[i][j][1] = 255;
            image[i][j][2] = 255;
            image[i][j][3] = 0;
        }
    }
    // A sample curve
    for (double x = -M_PI; x <= M_PI; x += M_PI / WIDTH)
    {
        int y = (int)(HEIGHT / 2 - sin(x) * cos(x) * HEIGHT / 2);
        int i = (int)(x * WIDTH / (2 * M_PI) + WIDTH / 2);
        // The anti-aliasing part
        int sample = 2; // how far we are sampling
        double max_distance = sqrt(sample * sample + sample * sample);
        for (int ii = -sample; ii <= sample; ii++)
        {
            for (int jj = -sample; jj <= sample; jj++)
            {
                int iii = i + ii;
                int jjj = y + jj;
                if (iii >= 0 && iii < WIDTH && jjj >= 0 && jjj < HEIGHT)
                {
                    // Here is my question
                    int alpha = 255 - (int)(100.0 * sqrt(ii * ii + jj * jj) / max_distance);
                    image[jjj][iii][0] = 0;
                    image[jjj][iii][1] = 0;
                    image[jjj][iii][2] = 0;
                    image[jjj][iii][3] = alpha > image[jjj][iii][3] ? alpha : image[jjj][iii][3];
                }
            }
        }
    }
    FILE *fp = fopen("curve.png", "wb");
    png_structp png = png_create_write_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
    png_infop info = png_create_info_struct(png);
    png_init_io(png, fp);
    png_set_IHDR(png, info, WIDTH, HEIGHT, 8, PNG_COLOR_TYPE_RGBA, PNG_INTERLACE_NONE,
                 PNG_COMPRESSION_TYPE_BASE, PNG_FILTER_TYPE_BASE);
    png_write_info(png, info);
    for (int i = 0; i < HEIGHT; i++)
    {
        png_write_row(png, (png_bytep)image[i]);
    }
    png_write_end(png, NULL);
    fclose(fp);
    return 0;
}
Although it does some smoothing, the result is far from what established programs produce. Where did I go wrong?
I tried to calculate the transparency based on the distance of each neighboring pixel from the center of the line:
int alpha = 255 - (int)(100.0 * sqrt(ii * ii + jj * jj) / max_distance);
I used the factor of 100 instead of 255 since we do not need to go deep into full transparency.
The problem is how to set the alpha value for each pixel based on its neighbors, when the transparency of each neighbor is itself subject to change according to its own neighbors.

Related

smoothening image boundaries for NxN kernel size

In my open source project ( https://github.com/mmj-the-fighter/GraphicsLabFramework ) I am trying to add an image-smoothing box filter for an NxN kernel size. I have already implemented the algorithm for a 3x3 kernel. As you can see from the source code below, I am not processing the edges of the image. Following this logic, for a 5x5 kernel I would have to skip two rows or columns at the top, right, bottom, and left of the image, so the edges would not be blurred. Is there another solution?
Here is the code:
/* applying a box filter of size 3x3 */
void blur_image(unsigned char *img, int width, int height)
{
    int n = width * height;
    int i, j;
    int r, g, b;
    int x, y;
    float v = 1.0 / 9.0;
    float kernel[3][3] =
    {
        { v, v, v },
        { v, v, v },
        { v, v, v }
    };
    unsigned char* resimage = (unsigned char *)malloc(width * height * 4 * sizeof(unsigned char));
    memcpy(resimage, img, width * height * 4);
    for (x = 1; x < width - 1; ++x) {
        for (y = 1; y < height - 1; ++y) {
            float bs = 0.0;
            float gs = 0.0;
            float rs = 0.0;
            for (i = -1; i <= 1; ++i) {
                for (j = -1; j <= 1; ++j) {
                    float weight = (float)kernel[i + 1][j + 1];
                    unsigned char* buffer = img + width * 4 * (y + j) + (x + i) * 4;
                    bs += weight * *buffer;
                    gs += weight * *(buffer + 1);
                    rs += weight * *(buffer + 2);
                }
            }
            unsigned char* outbuffer = resimage + width * 4 * y + x * 4;
            *outbuffer = bs;
            *(outbuffer + 1) = gs;
            *(outbuffer + 2) = rs;
            *(outbuffer + 3) = 255;
        }
    }
    memcpy(img, resimage, width * height * 4);
    free(resimage);
}

C issue, blur filter not working properly

I have a problem with code in C whose purpose is to blur a given image, working as a filter. The code reads the height and width of an image of RGBTRIPLEs (defined in bmp.h), makes a copy of each pixel in advance, and computes the average over each pixel's 3x3 neighborhood (or a smaller chunk, e.g. 2x3, at the boundary). I nested for loops: two outer loops walk over each pixel of image, and inside them I defined four variables (three doubles and one int) to sum each pixel's red, green, and blue; the int, named count, is my denominator in the division.
The problem shows up not in the syntax but in the image: the bottom four rows of pixels come out like a rainbow, each one different, not blurred, and the image is darkened.
When I don't use the pixel copy it seems to work fine.
// Blur image
void blur(int height, int width, RGBTRIPLE image[height][width])
{
    RGBTRIPLE copy[height][width];
    for (int i = 0; i < height; i++)
    {
        for (int j = 0; j < width; j++)
        {
            // make a copy of rgbtriple image
            copy[i][j].rgbtRed = image[i][j].rgbtRed;
            copy[i][j].rgbtGreen = image[i][j].rgbtGreen;
            copy[i][j].rgbtBlue = image[i][j].rgbtBlue;
            // i need to ensure that image's pixels wont be out of bounds of rows/columns
            // it's inappropriate to have static division by 9 because sometimes there will be less pixels to divide by
            double sumRed = 0;
            double sumGreen = 0;
            double sumBlue = 0;
            int count = 0;
            for (int ii = i - 1; ii <= i + 1; ii++)
            {
                for (int jj = j - 1; jj <= j + 1; jj++)
                {
                    if (ii >= 0 && ii < height && jj >= 0 && jj < width)
                    {
                        sumRed += copy[ii][jj].rgbtRed;
                        sumGreen += copy[ii][jj].rgbtGreen;
                        sumBlue += copy[ii][jj].rgbtBlue;
                        count++;
                    }
                }
            }
            if (count != 0 && count <= 9)
            {
                image[i][j].rgbtRed = round(sumRed / count);
                image[i][j].rgbtGreen = round(sumGreen / count);
                image[i][j].rgbtBlue = round(sumBlue / count);
            }
        }
    }
    return;
}
Thanks in advance!
You compute the new value of the image pixels from the data in the copy matrix, but you did not copy the whole image beforehand, only the pixel values up to the current pixel. Hence the results are incorrect.
You should copy the whole image in a separate loop, or use memcpy.
Here is a modified version:
#include <string.h>

// Blur image
void blur(int height, int width, RGBTRIPLE image[height][width]) {
    RGBTRIPLE copy[height][width];
    // make a copy of rgbtriple image
#if 1 // using memcpy
    memcpy(copy, image, sizeof(copy));
#else
    // if you cannot use memcpy
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            copy[i][j] = image[i][j];
        }
    }
#endif
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            // Mix the color values with the adjacent pixels
            // making sure the pixels are inside the image.
            // It is inappropriate to always divide by 9
            // because depending on the pixel position and image size
            // count can be 1, 2, 3, 4, 6 or 9
            double sumRed = 0;
            double sumGreen = 0;
            double sumBlue = 0;
            int count = 0;
            for (int ii = i - 1; ii <= i + 1; ii++) {
                for (int jj = j - 1; jj <= j + 1; jj++) {
                    if (ii >= 0 && ii < height && jj >= 0 && jj < width) {
                        sumRed += copy[ii][jj].rgbtRed;
                        sumGreen += copy[ii][jj].rgbtGreen;
                        sumBlue += copy[ii][jj].rgbtBlue;
                        count++;
                    }
                }
            }
            // no need to test count: there is at least one pixel
            image[i][j].rgbtRed = round(sumRed / count);
            image[i][j].rgbtGreen = round(sumGreen / count);
            image[i][j].rgbtBlue = round(sumBlue / count);
        }
    }
}

Scaling up an image using nearest-neighbor

I have been trying to make my program scale up an image. I had some trouble allocating space for the scaled image, but I think that is fixed. The problem I am having now is that the program crashes when I try to copy the image back out of my temporary holder.
The loaded image is placed in my struct Image. The pixels are placed in
img->pixels, the height in img->height, and the width in img->width. I have no idea why the program crashes when I transfer the pixels from my tmp2 struct to my img struct, while it does not crash when I do the opposite. Here is the code:
void makeBigger(Image *img, int scale) {
    Image *tmp2;
    tmp2 = (Image*)malloc(sizeof(Image));
    tmp2->height = img->height * scale;
    tmp2->width = img->width * scale;
    tmp2->pixels = (Pixel**)malloc(sizeof(Pixel*) * tmp2->height);
    for (unsigned int i = 0; i < img->height; i++)
    {
        tmp2->pixels[i] = (Pixel*)malloc(sizeof(Pixel) * tmp2->width);
        for (unsigned int j = 0; j < img->width; j++)
        {
            tmp2->pixels[i][j] = img->pixels[i][j];
        }
    }
    free(img->pixels);
    // scaling up the struct's height and width
    img->height *= scale;
    img->width *= scale;
    img->pixels = (Pixel**)malloc(sizeof(Pixel*) * img->height);
    for (unsigned int i = 0; i < tmp2->height; i++)
    {
        img->pixels[i] = (Pixel*)malloc(sizeof(Pixel) * img->width);
        for (unsigned int j = 0; j < tmp2->width; j++)
        {
            img->pixels[i][j] = tmp2->pixels[i + i / 2][j + j / 2];
        }
    }
}
I would be glad if you have any idea of how to make the nearest-neighbor method work.
EDIT: I am trying to crop the inner rectangle so I can scale it up (zoom).
Image *tmp = (Image*)malloc(sizeof(Image));
tmp->height = img->height / 2;
tmp->width = img->width / 2;
tmp->pixels = (Pixel**)malloc(sizeof(Pixel*) * tmp->height);
for (unsigned i = img->height / 4 - 1; i < img->height - img->height / 4; i++) {
    tmp->pixels[i] = (Pixel*)malloc(sizeof(Pixel) * tmp->width);
    for (unsigned j = img->width / 4; j < img->width - img->width / 4; j++) {
        tmp->pixels[i][j] = img->pixels[i][j];
    }
}
for (unsigned i = 0; i < img->height; i++) {
    free(img->pixels[i]);
}
free(img->pixels);
img->height = tmp->height;
img->width = tmp->width;
img->pixels = tmp->pixels;
free(tmp);
I see that you're overcomplicating things (walking over the image twice, for example). Here's the code (I am posting the whole program; I made assumptions about Pixel and Image that might not match what you have), but if you copy/paste makeBigger it should work in your code out of the box:
code00.c:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef uint32_t Pixel;

typedef struct {
    uint32_t width, height;
    Pixel **pixels;
} Image;

void makeBigger(Image *img, int scale)
{
    uint32_t i = 0, j = 0;
    Image *tmp = (Image*)malloc(sizeof(Image));
    tmp->height = img->height * scale;
    tmp->width = img->width * scale;
    tmp->pixels = (Pixel**)malloc(sizeof(Pixel*) * tmp->height);
    for (i = 0; i < tmp->height; i++) {
        tmp->pixels[i] = (Pixel*)malloc(sizeof(Pixel) * tmp->width);
        for (j = 0; j < tmp->width; j++) {
            tmp->pixels[i][j] = img->pixels[i / scale][j / scale];
        }
    }
    for (i = 0; i < img->height; i++)
        free(img->pixels[i]);
    free(img->pixels);
    img->width = tmp->width;
    img->height = tmp->height;
    img->pixels = tmp->pixels;
    free(tmp);
}

void printImage(Image *img)
{
    printf("Width: %d, Height: %d\n", img->width, img->height);
    for (uint32_t i = 0; i < img->height; i++) {
        for (uint32_t j = 0; j < img->width; j++)
            printf("%3d", img->pixels[i][j]);
        printf("\n");
    }
    printf("\n");
}

int main()
{
    uint32_t i = 0, j = 0, k = 1;
    Image img;
    // Initialize the image
    img.height = 2;
    img.width = 3;
    img.pixels = (Pixel**)malloc(sizeof(Pixel*) * img.height);
    for (i = 0; i < img.height; i++) {
        img.pixels[i] = (Pixel*)malloc(sizeof(Pixel) * img.width);
        for (j = 0; j < img.width; j++)
            img.pixels[i][j] = k++;
    }
    printImage(&img);
    makeBigger(&img, 2);
    printImage(&img);
    // Destroy the image
    for (i = 0; i < img.height; i++)
        free(img.pixels[i]);
    free(img.pixels);
    printf("\nDone.\n");
    return 0;
}
Notes (makeBigger related - designed to replace the content of the image given as argument):
Construct a temporary image that will be the enlarged one
Only traverse the temporary image once (populate its pixels as we allocate them); to maintain scaling to the original image and make sure that the appropriate pixel is "copied" into the new one, simply divide the indexes by the scaling factor: tmp->pixels[i][j] = img->pixels[i / scale][j / scale]
Deallocate the original image content: since each pixel row is malloced, it should also be freed (free(img->pixels); alone will yield memory leaks)
Store the temporary image content (into the original one) and then deallocate it
Output:
[cfati#cfati-5510-0:/cygdrive/e/Work/Dev/StackOverflow/q041861274]> ~/sopr.sh
### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ###
[064bit prompt]> ls
code00.c
[064bit prompt]> gcc -o code00.exe code00.c
[064bit prompt]> ./code00.exe
Width: 3, Height: 2
1 2 3
4 5 6
Width: 6, Height: 4
1 1 2 2 3 3
1 1 2 2 3 3
4 4 5 5 6 6
4 4 5 5 6 6
Done.

How can I improve locality of reads and writes in the following code?

I'm working on the following image convolution code:
typedef struct fmatrix {
    int rows;
    int cols;
    float** array;
} fmatrix;

typedef struct image {
    unsigned char* data;
    int w;
    int h;
    int c;
} image;

typedef struct kernel {
    fmatrix* psf;
    int divisor;
} kernel;

void convolve_sq(image* src, image* dst, kernel* psf, int pixel) {
    int size = psf->psf->rows * psf->psf->cols;
    float tmp[size];
    int n, m;              // for psf
    int x, y, x0, y0, cur; // for image
    y0 = pixel / (src->w * src->c);
    x0 = (pixel / src->c) % src->w;
    for (n = 0; n < psf->psf->rows; ++n) {
        for (m = 0; m < psf->psf->cols; ++m) {
            y = n - (psf->psf->rows / 2);
            x = m - (psf->psf->cols / 2);
            if ((y + y0) < 0 || (y + y0) >= src->h || (x + x0) < 0 || (x + x0) >= src->w) {
                tmp[n * psf->psf->rows + m] = 255 * psf->psf->array[n][m];
            }
            else {
                cur = (pixel + y * src->w * src->c + x * src->c);
                tmp[n * psf->psf->rows + m] = src->data[cur] * psf->psf->array[n][m]; // misses on read
            }
        }
    }
    m = 0;
    for (n = 0; n < size; ++n) {
        m += (int)tmp[n];
    }
    m /= psf->divisor;
    if (m < 0) m = 0;
    if (m > 255) m = 255;
    dst->data[pixel] = m; // misses on write
}

void convolve_image(image* src, image* dst, kernel* psf) {
    int i, j, k;
    for (i = 0; i < src->h; ++i) {
        for (j = 0; j < src->w; ++j) {
            for (k = 0; k < src->c; ++k) {
                convolve_sq(src, dst, psf, (i * src->w * src->c + j * src->c + k));
            }
        }
    }
}
Running cachegrind, I've determined two places where there are a substantial number of cache misses, which I've annotated in the code above. For the line marked "misses on read", there were 97,205 D1mr and 97,201 DLmr. For the line marked "misses on write", there were 97,201 D1mw and DLmw. These lines read and write directly to/from the image respectively.
How can I make this code more efficient, in terms of avoiding cache misses?

Otsu thresholding

I'm trying to calculate a threshold value using Otsu's method. The sample image is an 8-bit grayscale BMP file.
The histogram for that image is generated by the following:
/* INITIALIZE ARRAYS */
for (int i = 0; i < 255; i++) occurrence[i] = 0;
for (int i = 0; i < 255; i++) histogram[i] = 0;

/* START AT BEGINNING OF RASTER DATA */
fseek(input_img, (54 + 4 * color_number), SEEK_SET);

/* READ RASTER DATA */
for (int r = 0; r <= original_img.rows - 1; r++) {
    for (int c = 0; c <= original_img.cols - 1; c++) {
        fread(p_char, sizeof(char), 1, input_img);
        pixel_value = *p_char;
        /* COUNT OCCURRENCES OF PIXEL VALUE */
        occurrence[pixel_value] = occurrence[pixel_value] + 1;
        total_pixels++;
    }
}
for (int i = 0; i <= 255; i++) {
    /* TAKES NUMBER OF OCCURRENCES OF A PARTICULAR PIXEL
     * AND DIVIDES BY THE TOTAL NUMBER OF PIXELS YIELDING
     * A RATIO */
    histogram[i] = (float) occurrence[i] / (float) total_pixels;
}
The histogram is then passed to the function otsu_method in main:
threshold_value = otsu_method(histogram, total_pixels);
Function otsu_method:
int otsu_method(float *histogram, long int total_pixels) {
    double omega[256], myu[256];
    double max_sigma, sigma[256];
    int threshold;
    omega[0] = histogram[0];
    myu[0] = 0.0;
    for (int i = 1; i < 256; i++) {
        omega[i] = omega[i - 1] + histogram[i];
        myu[i] = myu[i - 1] + i * histogram[i];
    }
    threshold = 0;
    max_sigma = 0.0;
    for (int i = 0; i < 255; i++) {
        if (omega[i] != 0.0 && omega[i] != 1.0)
            sigma[i] = pow(myu[255] * omega[i], 2) / (omega[i] * (1.0 - omega[i]));
        else
            sigma[i] = 0.0;
        if (sigma[i] > max_sigma) {
            max_sigma = sigma[i];
            threshold = i;
        }
    }
    printf("Threshold value: %d\n", threshold);
    return threshold;
}
And binarize image based on threshold value:
void threshold_image(FILE* input_file, FILE* output_file, unsigned long vector_size, int threshold_value) {
    unsigned char* p_char;
    unsigned char dummy;
    struct_img binary_img;
    unsigned char* binary_data;
    dummy = '0';
    p_char = &dummy;
    binary_img.data = malloc(vector_size * sizeof(char));
    if (binary_img.data == NULL) printf("Failed to malloc binary_img.data\n");
    binary_data = binary_img.data;
    /* CONVERT PIXEL TO BLACK AND WHITE BASED ON THRESHOLD VALUE */
    for (int i = 0; i < vector_size - 1; i++) {
        fread(p_char, sizeof(char), 1, input_file);
        if (*p_char < threshold_value) *(binary_data + i) = 0;
        else *(binary_data + i) = 255;
        fwrite((binary_data + i), sizeof(char), 1, output_file);
    }
    /* FREE ALLOCATED MEMORY */
    free(binary_data);
}
Program output:
Reading file 512gr.bmp
Width: 512
Height: 512
File size: 263222
# Colors: 256
Vector size: 262144
Total number of pixels: 262144
Threshold value: 244
I think 244 is not a properly computed threshold value, because when threshold_image binarizes the image with it, all pixels are converted to black.
If I skip otsu_method and take the threshold value from user input, threshold_image works properly.
The otsu_method function is copy-pasted code, so I'm not crystal clear about its variables or conditions.
I'm learning about image processing and trying to figure out basics. Any information about Otsu's algorithm and any feedback about my code helps.
I found what caused the problem and changed function otsu_method:
int otsu_method(float *histogram, long int total_pixels) {
    double probability[256], mean[256];
    double max_between, between[256];
    int threshold;
    /*
    probability = class probability
    mean        = class mean
    between     = between-class variance
    */
    for (int i = 0; i < 256; i++) {
        probability[i] = 0.0;
        mean[i] = 0.0;
        between[i] = 0.0;
    }
    probability[0] = histogram[0];
    for (int i = 1; i < 256; i++) {
        probability[i] = probability[i - 1] + histogram[i];
        mean[i] = mean[i - 1] + i * histogram[i];
    }
    threshold = 0;
    max_between = 0.0;
    for (int i = 0; i < 255; i++) {
        if (probability[i] != 0.0 && probability[i] != 1.0)
            between[i] = pow(mean[255] * probability[i] - mean[i], 2) / (probability[i] * (1.0 - probability[i]));
        else
            between[i] = 0.0;
        if (between[i] > max_between) {
            max_between = between[i];
            threshold = i;
        }
    }
    return threshold;
}
What really differs:
between[i] = pow(mean[255] * probability[i] - mean[i], 2) / (probability[i] * (1.0 - probability[i]));
Program output:
Reading file 512gr.bmp
Width: 512
Height: 512
File size: 263222
# Colors: 256
Vector size: 262144
Total number of pixels: 262144
Threshold value: 117
Probability: 0.416683
Mean: 31.9631
Between variance: 1601.01