Having trouble with Box Blur Edges - c

I am a newbie hobbyist trying to program a box blur, and I am having trouble with the edges. I am hoping that someone can spot the error.
The edges come out black, and I am assuming it's because the borders are not being reflected properly. I am sure this has been discussed for a fixed-size kernel, but I am using a variable-sized kernel.
I am using the code found in another post:
Optimized float Blur variations
However, I just do not understand the reflected-borders portion.
I do not care whether it is optimized, nor do I care about other kernel shapes; a box shape will be just fine.
The code is:
{   // code from https://stackoverflow.com/questions/7860575/optimized-float-blur-variations
    //--------------------------------------
    int image_width;
    int image_height;
    int scale = 0;
    int weight = (radius * 2) + 1;
    int kernel_X = 0;
    int kernel_Y = 0;
    //--------------------------------------
    float sum = 0.0;
    int kernel_width = radius;  // set both to the same to make the kernel square
    int kernel_height = radius; // set both to the same to make the kernel square

    // HORIZONTAL
    for (iy = 0; iy < image_height; iy++)
    {
        sum = 0.0;
        // Process entire window for first pixel (including wrap-around edge)
        for (kernel_X = 0; kernel_X <= kernel_width; kernel_X++)
        {
            if (kernel_X >= 0 && kernel_X < image_width)
                //sum += src[iy * image_width];
                sum += src[iy * image_width + kernel_X];
        }
        //>-------------- border code does not reflect edges HELP!!
        // Wrap watch for left side of image & resulting black bar
        for (kernel_X = (image_width - kernel_width); kernel_X < image_width; kernel_X++)
        {
            // if (kernel_X >= 0 && kernel_X < image_width) // HORIZONTAL width = horizontal = X
            //     sum += src[iy * kernel_width + image_width];  //<------------------- enter tester formula here
            //     sum += src[iy + ix * image_width + kernel_X]; //<------------------- FAIL
            //     sum += src[iy * kernel_width + image_width];  //<------------------- streaky
        }
        // Store first window
        tmp[iy * image_width] = (sum / weight);
        for (ix = 1; ix < image_width; ix++)
        {
            // Subtract pixel leaving window
            if (ix - kernel_width - 1 >= 0)
                sum -= src[iy * image_width + ix - kernel_width - 1];
            // Add pixel entering window
            if (ix + kernel_width < image_width)
                sum += src[iy * image_width + ix + kernel_width];
            else
                sum += src[iy * image_width + ix + kernel_width - image_width];
            tmp[iy * image_width + ix] = (sum / weight); // just for testing
        }
    }
    // VERTICAL
    for (ix = 0; ix < image_width; ix++)
    {
        sum = 0.0;
        // Process entire window for first pixel
        for (kernel_Y = 0; kernel_Y <= kernel_height; kernel_Y++)
        {
            if (kernel_Y >= 0 && kernel_Y < image_height)
                sum += tmp[kernel_Y * image_width + ix];
        }
        //>-------------- border code does not reflect edges HELP!!
        // Wrap watch for top side of image & resulting black bar
        for (kernel_Y = image_height - kernel_height; kernel_Y < kernel_height; kernel_Y++)
        {
            // if (kernel_Y >= 0 && kernel_Y < image_height)
            //     sum += tmp[(iy + kernel_height - image_height) * image_width + ix];
        }
        for (iy = 1; iy < image_height; iy++)
        {
            // Subtract pixel leaving window
            if (iy - kernel_height - 1 >= 0)
                sum -= tmp[(iy - kernel_height - 1) * image_width + ix];
            // Add pixel entering window
            if (iy + kernel_height < image_height)
                sum += tmp[(iy + kernel_height) * image_width + ix];
            else
                sum += tmp[(iy + kernel_height - image_height) * image_width + ix];
            dst[(scale * image_width * image_height) + (iy * image_width + ix)] = (sum / weight);
        }
    }
}
I appreciate any help on this.
Thanks
John
Edit: here are some links to example images of the edges.
image with proper box blur
http://img687.imageshack.us/img687/931/standardboxblur.jpg
Image with improper edges using the above code (notice the dark bar on the top and left edges; the bottom and right are not quite right either)
http://img202.imageshack.us/img202/5137/boxblurbadedges.jpg

It might be easiest to put your sampling into a separate routine which, given x and y coordinates, returns the pixel value. You can then do some checks and clamp the x and y values to the ranges 0 to width-1 and 0 to height-1, respectively. Then you can safely pass in negative values or values greater than the width or height. It also allows you to more easily try other schemes like reflection, clamping to a color, extrapolation, etc. Simply swap out the sampling function that clamps for one that implements some other behavior.
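As an illustration of that idea, here is a minimal sketch, assuming hypothetical names (src as a buffer of floats, plus the same image_width/image_height as in the question); the clamping version can be swapped for the reflecting one without touching the blur loops:

float sample_clamp(const float* src, int image_width, int image_height, int x, int y)
{
    /* Out-of-range coordinates are clamped, so reads beyond the border repeat the edge pixel. */
    if (x < 0) x = 0;
    if (x > image_width - 1) x = image_width - 1;
    if (y < 0) y = 0;
    if (y > image_height - 1) y = image_height - 1;
    return src[y * image_width + x];
}

float sample_reflect(const float* src, int image_width, int image_height, int x, int y)
{
    /* Out-of-range coordinates are mirrored at the borders (valid while the kernel radius is smaller than the image). */
    if (x < 0) x = -x;
    if (x > image_width - 1) x = 2 * image_width - 2 - x;
    if (y < 0) y = -y;
    if (y > image_height - 1) y = 2 * image_height - 2 - y;
    return src[y * image_width + x];
}

With a sampler like this, the first-window loops can simply run kernel_X (or kernel_Y) from -radius to radius and sum sample_reflect(...), and the separate wrap/border loops disappear.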

Related

OCR: why are my results worse when I apply a median filter?

I am currently developing an OCR for Sudoku puzzles, and I am trying to first get a clean black-and-white image. I first apply a grayscale conversion, then a median filter, then Otsu's algorithm.
My problem is that my results are better when I don't apply my median filter.
Does anyone know why?
starting image
with my median filter
without my median filter
Here is the code for my median filter:
void median_filter(SDL_Surface *image) {
    int width = image->w;
    int height = image->h;
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            Uint8 gray_values[9];
            int index = 0;
            // Collect the 3x3 neighbourhood as grayscale values
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int pixel_offset = (y + dy) * image->pitch + (x + dx) * 4;
                    Uint8 r = *(Uint8 *)((Uint8 *)image->pixels + pixel_offset);
                    Uint8 g = *(Uint8 *)((Uint8 *)image->pixels + pixel_offset + 1);
                    Uint8 b = *(Uint8 *)((Uint8 *)image->pixels + pixel_offset + 2);
                    gray_values[index++] = (0.3 * r) + (0.59 * g) + (0.11 * b);
                }
            }
            // cmpfunc is a standard qsort comparator for Uint8 (not shown)
            qsort(gray_values, 9, sizeof(Uint8), cmpfunc);
            Uint8 gray = gray_values[4]; // median of the 9 values
            int pixel_offset = y * image->pitch + x * 4;
            *(Uint8 *)((Uint8 *)image->pixels + pixel_offset) = gray;
            *(Uint8 *)((Uint8 *)image->pixels + pixel_offset + 1) = gray;
            *(Uint8 *)((Uint8 *)image->pixels + pixel_offset + 2) = gray;
        }
    }
}
You are filtering with some neighbour values that were already filtered: the three pixels above and the one on the left.
You need to write the median values into a new image. This must also include the unfiltered pixels around the edges.
If you are applying multiple filters, then use one buffer as the source and another as the destination, and swap them for the next filter application (by passing two buffers to the filter functions).
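A minimal sketch of that fix, reusing the layout from the code above; gray_at and set_gray are hypothetical helpers that read a grayscale value from, and write one to, a surface at (x, y) using the same pitch arithmetic as in the question:

void median_filter_buffered(SDL_Surface *src, SDL_Surface *dst) {
    for (int y = 1; y < src->h - 1; y++) {
        for (int x = 1; x < src->w - 1; x++) {
            Uint8 gray_values[9];
            int index = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    gray_values[index++] = gray_at(src, x + dx, y + dy); /* reads src only */
            qsort(gray_values, 9, sizeof(Uint8), cmpfunc);
            set_gray(dst, x, y, gray_values[4]); /* writes dst only */
        }
    }
}

Because the source surface is never written to, every 3x3 window sees only original pixel values, and the top/left neighbours are no longer pre-filtered.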

How to properly do centered zooming?

My problem is more generic than programming; however, it involves some basic C code. I hope this won't be closed here.
I have a rounded target display, which will display an image, first centered and fitted:
The circle's radius is 360; that's fixed.
I need to add zoom-in and zoom-out functionality (in case the image is larger than the target). In this example the above image is 1282x720, so it's well above the circle's size. (To fit into the circle, it's now roughly 313x176.)
I would like to do a properly aligned "center-fixed zoom", i.e. whatever is currently centered shall remain centered after the zoom operation.
The image is put into a component called a scroller, which has an option to set its offset, i.e. how many pixels it shall skip from the top and left of its content. By default, this scroller component aligns its content to the top-left corner.
I've put a red dot into the middle of the image, to be easier to follow.
So upon zooming in, this happens (the image is starting to be left-aligned):
Please note it is still in the middle vertically, as it's still smaller in height than its container.
However, on the next zoom-in step the red centerpoint will move slightly downwards, as the image now has more height than the container, so it has also started being top-aligned:
Now, making it always stay in the center is easy:
I need to ask the scroller to scroll to
image_width/2 - 180, //horizontal skip
image_height/2 - 180 //vertical skip
In this case, if I zoom in with 5 steps from fitted size to full size, the scroller's skip numbers are these:
Step0 (fit): 0, 0
Step1: 73, 0
Step2: 170, 16
Step3: 267, 71
Step4: 364, 125
Step5 (original size): 461, 180
But I don't want the image to stay centered constantly; I'd rather do something similar to what image editors do, i.e. the center point shall remain in the center during the zoom operation, then the user can pan, and the next zoom operation will keep the new center point in the center.
How shall I do this?
Target language is C, and there is no additional 3rd party library which is usable, I'll need to do this manually.
Scroller is actually an elm_scroller.
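In case it helps to see the idea numerically, here is a minimal sketch of the usual editor-style approach, per axis (all names are hypothetical, not actual elm_scroller API; with elm_scroller the current offsets would come from its region getters/setters):

/* Keep the content point currently under the viewport center fixed while zooming.
   Call once per axis after each zoom step. */
void zoom_about_center(int viewport,   /* viewport size on this axis, e.g. 360 */
                       int *offset,    /* current scroll offset on this axis   */
                       int old_size,   /* image size on this axis before zoom  */
                       int new_size)   /* image size on this axis after zoom   */
{
    /* Content coordinate currently shown at the viewport center. */
    double center = *offset + viewport / 2.0;
    /* The same relative point after rescaling the image. */
    double new_center = center * (double)new_size / (double)old_size;
    /* Offset that puts that point back at the viewport center... */
    int o = (int)(new_center - viewport / 2.0);
    /* ...clamped so we never scroll past the content. */
    if (o > new_size - viewport) o = new_size - viewport;
    if (o < 0) o = 0; /* also covers images smaller than the viewport */
    *offset = o;
}

Panning simply changes *offset, so the next zoom step automatically keeps the new center point centered.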
You need to modify all four position points, not only x2 and y2. Think of them as the sides of a rectangle: to keep a centered zoom, every side of the rectangle needs to "grow" towards the absolute center of the image.
X1 > Left, Y1 > Top
X2 > Right, Y2 > Bottom
#include <stdint.h>
#include <stdio.h>

typedef struct {
    int32_t x;
    int32_t y;
    int32_t width;
    int32_t height;
    uint32_t o_width;
    uint32_t o_height;
} IMG_C_POS;

void set_img_c_pos(IMG_C_POS *co, int32_t w, int32_t h) {
    co->o_height = h;
    co->o_width = w;
    co->height = h;
    co->width = w;
    co->x = 0;
    co->y = 0;
}

void add_img_zoom(IMG_C_POS *co, uint16_t zoom) {
    uint32_t zoom_y = (co->o_height / 100) * (zoom / 2);
    uint32_t zoom_x = (co->o_width / 100) * (zoom / 2);
    co->height -= zoom_y;
    co->width -= zoom_x;
    co->x += zoom_x;
    co->y += zoom_y;
}

void sub_img_zoom(IMG_C_POS *co, uint16_t zoom) {
    uint32_t zoom_y = (co->o_height / 100) * (zoom / 2);
    uint32_t zoom_x = (co->o_width / 100) * (zoom / 2);
    co->height += zoom_y;
    co->width += zoom_x;
    co->x -= zoom_x;
    co->y -= zoom_y;
}

void img_new_center(IMG_C_POS *co, int16_t nx, int16_t ny) {
    int32_t oy = co->o_height / 2;
    if (oy <= ny) {
        co->height += oy - ny;
        co->y += oy - ny;
    } else {
        co->height -= oy - ny;
        co->y -= oy - ny;
    }
    int32_t ox = co->o_width / 2;
    if (ox <= nx) {
        co->width += ox - nx;
        co->x += ox - nx;
    } else {
        co->width -= ox - nx;
        co->x -= ox - nx;
    }
}

void offset_img_center(IMG_C_POS *co, int16_t x_offset, int16_t y_offset) {
    if (y_offset != 0) {
        int32_t y_m_size = (co->o_height / 100) * y_offset;
        co->height += y_m_size;
        co->y += y_m_size;
    }
    if (x_offset != 0) {
        int32_t x_m_size = (co->o_width / 100) * x_offset;
        co->width += x_m_size;
        co->x += x_m_size;
    }
}

int main(void) {
    IMG_C_POS position;
    set_img_c_pos(&position, 1282, 720);
    sub_img_zoom(&position, 50);
    img_new_center(&position, (1282 / 2) - 300, (720 / 2) + 100);
    for (int i = 0; i < 4; i++) {
        printf("X1 -> %-5i Y1 -> %-5i X2 -> %-5i Y2 -> %-5i \n",
               position.x, position.y, position.width, position.height);
        offset_img_center(&position, 4, -2);
        add_img_zoom(&position, 20);
    }
    return 0;
}

Backpropagation algorithm doesn't guess MNIST data at all

I have been trying to write an artificial neural network that uses backpropagation, and I have been trying to train it to recognize digits using MNIST data. For XOR training, this program can complete the training in 63000 epochs. I use a 0.5 learning rate for both the weights and the biases. For XOR, I use only one hidden layer with 2 neurons, and it seems to work. But when it comes to MNIST, I use 20 neurons in 1 hidden layer with a learning rate of 3 for both weights and biases (I also tried smaller and higher learning rates, but this looked like a good number), and even with 60000 images it does not come close to guessing the numbers. Sometimes it guesses only one number, like 1, 4 or 9; sometimes it doesn't guess anything. I trained it for 450 epochs but it still didn't change anything. The full source code is here: https://github.com/Fethbita/perceptron/blob/master/backprop_mnist_rewrite.c
I thought there was a problem with my backpropagation algorithm, but I couldn't find one after a week of searching. Can you help me find the problem?
// Testing the images on the network.
// I test by looking at all the outputs and checking whether any of them is > 0.9;
// if so, I compare it with the known output of the image.
double outputs_test[number_of_outputs + hidden_layers * hidden];
int correct = 0;
for (int i = 0; i < training_data_size; i++)
{
    feed_forward(train_x + inputs * i, outputs_test, w, bias);
    int maxind = -1;
    int num = 0;
    for (int j = 0; j < number_of_outputs; j++)
    {
        int output_n = hidden_layers * hidden + j;
        if (outputs_test[output_n] > 0.9 && maxind == -1)
        {
            maxind = j;
        }
        else if (outputs_test[output_n] > 0.9 && maxind > -1)
        {
            maxind = -2; // more than one output fired; count as wrong
        }
        if (train_t[i][j])
        {
            num = j;
        }
    }
    if (num == maxind)
    {
        correct++;
    }
}
printf("Total correct guesses in this epoch = %d\n\n", correct);
////////////////
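As an aside (this is not from the original post): requiring an output to exceed 0.9 means an undertrained network can "guess nothing" even when one output is clearly the largest. A common alternative for evaluation is a plain arg-max over the output neurons:

// Pick the output neuron with the largest activation as the guess.
int maxind = 0;
for (int j = 1; j < number_of_outputs; j++)
{
    if (outputs_test[hidden_layers * hidden + j] > outputs_test[hidden_layers * hidden + maxind])
        maxind = j;
}
// maxind is now the digit the network considers most likely.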
I have these function prototypes and functions:
void feed_forward(double* train_input, double* outputs, double* w, double* bias);
int train(double* train_x, double** train_t, double* w, double* bias);
double activation_function(int number_of_weights, const double* w, const double* x, const double* bias);
double sigmoid(double x);
I won't show the train function in its entirety, but here is the part that does the backpropagation:
int train(double* train_x, double** train_t, double* w, double* bias)
// train_x: the address of the inputs. For example, for the MNIST data the first 784 doubles correspond to the first image, the next 784 to the second image, etc.
// train_t: the address that corresponds to the outputs. Each pointer in the array points to an array of doubles that corresponds to the outputs. For the MNIST data each pointer points to 10 doubles, and if the corresponding image is a 4, the double array looks like this: [0] = 0.0 [1] = 0.0 [2] = 0.0 [3] = 0.0 [4] = 1.0 [5] = 0.0 [6] = 0.0 [7] = 0.0 [8] = 0.0 [9] = 0.0
// w: the addresses of the weights
// bias: the addresses of the biases
{
    total_error = 0;
    int train_out_index = 0;
    double* train_output;
    for (double* train_input = train_x; train_input < train_x + training_data_size * inputs; train_input += inputs)
    {
        train_output = train_t[train_out_index];
        feed_forward(train_input, outputs, w, bias);
        for (int i = 0; i < number_of_outputs; i++)
        {
            // Calculates the error, but I don't understand why we multiply it by 0.5.
            // If the network guesses every number wrong, the maximum this total_error can be is 0.5 for each output neuron; since there are 10 output neurons, that is 5 per input. With 60000 images it can be 300000 in total, and since we divide by the number of inputs at the end, it is 300000/60000 = 5. But what does this 5 mean? It is 5 if every output the network guesses is wrong.
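            // (For what it's worth: the 0.5 is conventional. With E = 0.5 * (t - o)^2
            // the derivative is dE/do = (o - t), so the 2 from the square cancels and
            // the delta formulas below stay clean; it scales the reported error value,
            // not the location of its minimum.)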
            int output_n = hidden_layers * hidden + i;
            total_error += .5 * (train_output[i] - outputs[output_n]) * (train_output[i] - outputs[output_n]);
        }
        //================= calculates the deltas
        // backward pass
        memset(deltas, 0, (size_t)total_weights * sizeof(double));
        // output layer (hidden to output)
        for (int i = number_of_outputs * hidden; i > 0; i--)
        {
            int a_index = (hidden_layers * hidden) + number_of_outputs - 1 - (i - 1) / hidden;
            int target_index = number_of_outputs - 1 - (i - 1) / hidden;
            deltas[total_weights - i] = (outputs[a_index] - train_output[target_index]) * outputs[a_index] * (1 - outputs[a_index]);
        }
        // rest of the layers
        for (int i = total_weights - number_of_outputs * hidden - 1; i >= 0; i--)
        {
            // first hidden layer (input to hidden)
            if (i < inputs * hidden && hidden_layers > 1)
            {
                for (int j = 0; j < hidden; j++)
                {
                    int w_index = inputs * hidden + j * hidden + (i / inputs);
                    deltas[i] += deltas[w_index] * w[w_index];
                }
                deltas[i] *= outputs[i / inputs] * (1 - outputs[i / inputs]);
            }
            // last hidden layer (hidden to hidden) or first hidden layer when there's only one hidden layer
            else if ((i > inputs * hidden + hidden * hidden * (hidden_layers - 2) && hidden_layers > 1) || (i < inputs * hidden && hidden_layers == 1))
            {
                for (int j = 0; j < number_of_outputs; j++)
                {
                    int w_index = (inputs * hidden + hidden * hidden * (hidden_layers - 1)) + j * hidden + (i / inputs);
                    deltas[i] += deltas[w_index] * w[w_index];
                }
                int a_j_index;
                if (hidden_layers == 1)
                {
                    a_j_index = i / inputs;
                }
                else
                {
                    a_j_index = (i - inputs * hidden) / hidden + hidden;
                }
                deltas[i] *= outputs[a_j_index] * (1 - outputs[a_j_index]);
            }
            // other hidden layers (hidden to hidden)
            else
            {
                for (int j = 0; j < hidden; j++)
                {
                    int next_layer_first_index = ((i - inputs * hidden) / (hidden * hidden) + 1) * hidden * hidden + (inputs * hidden);
                    int w_index = next_layer_first_index + j * hidden + ((i - inputs * hidden) / hidden) % hidden;
                    deltas[i] += deltas[w_index] * w[w_index];
                }
                int a_j_index = (i - inputs * hidden) / hidden + hidden;
                deltas[i] *= outputs[a_j_index] * (1 - outputs[a_j_index]);
            }
        }
        //=================
        // update the weights
        for (int i = 0; i < total_weights; i++)
        {
            // update biases
            if (i <= hidden_layers)
            {
                if (i == 0)
                {
                    bias[i] -= bias_learning_rate * deltas[0];
                }
                else
                {
                    bias[i] -= bias_learning_rate * deltas[hidden * inputs + (i - 1) * hidden * hidden];
                }
            }
            // first hidden layer (input to hidden)
            if (i < inputs * hidden)
            {
                deltas[i] *= train_input[i % inputs];
            }
            // output layer
            else if (i >= total_weights - number_of_outputs * hidden)
            {
                deltas[i] *= outputs[hidden * (hidden_layers - 1) + i % hidden];
            }
            // other hidden layers (hidden to hidden)
            else
            {
                deltas[i] *= outputs[(((i - hidden * inputs) / hidden) / hidden) * hidden + i % hidden];
            }
            w[i] -= learning_rate * deltas[i];
        }
        train_out_index++;
    }
}
// Feeds one input to the network
void feed_forward(double* train_input, double* outputs, double* w, double* bias)
// train_input: the address of the input to be fed
// outputs: outputs of each neuron will be written here
// w: weights of the network
// bias: biases of the network
{
    // forward pass
    for (int i = 0; i < hidden * hidden_layers; i++)
    {
        // first layer
        if (i < hidden)
        {
            outputs[i] = sigmoid(activation_function(inputs, w + i * inputs, train_input, bias));
        }
        // hidden layers
        else
        {
            int w_index = inputs * hidden + (i / hidden - 1) * (hidden * hidden) + (i % hidden) * hidden;
            int x_index = (i / hidden - 1) * hidden;
            outputs[i] = sigmoid(activation_function(hidden, w + w_index, &outputs[x_index], &bias[i / hidden]));
        }
    }
    // output layer
    for (int i = 0; i < number_of_outputs; i++)
    {
        int output_n = hidden_layers * hidden + i;
        int w_index = total_weights + hidden * (i - number_of_outputs);
        outputs[output_n] = sigmoid(activation_function(hidden, w + w_index, &outputs[(hidden_layers - 1) * hidden], &bias[hidden_layers]));
    }
}
// This function activates one neuron.
double activation_function(int number_of_weights, const double* w, const double* x, const double* bias)
// number_of_weights: the number of weights connected to the neuron being activated
// w: the address of the first weight connected to the neuron
// x: the address of the first output of the previous layer
// bias: the address of the bias to be added to the weighted sum
{
    double a = 0;
    for (int i = 0; i < number_of_weights; i++)
    {
        a += w[i] * x[i];
    }
    a += *bias;
    return a;
}
// Sigmoid function
double sigmoid(double a)
{
    return 1 / (1 + exp(-a));
}
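Since the question is whether the backward pass itself is correct, one standard way to test it is a finite-difference gradient check: nudge one weight, measure the change in the total error, and compare it against the analytic gradient. A sketch under assumptions: compute_total_error is a hypothetical helper that runs feed_forward over a small fixed batch and returns the summed 0.5 * (t - o)^2 error.

// Numerically estimate dE/dw[i] by central differences.
double numeric_gradient(double* w, int i)
{
    const double eps = 1e-5;
    double saved = w[i];
    w[i] = saved + eps;
    double e_plus = compute_total_error();
    w[i] = saved - eps;
    double e_minus = compute_total_error();
    w[i] = saved; // restore the weight
    return (e_plus - e_minus) / (2.0 * eps);
}

If this estimate and the analytic gradient for w[i] (the deltas[i] value just before the weight update) disagree by more than a tiny relative error, the bug is in the index arithmetic of the backward pass for that weight.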

Wrong result for non square image

I am trying to implement a dark (not exactly) emboss filter. My problem is that when I use it on the SQUARE Lena image, 512x512, the result is good.
But when I use it on an image with a rectangular shape, e.g. 1280x720, the result is all messed up. Why is that? The format of the images is RGB.
GOOD result with Lena 512x512 (original):
WRONG result with 1280x720 image (original not same size just for comparison):
For a 24-bit image, if the width of the image is 682 then each row needs padding, because 682 * 3 is not a multiple of 4. Try changing the image width to 680 and try again.
To compute the row padding, use the following formula:
int pad = (4 - (WIDTH * 3) % 4) % 4; /* padding bytes per row     */
int stride = WIDTH * 3 + pad;        /* total bytes per image row */
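For illustration, here is how the stride would then be used to walk a row-padded buffer (buffer is a hypothetical pointer to the first byte of pixel data):

unsigned char *row = buffer + fb_j * stride; /* start of row fb_j             */
unsigned char *px  = row + fb_i * 3;         /* pixel fb_i: 3 bytes per pixel */
/* px[0], px[1], px[2] are the three colour channels of that pixel */

Indexing with WIDTH * 3 instead of the padded stride is the classic cause of images that shear progressively on widths whose row size is not a multiple of 4.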
Change the condition to fb_j < HEIGHT - 1 - FILTER_HEIGHT and fb_i < WIDTH - 1 - FILTER_WIDTH to avoid buffer overflow.
The bitmap is scanned from top to bottom. It worked fine when I switched the dimensions as follows (but I loaded the bitmap differently):
//Pixel frame_buffer[WIDTH][HEIGHT];
//Pixel temp_buffer[WIDTH][HEIGHT];
Pixel frame_buffer[HEIGHT][WIDTH];
Pixel temp_buffer[HEIGHT][WIDTH];
...
for (int fb_j = 1; fb_j < HEIGHT - 1 - FILTER_HEIGHT; fb_j++) {
    for (int fb_i = 1; fb_i < WIDTH - 1 - FILTER_WIDTH; fb_i++) {
        float r = 0, g = 0, b = 0;
        for (int ker_i = 0; ker_i < FILTER_WIDTH; ker_i++) {
            for (int ker_j = 0; ker_j < FILTER_HEIGHT; ker_j++) {
                r += ((float)(frame_buffer[fb_j + ker_j][fb_i + ker_i].r / 255.0) * emboss_kernel[ker_j][ker_i]);
                g += ((float)(frame_buffer[fb_j + ker_j][fb_i + ker_i].g / 255.0) * emboss_kernel[ker_j][ker_i]);
                b += ((float)(frame_buffer[fb_j + ker_j][fb_i + ker_i].b / 255.0) * emboss_kernel[ker_j][ker_i]);
            }
        }
        if (r > 1.0) r = 1.0;
        else if (r < 0) r = 0;
        if (g > 1.0) g = 1.0;
        else if (g < 0) g = 0;
        if (b > 1.0) b = 1.0;
        else if (b < 0) b = 0;
        // Output buffer which will be rendered after convolution
        temp_buffer[fb_j][fb_i].r = (GLubyte)(r * 255.0);
        temp_buffer[fb_j][fb_i].g = (GLubyte)(g * 255.0);
        temp_buffer[fb_j][fb_i].b = (GLubyte)(b * 255.0);
    }
}
Also try running a direct copy for testing. Example:
temp_buffer[fb_j][fb_i].r = frame_buffer[fb_j][fb_i].r;
temp_buffer[fb_j][fb_i].g = frame_buffer[fb_j][fb_i].g;
temp_buffer[fb_j][fb_i].b = frame_buffer[fb_j][fb_i].b;

Ray Tracing calculation in C

I'm new to ray tracing and am trying to program one in C. But my program keeps showing a dot (around 1-3 pixels) of the sphere in the wrong places, and now I'm confused. This feels like a very stupid question, but I'm confused about exactly how big 1 unit of radius is for a sphere. What I mean by that is: if the radius is 1, is the circle 2 pixels wide?
I know all the calculations, and I triple-checked my code for errors, but just in case, here is part of it:
Directions:
//size: 1024x768, view point (512 384 1), screen (0 0 0) to (1024 768 0)
ray[0] = x - start_x;
ray[1] = y - start_y;
ray[2] = 0 - start_z;
//normalize
double length;
length = (sqrt((ray[0]*ray[0]) + (ray[1]*ray[1]) + (ray[2]*ray[2])));
ray[0] = ray[0]/length;
ray[1] = ray[1]/length;
ray[2] = ray[2]/length;
Intersection:
temp = top; // my struct with sphere data: _x, _y, _z, _r, _red, _green, _blue
// x and y are the current pixel values
while (temp != NULL) {
    x_diff = start_x - temp->_x + 0.0;
    y_diff = start_y - temp->_y + 0.0;
    z_diff = start_z - temp->_z + 0.0;
    // a = 1 because my direction is normalized
    b = 2.0 * ((rayVector[0] * x_diff) + (rayVector[1] * y_diff) + (rayVector[2] * z_diff));
    c = (x_diff * x_diff * 1.0) + (y_diff * y_diff) + (z_diff * z_diff) - (temp->_r * temp->_r);
    check = (b * b) - (4.0 * c);
    if (check < 0) { // 0 intersections
        pixels[width][height][0] = 0.0;
        pixels[width][height][1] = 0.0;
        pixels[width][height][2] = 0.0;
    }
    else if (check == 0) { // 1 intersection
        r1 = (b * -1.0) / 2.0;
        if (r1 < nearest_z) {
            nearest_z = r1;
            pixels[width][height][0] = temp->_red;
            pixels[width][height][1] = temp->_green;
            pixels[width][height][2] = temp->_blue;
        }
    }
    else { // 2 intersections
        r1 = ((b * -1.0) + sqrt(check)) / 2.0;
        r2 = ((b * -1.0) - sqrt(check)) / 2.0;
        if ((r1 < r2) && (r1 < nearest_z)) {
            nearest_z = r1;
            pixels[width][height][0] = 255.0;
            pixels[width][height][1] = 0;
            pixels[width][height][2] = 0;
        }
        else if ((r2 < r1) && (r2 < nearest_z)) {
            nearest_z = r2;
            pixels[width][height][0] = temp->_red;
            pixels[width][height][1] = temp->_green;
            pixels[width][height][2] = temp->_blue;
        }
    }
    temp = temp->next;
}
I haven't done any lighting yet, since the flat colouring doesn't work. I'm new to OpenGL, so expect me to be missing some common functions in the code. Thanks in advance.
Edit:
I only have one sphere currently, but my output looks like: img1
I was expecting a bigger circle. Also, I had a printf for each intersection (when there is one), and when I manually plot them on paper, they form a 4x5 pixel square. But there are 4 dots in the output.
Edit 2: I changed the sphere to: x = 512 y = 384 z = -21 r = 30, and it gave me this:
img2
Again, I only have one sphere, and there are 4 in the image. Also, there are holes between the lines?
If I change the z value to -20, my output is now all white (the colour of the sphere).
I use glDrawPixels(1024,768,GL_RGB,GL_FLOAT,pixels); to draw
I had an RGB output file and everything seems to be in the right place, but when I draw in the program, it is off.
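On the "how big is a radius" question, a quick sanity check (this assumes a simple pinhole model matching the direction code, with the eye at z = 1 and the screen at z = 0; sphere_z and sphere_r are hypothetical stand-ins for the sphere data):

/* Approximate on-screen radius of a sphere, valid while the sphere is
   small relative to its distance and fully in front of the eye. */
double eye_to_screen = 1.0;            /* eye z (1) minus screen z (0)         */
double eye_to_sphere = 1.0 - sphere_z; /* distance from the eye to the center  */
double apparent_r = sphere_r * eye_to_screen / eye_to_sphere;

So a radius-1 sphere is not automatically 2 pixels wide; its pixel size shrinks with distance. Note also that with r = 30 and the sphere at z = -21 (or -20), the eye at z = 1 is only 22 (or 21) units from the center, i.e. inside the sphere, so mathematically every ray intersects it; the all-white output at z = -20 is consistent with that.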
