Method to apply Sketch effect and Cartoon effect - c

I would like to know where I can find 3 good algorithms, or 3 examples of C code, to:
apply a sketch effect to an image (black and white)
apply a sketch effect to an image (color)
apply a cartoon effect to an image (color)
I already have code for the black-and-white sketch, but the result is too white.
void sketchFilter(int *pixels, int width, int height) {
    changeImageToGray(pixels, width, height);
    int *tempPixels = new int[width * height];
    memcpy(tempPixels, pixels, width * height * sizeof(int));
    int threshold = 7;
    float num = 8;
    for (int i = 1; i < height - 1; i++) {
        for (int j = 1; j < width - 1; j++) {
            Color centerColor(tempPixels[i * width + j]);
            int centerGray = centerColor.R();
            Color rightBottomColor(tempPixels[(i + 1) * width + j + 1]);
            int rightBottomGray = rightBottomColor.R();
            if (abs(centerGray - rightBottomGray) >= threshold) {
                pixels[i * width + j] = RGB2Color(0, 0, 0); // black
            }
            else {
                pixels[i * width + j] = RGB2Color(255, 255, 255); // white
            }
        }
    }
    delete [] tempPixels;
}
Is there any way to improve this code, or should I go with a completely different approach?
How can I do both color cartoon (doodle?) and color sketch?
Thanks!

The cartoon-like style is described on this page; it can be achieved via smoothing followed by an (aggressive) posterize effect. Both can be carried out with OpenCV, using any smoothing filter for the first step and PyrMeanShiftFiltering for the posterize.
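If OpenCV is not available, the same two steps can be sketched in plain C. The snippet below is only an illustration of the idea (a 3x3 box blur for the smoothing, then a crude posterize that quantizes each channel to a few levels); the interleaved 8-bit RGB buffer layout and the function name are assumptions, not part of the original answer.
// Sketch of "smooth, then posterize" on an interleaved 8-bit RGB buffer
// (3 bytes per pixel). Needs <stdlib.h> and <string.h>; levels must be >= 2.
void cartoonize(unsigned char *rgb, int width, int height, int levels)
{
    int stride = width * 3;
    unsigned char *tmp = malloc((size_t)stride * height);
    memcpy(tmp, rgb, (size_t)stride * height);

    // 1) smoothing: 3x3 box blur, border pixels left untouched
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            for (int c = 0; c < 3; c++) {
                int sum = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        sum += tmp[(y + dy) * stride + (x + dx) * 3 + c];
                rgb[y * stride + x * 3 + c] = (unsigned char)(sum / 9);
            }
        }
    }
    free(tmp);

    // 2) aggressive posterize: quantize each channel to `levels` values
    int step = 255 / (levels - 1);
    for (size_t i = 0; i < (size_t)stride * height; i++) {
        int q = ((rgb[i] + step / 2) / step) * step;
        rgb[i] = (unsigned char)(q > 255 ? 255 : q);
    }
}
For a stronger cartoon look, the blur can be repeated a few times (or replaced by an edge-preserving filter) before the posterize step.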
Edit:
The pencil (colour) sketch is described, for example, in this StackOverflow question.
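A common recipe for the pencil sketch is: convert to grayscale, invert, blur the inverted image, then colour-dodge blend it with the original grayscale. The sketch below illustrates that idea in C; it is not necessarily the exact method from the linked question, and blur_gray() is a hypothetical helper (any Gaussian or box blur will do).
// Pencil-sketch illustration on an 8-bit grayscale buffer. Needs <stdlib.h>.
// blur_gray() is assumed to blur its first buffer into its second one.
void pencil_sketch(const unsigned char *gray, unsigned char *out,
                   int width, int height)
{
    size_t n = (size_t)width * height;
    unsigned char *inv = malloc(n);
    unsigned char *soft = malloc(n);

    for (size_t i = 0; i < n; i++)
        inv[i] = 255 - gray[i];            // invert

    blur_gray(inv, soft, width, height);   // smooth the inverted image

    for (size_t i = 0; i < n; i++) {
        int d = 255 - soft[i];             // colour-dodge denominator
        int v = (d == 0) ? 255 : gray[i] * 255 / d;
        out[i] = (unsigned char)(v > 255 ? 255 : v);
    }

    free(inv);
    free(soft);
}
For a colour sketch, one frequent variation is to multiply the resulting per-pixel sketch intensity back onto the original RGB channels.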

Related

OCR: why are my results worse when I apply a median filter?

I am currently developing an OCR for a sudoku and I am trying to first get a clean black-and-white image. I first apply a grayscale conversion, then a median filter, then Otsu's algorithm.
My problem is that my results are better when I don't apply my median filter.
Does anyone know why?
starting image
with my median filter
without my median filter
Here is the code for my median filter:
void median_filter(SDL_Surface *image) {
    int width = image->w;
    int height = image->h;
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            Uint8 gray_values[9];
            int index = 0;
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int pixel_offset = (y+dy) * image->pitch + (x+dx) * 4;
                    Uint8 r = *(Uint8 *)((Uint8 *)image->pixels + pixel_offset);
                    Uint8 g = *(Uint8 *)((Uint8 *)image->pixels + pixel_offset + 1);
                    Uint8 b = *(Uint8 *)((Uint8 *)image->pixels + pixel_offset + 2);
                    gray_values[index++] = (0.3 * r) + (0.59 * g) + (0.11 * b);
                }
            }
            qsort(gray_values, 9, sizeof(Uint8), cmpfunc);
            Uint8 gray = gray_values[4];
            int pixel_offset = y * image->pitch + x * 4;
            *(Uint8 *)((Uint8 *)image->pixels + pixel_offset) = gray;
            *(Uint8 *)((Uint8 *)image->pixels + pixel_offset + 1) = gray;
            *(Uint8 *)((Uint8 *)image->pixels + pixel_offset + 2) = gray;
        }
    }
}
You are filtering with some neighbour values that were already filtered – the three pixels above and one on the left.
You need to create median values in a new image. This must also include the unfiltered pixels around the edges.
If you are applying multiple filters, then use one buffer as the source and another as the destination, and swap them for the next filter application (by passing two buffers to the filter functions).
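As a concrete illustration of the buffered approach (a sketch only, reusing the cmpfunc comparator and the 4-bytes-per-pixel layout from the question; needs <stdlib.h> and <string.h>):
void median_filter_buffered(SDL_Surface *image) {
    int width = image->w;
    int height = image->h;
    size_t size = (size_t)height * image->pitch;

    Uint8 *src = (Uint8 *)image->pixels;     // read only from the original pixels
    Uint8 *dst = malloc(size);
    memcpy(dst, src, size);                  // keeps the unfiltered border pixels

    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            Uint8 gray_values[9];
            int index = 0;
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int off = (y + dy) * image->pitch + (x + dx) * 4;
                    Uint8 r = src[off];
                    Uint8 g = src[off + 1];
                    Uint8 b = src[off + 2];
                    gray_values[index++] = (Uint8)(0.3 * r + 0.59 * g + 0.11 * b);
                }
            }
            qsort(gray_values, 9, sizeof(Uint8), cmpfunc);
            Uint8 gray = gray_values[4];
            int off = y * image->pitch + x * 4;
            dst[off] = dst[off + 1] = dst[off + 2] = gray;
        }
    }

    memcpy(image->pixels, dst, size);        // write the filtered result back
    free(dst);
}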

Blur filter in C results in only a slightly changed image

I am trying to make a blur filter in C that takes the neighbouring pixels of the main pixel, takes the average of the RGB values and stores it in the temp array, then changes the image using the temp array values. It seems correct but it is not working as intended, giving an output of a very slightly blurred image. I really don't see my mistake and would be very thankful if someone helped; sorry if I did something horrible, I started learning C last week.
I checked this post
Blurring an Image in c pixel by pixel - special cases
but I did not see where I went wrong.
I'm working with this data struct:
typedef struct
{
    BYTE rgbtBlue;
    BYTE rgbtGreen;
    BYTE rgbtRed;
} RGBTRIPLE;
void blur(int height, int width, RGBTRIPLE image[height][width])
{
    // ints to use later
    int j;
    int p;
    RGBTRIPLE temp[height][width];
    for(int n = 0; n < height; n++) // loop to check every pixel
    {
        for(int k = 0; k < width; k++)
        {
            int widx = 3;
            int hghtx = 3;
            // conditionals for border cases
            int y = 0;
            if(n == 0)
            {
                p = 0;
                hghtx = 2;
            }
            if(n == height - 1)
            {
                p = -1;
                hghtx = 2;
            }
            if(k == 0)
            {
                j = 0;
                widx = 2;
            }
            if(k == width - 1)
            {
                j = -1;
                widx = 2;
            }
            for(int u = 0; u < hghtx; u++) // matrix of pixels around the main pixel using the conditionals gathered before
                for(int i = 0; i < widx; i++)
                    if(y == 1) // takes the average of color and stores it in the RGB temp
                    {
                        temp[n][k].rgbtGreen = temp[n][k].rgbtGreen + image[n + p + u][k + j + i].rgbtGreen / (hghtx * widx);
                        temp[n][k].rgbtRed = temp[n][k].rgbtRed + image[n + p + u][k + j + i].rgbtRed / (hghtx * widx);
                        temp[n][k].rgbtBlue = temp[n][k].rgbtBlue + image[n + p + u][k + j + i].rgbtBlue / (hghtx * widx);
                    }
                    else // get first value of temp
                    {
                        temp[n][k].rgbtGreen = (image[n + p + u][k + j + i].rgbtGreen) / (hghtx * widx);
                        temp[n][k].rgbtRed = (image[n + p + u][k + j + i].rgbtRed) / (hghtx * widx);
                        temp[n][k].rgbtBlue = (image[n + p + u][k + j + i].rgbtBlue) / (hghtx * widx);
                        y++;
                    }
        }
    }
    // changes the original image to the blured one
    for(int n = 0; n < height; n++)
        for(int k = 0; k < width; k++)
            image[n][k] = temp[n][k];
}
I think it's a combination of things.
If the code worked the way you expect, you would still be doing a blur over just 3x3 pixels, and that is hardly noticeable, especially on large images (I'm pretty sure it will be unnoticeable on a 4000x3000 image).
There are some problems with the code.
As #Fe2O3 says, at the end of the first line, widx will change to 2 and stay 2 for the rest of the image.
You are reading from temp[][] without initializing it. I think that if you compile it in release mode (not debug), temp[][] will contain random data and not all zeros as you probably expect (as #WeatherWane pointed out).
The way you calculate the average of the pixels is weird. If you use a 3x3 matrix of pixels, each pixel value should be divided by 9 in the final sum. But you divide the first pixel nine times by 2 (in effect doing /256), the second one eight times by 2 (so it's pixel/128), etc., until the last one is divided by 2. So basically, it's mostly the value of the bottom right pixel.
Also, since your RGB values are just bytes, you may want to divide them first and only then add them, because otherwise you'll get overflows with wild results.
Try using a debugger to see the values you are actually calculating. It can be quite an eye opener :)
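To make that concrete, here is a sketch of the blur with those fixes applied: sums are accumulated in ints instead of bytes, existing neighbours are counted, and the division happens once at the end. It assumes the CS50-style RGBTRIPLE from the question.
void blur_fixed(int height, int width, RGBTRIPLE image[height][width])
{
    RGBTRIPLE temp[height][width];

    for (int n = 0; n < height; n++)
    {
        for (int k = 0; k < width; k++)
        {
            int sumR = 0, sumG = 0, sumB = 0, count = 0;

            for (int dn = -1; dn <= 1; dn++)
            {
                for (int dk = -1; dk <= 1; dk++)
                {
                    int y = n + dn, x = k + dk;
                    if (y < 0 || y >= height || x < 0 || x >= width)
                        continue;                 // skip neighbours outside the image
                    sumR += image[y][x].rgbtRed;
                    sumG += image[y][x].rgbtGreen;
                    sumB += image[y][x].rgbtBlue;
                    count++;
                }
            }

            temp[n][k].rgbtRed   = sumR / count;  // divide once, per channel
            temp[n][k].rgbtGreen = sumG / count;
            temp[n][k].rgbtBlue  = sumB / count;
        }
    }

    for (int n = 0; n < height; n++)
        for (int k = 0; k < width; k++)
            image[n][k] = temp[n][k];
}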

How to properly do centered zooming?

My problem is more general than programming, but it involves some basic C code; I hope this won't be closed here.
I have a rounded target display, which will display an image, first centered and fitted:
Circle's radius is 360, that's fixed.
I need to add zoom-in and zoom-out functionality (for the case when the image is larger than the target). In this example the above image is 1282x720, so it's well above the circle's size. (To fit into the circle, it's now roughly 313x176.)
I would like to do a properly aligned "center-fixed zoom", i.e.: whatever is currently centered shall remain centered after the zoom operation.
The image is put into a component called a scroller, which has an option to set its offset, i.e. how many pixels it should skip from the top and left of its content. By default, this scroller component aligns its content to the top-left corner.
I've put a red dot into the middle of the image, to make it easier to follow.
So upon zooming in, this happens (the image starts to become left-aligned):
Please note it is still in the middle vertically, as it is still smaller in height than its container.
However, on the next zoom-in step, the red center point will move slightly downwards, as the image now has more height than its container, so it also starts being top-aligned:
Now, making it always stay in the center is easy:
I need to ask the scroller to scroll to
image_width/2 - 180, //horizontal skip
image_height/2 - 180 //vertical skip
In this case, if I zoom in in 5 steps from the fitted size to full size, the scroller's skip numbers are these:
Step0 (fit): 0, 0
Step1: 73, 0
Step2: 170, 16
Step3: 267, 71
Step4: 364, 125
Step5 (original size): 461, 180
But I don't want the image to stay centered constantly; I'd rather do something similar to what image editors do, i.e.: the center point shall remain in the center during the zoom operation, then the user can pan, and the next zoom operation will keep the new center point in the center.
How shall I do this?
The target language is C, and there is no usable additional third-party library; I'll need to do this manually.
Scroller is actually an elm_scroller.
You need to modify all four position points, not only x2 and y2. Think of them as the sides of a rectangle: to keep a centered zoom, every side of the rectangle needs to "grow" towards the absolute center of the image.
X1 > Left , Y1 > Top
X2 > Right , Y2 > Bottom
#include <stdint.h>
#include <stdio.h>

typedef struct {
    int32_t x;
    int32_t y;
    int32_t width;
    int32_t heigth;
    uint32_t o_width;
    uint32_t o_heigth;
} IMG_C_POS;

void set_img_c_pos(IMG_C_POS *co, int32_t w, int32_t h) {
    co->o_heigth = h;
    co->o_width = w;
    co->heigth = h;
    co->width = w;
    co->x = 0;
    co->y = 0;
}

void add_img_zoom(IMG_C_POS *co, uint16_t zoom) {
    uint32_t zoom_y = (co->o_heigth / 100) * (zoom / 2);
    uint32_t zoom_x = (co->o_width / 100) * (zoom / 2);
    co->heigth -= zoom_y;
    co->width -= zoom_x;
    co->x += zoom_x;
    co->y += zoom_y;
}

void sub_img_zoom(IMG_C_POS *co, uint16_t zoom) {
    uint32_t zoom_y = (co->o_heigth / 100) * (zoom / 2);
    uint32_t zoom_x = (co->o_width / 100) * (zoom / 2);
    co->heigth += zoom_y;
    co->width += zoom_x;
    co->x -= zoom_x;
    co->y -= zoom_y;
}

void img_new_center(IMG_C_POS *co, int16_t nx, int16_t ny) {
    int32_t oy = co->o_heigth / 2;
    if (oy <= ny) {
        co->heigth += oy - ny;
        co->y += oy - ny;
    } else {
        co->heigth -= oy - ny;
        co->y -= oy - ny;
    }
    int32_t ox = co->o_width / 2;
    if (ox <= nx) {
        co->width += ox - nx;
        co->x += ox - nx;
    } else {
        co->width -= ox - nx;
        co->x -= ox - nx;
    }
}

void offset_img_center(IMG_C_POS *co, int16_t x_offset, int16_t y_offset) {
    if (y_offset != 0) {
        int32_t y_m_size = (co->o_heigth / 100) * y_offset;
        co->heigth += y_m_size;
        co->y += y_m_size;
    }
    if (x_offset != 0) {
        int32_t x_m_size = (co->o_width / 100) * x_offset;
        co->width += x_m_size;
        co->x += x_m_size;
    }
}

int main(void) {
    IMG_C_POS position;
    set_img_c_pos(&position, 1282, 720);
    sub_img_zoom(&position, 50);
    img_new_center(&position, (1282 / 2) - 300, (720 / 2) + 100);
    for (int i = 0; i < 4; i++) {
        printf("X1 -> %-5i Y1 -> %-5i X2 -> %-5i Y2 -> %-5i \n",
               position.x, position.y, position.width, position.heigth);
        offset_img_center(&position, 4, -2);
        add_img_zoom(&position, 20);
    }
    return 0;
}
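To address the original requirement of keeping the current centre point centred while zooming with a scroller offset, the usual trick is: convert the viewport centre back to image coordinates at the old scale, then recompute the skips at the new scale. A minimal sketch follows (the names are illustrative, not taken from elm_scroller).
// offset_x/offset_y are the scroller skips before the zoom, view_w/view_h is
// the visible area (e.g. 360x360 here), scale_old/scale_new are the image
// scale factors before and after the zoom step.
void zoom_keep_center(int *offset_x, int *offset_y,
                      int view_w, int view_h,
                      double scale_old, double scale_new)
{
    // point of the original image currently shown at the viewport centre
    double cx = (*offset_x + view_w / 2.0) / scale_old;
    double cy = (*offset_y + view_h / 2.0) / scale_old;

    // new skips so that the same point lands at the centre again
    *offset_x = (int)(cx * scale_new - view_w / 2.0);
    *offset_y = (int)(cy * scale_new - view_h / 2.0);
}
The resulting offsets should be clamped to the range [0, scaled_size - viewport] before handing them to the scroller; after the user pans, the next zoom simply starts from the new offsets, so the new centre point is preserved.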

How to draw a zigzag line?

I am creating a document-based application and I want to draw a horizontal line underlining the text. But the line should not be straight; I want to draw a line like this.
Currently I am using a System.Drawing.Graphics object to draw.
private void DrawLine(Graphics g, Point Location, int iWidth)
{
    iWidth = Convert.ToInt16(iWidth / 2);
    iWidth = iWidth * 2;
    Point[] pArray = new Point[Convert.ToInt16(iWidth / 2)];
    int iNag = 2;
    for (int i = 0; i < iWidth; i += 2)
    {
        pArray[(i / 2)] = new Point(Location.X + i, Location.Y + iNag);
        if (iNag == 0)
            iNag = 2;
        else
            iNag = 0;
    }
    g.DrawLines(Pens.Black, pArray);
}
UPDATED:
The above code works and the line draws perfectly, but it affects application performance. Is there another way to do this?
If you want fast drawing, just make a PNG image of the line you want, with a width larger than you need, and then draw the image:
private void DrawLine(Graphics g, Point Location, int iWidth)
{
    Rectangle srcRect = new Rectangle(0, 0, iWidth, zigzagLine.Height);
    Rectangle dstRect = new Rectangle(Location.X, Location.Y, iWidth, zigzagLine.Height);
    g.DrawImage(zigzagLine, dstRect, srcRect, GraphicsUnit.Pixel);
}
zigzagLine is the bitmap.
valter

WinForms control to setup clickable areas in a UserControl

I need to divide a UserControl with a background picture into multiple small clickable areas. Clicking them should simply raise an event, allowing me to determine which particular area of the picture was clicked.
The obvious solution is using transparent labels. However, they flicker heavily, so it looks like labels are not designed for this purpose; they take too much time to load.
So I'm wondering whether a lighter option exists to logically "slice up" the surface.
I also need a border around the areas, though.
on the user control do:
MouseClick += new System.Windows.Forms.MouseEventHandler(this.UserControl1_MouseClick);
and now in the UserControl1_MouseClick event do:
private void UserControl1_MouseClick(object sender, MouseEventArgs e)
{
    int x = e.X;
    int y = e.Y;
}
Now let's divide the user control into a 10x10 grid:
int xIdx = x / (Width / 10);
int yIdx = y / (Height / 10);
ClickOnArea(xIdx, yIdx);
In the ClickOnArea method you just need to decide what to do in each area, maybe using a 2D array of Action.
As for the border, do this:
protected override void OnPaint(PaintEventArgs e)
{
    base.OnPaint(e);
    Graphics g = e.Graphics;
    Pen p = new Pen(Color.Black);
    float xIdx = (float)(Width / 10.0);
    float yIdx = (float)(Height / 10.0);
    for (int i = 0; i < 10; i++)
    {
        float currVal = yIdx * i;
        g.DrawLine(p, 0, currVal, Width, currVal);
    }
    g.DrawLine(p, 0, Height - 1, Width, Height - 1);
    for (int j = 0; j < 10; j++)
    {
        float currVal = xIdx * j;
        g.DrawLine(p, currVal, 0, currVal, Height);
    }
    g.DrawLine(p, Width - 1, 0, Width - 1, Height);
}
