Stuck in a lab task (image processing) - C

Hi everyone, I am new to programming and a first-year university CS student.
I am writing a program that screens simple images looking for anomalies
(indicated by excessive patterns of red). The program should load a file and
then print out whether or not the image contains more than a certain percentage
of intensely red pixels.
So far I have the following code:
#include <stdio.h>
#include "scc110img.h"

int main()
{
    unsigned char* imageArray = LoadImage("red.bmp"); /* bytes of the image */
    int imageSize = GetSizeOfImage();
    int i;

    /* Print every byte of the image, one per line. */
    for (i = 0; i < imageSize; i++)
        printf("%d\n", imageArray[i]);

    return 0;
}
My question is: how can I modify the program so that it prints out the amounts of blue, green and red?
Something like:
blue value is 0, green value is 0, red value is 0.

You have a byte array (unsigned char) that represents the bytes of your image. Currently you are printing them out one byte at a time.
So to know how to get the individual RGB values, you need to know how they were stored.
It's as easy as that, but don't expect someone here to do it for you.
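To give the general shape (a sketch only, not a drop-in answer): BMP files store each pixel as three bytes in blue, green, red order, so if your LoadImage() happens to return the bare 24-bit pixel data, a loop over triplets could look like the following. Whether scc110img.h actually strips the header and any row padding for you is an assumption here; check its documentation.

    int i;
    /* Sketch: assumes imageArray holds bare 24-bit pixels in BMP's
       blue, green, red byte order, with no header bytes or row padding. */
    for (i = 0; i + 2 < imageSize; i += 3)
    {
        unsigned char blue  = imageArray[i];
        unsigned char green = imageArray[i + 1];
        unsigned char red   = imageArray[i + 2];
        printf("blue value is %d, green value is %d, red value is %d\n",
               blue, green, red);
    }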

This code is really incomplete. We don't know what your LoadImage() or GetSizeOfImage() functions do, but one thing is sure: the way you are representing the image in your C program is definitely not the way it is represented on disk. A 'bmp' image has several parts, and you should find out the correct way to represent it as a struct. Then you can traverse it pixel by pixel.
I would suggest using a pre-written library such as 'libbmp' to make your task easy.

Related

How is frame data stored in libav?

I am trying to learn to use libav. I have followed the very first tutorial on dranger.com, but I got a little confused at one point.
// Write pixel data
for (y = 0; y < height; y++)
    fwrite(pFrame->data[0] + y * pFrame->linesize[0], 1, width * 3, pFile);
This code clearly works, but I don't quite understand why. In particular, I don't understand how the frame data in pFrame->data is stored, whether or not it depends on the format/codec in use, why pFrame->data and pFrame->linesize are always referenced at index 0, and why we are adding y to pFrame->data[0].
In the tutorial it says
We're going to be kind of sketchy on the PPM format itself; trust us, it works.
I am not sure if writing it to the ppm format is what is causing this process to seem so strange to me. Any clarification on why this code is the way it is and how libav stores frame data would be very helpful. I am not very familiar with media encoding/decoding in general, thus why I am trying to learn.
particularly I don't understand how the frame data in pFrame->data is stored, whether or not it depends on the format/codec in use
Yes, it depends on the pix_fmt value. Some formats are planar and others are not.
why pFrame->data and pFrame->linesize is always referenced at index 0,
If you look at the struct, you will see that data is an array of pointers/a pointer to a pointer. So pFrame->data[0] is a pointer to the data in the first "plane". Some formats, like RGB, have a single plane, where all data is stored in one buffer. Other formats, like YUV, use a separate buffer for each plane, e.g. Y = pFrame->data[0], U = pFrame->data[1], V = pFrame->data[2]. Audio may use one plane per channel, etc.
and why we are adding y to pFrame->data[0].
Because the example is looping over an image line by line, top to bottom.
To get the pointer to the first pixel of any line, you multiply the linesize by the line number and then add it to the pointer.
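A small sketch of that arithmetic, assuming a packed RGB24 frame (for a planar format you would do the same with data[1], data[2], etc., each with its own linesize):

    /* Sketch: read the pixel at column x, row y of a packed RGB24 AVFrame.
       linesize[0] may be larger than width * 3 because of alignment padding,
       which is exactly why you cannot just index data[0] with y * width * 3. */
    uint8_t *line  = pFrame->data[0] + y * pFrame->linesize[0];
    uint8_t *pixel = line + x * 3;
    uint8_t r = pixel[0];
    uint8_t g = pixel[1];
    uint8_t b = pixel[2];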

SDL pixel has wrong values

I am trying to create image processing software.
I get some weird results trying to create an Unsharp Mask effect.
I will attach my code here and explain what it does and where the problems are (or at least, where I think they are):
void unsharpMask(SDL_Surface* inputSurface, SDL_Surface* outputSurface)
{
    Uint32* pixels = (Uint32*)inputSurface->pixels;
    Uint32* outputPixels = (Uint32*)outputSurface->pixels;
    Uint32* blurredPixels = (Uint32*)blurredSurface->pixels;
    meanBlur(infoSurface, blurredSurface);
    for (int i = 0; i < inputSurface->h; i++)
    {
        for (int j = 0; j < inputSurface->w; j++)
        {
            Uint8 rOriginal, gOriginal, bOriginal;
            Uint8 rBlurred, gBlurred, bBlurred;
            Uint32 rMask, gMask, bMask;
            Uint32 rFinal, gFinal, bFinal;
            SDL_GetRGB(blurredPixels[i * blurredSurface->w + j], blurredSurface->format,
                       &rBlurred, &gBlurred, &bBlurred);
            SDL_GetRGB(pixels[i * inputSurface->w + j], inputSurface->format,
                       &rOriginal, &gOriginal, &bOriginal);
            rMask = rOriginal - rBlurred;
            rFinal = rOriginal + rMask;
            if (rFinal > 255) rFinal = 255;
            if (rFinal <= 0) rFinal = 0;
            gMask = gOriginal - gBlurred;
            gFinal = gOriginal + gMask;
            if (gFinal > 255) gFinal = 255;
            if (gFinal < 0) gFinal = 0;
            bMask = bOriginal - bBlurred;
            bFinal = bOriginal + bMask;
            if (bFinal > 255) bFinal = 255;
            if (bFinal < 0) bFinal = 0;
            Uint32 pixel = SDL_MapRGB(outputSurface->format, rFinal, gFinal, bFinal);
            outputPixels[i * outputSurface->w + j] = pixel;
        }
    }
}
So, as you can see, my function gets two arguments: the image source (from which pixel data will be extracted) and a target (onto which the image will be projected). I blur the original image, then I subtract the RGB values of the blurred image from the source image to get "the mask", and then I add the mask to the original image, and that's it. I added some clamping to make sure everything stays in the correct range, and then I write every resulting pixel to the output surface. All these surfaces have been converted to SDL_PIXELFORMAT_ARGB8888. The output surface is loaded into a texture (also SDL_PIXELFORMAT_ARGB8888) and rendered on the screen.
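For reference, here is the per-channel arithmetic described above written out with a signed intermediate (a sketch; note that in the code above rMask and rFinal are Uint32, so a negative mask wraps around to a huge unsigned value before the clamping ever runs):

    /* Sketch for one channel: a signed int lets (original - blurred)
       legitimately go negative before clamping. */
    int mask  = (int)rOriginal - (int)rBlurred; /* may be negative */
    int final = (int)rOriginal + mask;          /* may exceed 255 or go below 0 */
    if (final > 255) final = 255;
    if (final < 0)   final = 0;
    Uint8 rOut = (Uint8)final;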
The results are pretty good in 90% of the image; I get the effect I want. However, there are some pixels that look weird in some places.
Original:
Result:
I tried to fix this every way I knew. I thought it was a format problem and played with the pixel bit depth, but I couldn't get to a good result. What I found is that all the values > 255 are negative values, and I tried to make those pixels completely white. That works for the skies, for example, but as you can see in my examples, the dark values on the grass are also affected, which makes this not a good solution.
I also get this kind of wrong pixel when I add contrast or do sharpening using kernel convolution, where the values are really bright/dark.
In my opinion there may be a problem with the pixel format, but I'm not sure if that's true.
Has anyone had this kind of problem before, or does anyone know a potential solution?

C: create a .ppm file from a saved text file

[Picture 1: the desired PPM output]
[Picture 3: the input text file]
I tried to write C code that creates a PPM like in picture 1 from a text file like in picture 3. If someone can help, that would be great. I am a new programmer; I have tried to write this code for 6 hours. I tried to scan in the data from the text file, put it into an array, and make a PPM with that, but my code is unusable :/.
The path forward is to split the task into smaller sub-tasks, solve and test each one of them separately, and only after they all work, combine them into a single program.
Because the OP has not posted any code, I will not post any directly useful code either. If the OP is truly blocked, not making any forward progress even after trying, this should still be of practical use. If the OP is just looking for someone to do their homework, this should annoy them immensely. Both work for me. :)
The first sub-task is to read the input into an array. There are several examples on the web, and related questions here. You'll want to put this in a separate function, so merging into the complete project later on is easier. Since you are a beginner programmer, you could go for a function like
int read_numbers(double data[], int max, FILE *in);
so that the caller declares the maximum number of data points and the stream to read from, and the function returns the number of data points read, or negative if an error occurs. Your main() for testing that function should be trivial, say
#include <stdio.h>
#include <stdlib.h>

#define MAX_NUMBERS 500

int read_numbers(double data[], int max, FILE *in); /* your function */

int main(void)
{
    double x[MAX_NUMBERS];
    int i, n;

    n = read_numbers(x, MAX_NUMBERS, stdin);
    if (n <= 0) {
        fprintf(stderr, "Error reading numbers from standard input.\n");
        return EXIT_FAILURE;
    }

    printf("Read %d numbers:\n", n);
    for (i = 0; i < n; i++)
        printf("%.6f\n", x[i]);

    return EXIT_SUCCESS;
}
The second sub-task is to generate a PPM image. PPM is actually a group of closely related image formats, also called Netpbm formats. The example image is a bitmap image -- black and white only; no colors, no shades of gray -- so the PBM format (a sibling of PPM) is suitable for this.
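For a concrete picture of what you are aiming at: a plain-text PBM file starts with the magic number P1, then the width and height, then one digit per pixel, where 1 means black. For example, this is a complete 4x3 image with a black 2x2 block in its top left corner:

P1
4 3
1 1 0 0
1 1 0 0
0 0 0 0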
I suspect it is easiest to attack this sub-task by using a two-dimensional array, sized for the largest image you can generate (i.e. unsigned char bitmap[HEIGHT_MAX][WIDTH_MAX];), but note that you can also just use a part of it. (You could also generate the image on the fly, without any array, but that is much more error prone, and not as universally applicable as using an array to store the image is.)
You'll probably need to decide the width of the bitmap based on the maximum data value, and the height of the bitmap based on the number of data points.
For testing, just fill the array with some simple patterns, or maybe just a diagonal line from top left corner to bottom right corner.
Then, consider writing a function that sets a rectangular area to a given value (0 or 1). Based on the image, you'll also need a function that draws vertical dotted lines, changing (exclusive-OR'ing) the state of each bit. For example,
#define WIDTH_MAX 1024
#define HEIGHT_MAX 768
unsigned char bitmap[HEIGHT_MAX][WIDTH_MAX];
int width = 0;
int height = 0;
void write_pbm(FILE *out); /* Or perhaps (const char *filename)? */
void fill_rect(int x, int y, int w, int h, unsigned char v);
void vline_xor(int x, int y, int h);
At this point, you should have realized that the write_pbm() function -- the one that saves the PBM image -- should be written and tested first. Then, you can use the fill_rect() function not just to draw filled rectangles, but also to initialize the image (the portion of the array you are going to use) to a background color (0 or 1).
All three functions above you can, and should, do in separate sub-steps, so that at any point you can rely on the code you've written earlier being correct and tested. That way, you only need to look for bugs in the code you have written since the last successful test! It might sound like a slow way to progress, but it actually turns out to be the fastest way to get code working. You very quickly start to love the confidence the testing gives you, and the fact that you only need to focus and worry about one thing at a time.
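As a minimal sketch of that first function (using the globals declared above and the plain P1 variant of PBM; error checking omitted on purpose):

void write_pbm(FILE *out)
{
    /* Sketch: write the used width x height part of 'bitmap'
       as a plain-text (P1) PBM image. Assumes the globals above. */
    int x, y;
    fprintf(out, "P1\n%d %d\n", width, height);
    for (y = 0; y < height; y++) {
        for (x = 0; x < width; x++)
            fputc(bitmap[y][x] ? '1' : '0', out);
        fputc('\n', out);
    }
}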
The third sub-task is to find out a way to draw the rectangles and vertical dotted lines, for various inputs.
(I cheated a bit, above, and included the fill_rect() and vline_xor() functions in the previous sub-task, because I could tell those are needed to draw the example picture.)
The vertical dotted lines are easiest to draw afterwards, using a function that draws a vertical line, leaving every other pixel untouched and exclusive-ORing every other pixel. (Hint: for (y = y0; y < y0 + height; y += 2) bitmap[y][x] ^= 1;)
That leaves the filled rectangles. Their height is obviously constant, they have a bit of vertical space in between, and they start at the left edge; so the only thing left is to calculate how wide each rectangle needs to be. (And how wide and how tall the entire bitmap should be, as previously mentioned; the largest data value and the number of data values dictate those.)
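For completeness, a sketch of fill_rect() over the same globals (the clipping against width and height is my assumption; you might prefer to treat out-of-range coordinates as a bug instead):

void fill_rect(int x, int y, int w, int h, unsigned char v)
{
    /* Sketch: set the w-by-h rectangle with top-left corner (x, y)
       to value v (0 or 1), clipped to the used part of the bitmap. */
    int i, j;
    for (i = (y < 0) ? 0 : y; i < y + h && i < height; i++)
        for (j = (x < 0) ? 0 : x; j < x + w && j < width; j++)
            bitmap[i][j] = v;
}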
Rather than writing one C source file, and adding to it every step, I recommend you write a separate program for each of the sub-steps. That is, after every time you get a sub-part working, you save that as a separate file, and keep it for reference -- a back-up, if you will. If you lose your way completely, or decide on another path for solving some problem, you only need to revert back to your last reference version, rather than start from scratch.
That kind of work-flow is known to be reliable, efficient, and scalable: it does not matter how large the project is, the above approach works. Of course, there are tools to help us do this in an easy, structured manner (with comments on each commit -- each completed unit of code, if you will); git is a popular, free, and very powerful one. Just do a web search for "git for beginners" if you are interested in it.
If you bothered to read all through the above wall of text, and grasped the work flow it describes, you won't have much trouble learning how to use tools like git to help you with the workflow. You'll also love how much typing tools like make (and the Makefiles containing the make recipes) help, and how easy it is to make and maintain projects that not only work, but also look professional. Yet, don't try to grasp all of it at once: take it one small step at a time, and verify you have a solid footing, before going onwards. That way, when you stumble, you won't hurt yourself; just learn.
Have fun!

Strange behavior with Matlab array

I am having some trouble manually creating a histogram of intensity values from a grayscale image. Below is the code that I am using to create the bins for the plot that I want to create. The code works fine for every bin except the last two. For some reason, if the intensity is 254 or 255, it puts both values into the 254 bin, and no values are accumulated in the 255 bin.
bins = zeros(1, 256);
[x, y] = size(grayImg);
for i = 1:x
    for j = 1:y
        current = grayImg(i, j);
        bins(current + 1) = bins(current + 1) + 1;
    end
end
plot(bins);
I do not understand why this behavior is happening. I have printed out the counts of 254 and 255 intensities and they are both correct. However, when using the above code to accumulate the intensity values, it does not work correctly.
Edit: Added the image I am using, the incorrect graph (the one I get with the above code), and the correct one.
A. The first problem with your code is the initial definition of bins. It seems that you come from C or something like that, but the definition should be bins = zeros(1,256);. Note also that if grayImg is of class uint8, the expression current+1 saturates at 255 (integer arithmetic in MATLAB clamps rather than wrapping), so intensities 254 and 255 both end up indexing the same bin; converting first, as in bins(double(current)+1), avoids exactly the symptom you describe.
B. The second point is that you don't need the nested loop; MATLAB has a function especially for that:
bins = hist(double(grayImg(:)), 0:255); % now you don't need the pre-definition of 'bins'.
plot(bins);
C. Try to use functions like bar or imhist or hist(grayImg(:)); it may save you all this and give a nice plot.

Image-processing basics

I have to do some image processing but I don't know where to start. My problem is as follows:
I have a 2D fiber image (attached with this post), in which the fiber edges are denoted by white and the inside of the fiber is black. I want to choose any black pixel inside the fiber and travel from it along the length of the fiber. This will involve comparing the contrast with the surrounding pixels and then travelling in the desired direction. My main aim is to find the length of the fiber.
So can someone please tell me at least where to start? I have a rough algorithm in mind for how to approach the problem, but I don't even know which software/library to use.
Regards
Adi
EDIT 1 - Instead of OpenCV, I started using MATLAB since I found it much easier. I applied the Hough transform and then the houghpeaks function with the maximum number of peaks set to 100, so that all fibers are included. After that I got the following image. How do I find the length now?
EDIT 2 - I found a research article on how to calculate the length using the Hough transform, but I'm not able to implement it in MATLAB. Someone please help.
If your images are all as clean as the one you posted, it's quite an easy problem.
The very first technique I'd try is using a Hough Transform to estimate the line parameters, and there is a good implementation of the algorithm in OpenCV. After you have them, you can estimate their length any way you want, based on whatever other constraints you have.
The problem is two-fold as I see it:
1) locate the start and end points from your starting position.
2) decide the length between the start and end points.
Since I don't know your input data, I assume it's pixel data with a 0..1 value on each pixel representing its "whiteness".
In order to find the end points, I would write some kind of walker/AI that tries to walk in different directions, knowing the original position and the last traversed direction, then continuing along that route until the "forward arc" is all white. This assumes the fiber is somewhat straight (is it?).
Once you have the start and end points, you can feed them into an A* path-finding algorithm, giving black pixels a low cost and white pixels a very high one. Then find the shortest distance between the start and end points; that is the length of the fiber.
It's kind of hard to give more detail, since I have no idea what techniques you are going to use, and there is no example input data.
Assumptions:
- This image can be considered a binary image where there are only 0s (black) and 1s (white).
- All the fibers are straight, and their starting and ending points are on the borders.
- We can come up with a limit for the thickness of a fiber (the thickness of the white lines).
Under these assumptions:
Start scanning the image border (start wherever you want, in whichever direction you want... just be consistent) until you encounter the first white pixel. At this point your program will understand that this is definitely a starting point. Knowing this, you gather all the white pixels until you reach a certain limit (or threshold). The idea here is that if there is a fiber, you will get the angle between the fiber and the border the starting point is on... of course, the more pixels you get (the further in you get), the surer you will be in the end. This is the trickiest part. After somehow ending up with a line... you need to calculate the angle (basic trigonometry). Since you know the starting point, the width/height of the image, and the angle (or the cos/sin of it), you will have the exact coordinate of the end point. Be advised... "exact" here is not quite what you might expect, because we may (in fact, we will) have calculation errors in the cos/sin values. So you need to hold the threshold as long as possible. Your end point will therefore not really be a point, but rather an area indicating the possibility that the ending point is somewhere inside it. The rest is just simple maths.
Obviously you can put much more detail into this method, like checking both of the white lines that make up the fiber and deciding which one is longer, or allowing some margin for error since those lines will not be properly straight... this is where a conceptual thickness comes into play, etc.
Programming:
C# has nice stuff that is easy for you to use... I'll put some code here...
newBitmap = new Bitmap(openFileDialog1.FileName);
for (int x = 0; x < newBitmap.Width; x++)
{
    for (int y = 0; y < newBitmap.Height; y++)
    {
        Color originalColor = newBitmap.GetPixel(x, y); // gets the pixel value...
        //things go here...
    }
}
You'll get the image from an OpenFileDialog and load it into a Bitmap. Inside the nested for loop, this code scans the image left-to-right, one column at a time; you can change this, however.
Since you know C++ and C, I would recommend OpenCV. It is open-source, so if you don't trust anyone like me, you won't have a problem ;). Also, if you want to use C# as @VictorS mentioned, I would use EmguCV, which is the C# equivalent of OpenCV. Tutorials for OpenCV are included, and tutorials for EmguCV can be found on their website. Hope this helps!
Download and install the latest version of 3D Slicer.
Load your data and go to the package > EM Segmenter without Atlas.
Define your anatomical tree with 2 different labels: the black one, which is your target, and the white edges.
Then choose the whole 2D image as your ROI and click on Segment.
Here is the result; I labeled the edges in green and the black area in white.
You can modify your tree and change the structures you define.
You can give more samples to your segmentation to make it more accurate.
