XGetImage takes a lot of time to run - c

XGetImage takes 3-4 seconds to execute and completely freezes X11
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>

int main(void)
{
    Display *display = XOpenDisplay(NULL);
    if (!display) { fprintf(stderr, "unable to connect to display\n"); return 7; }

    Window w;
    int x, y, i;
    unsigned m;

    Window root = XDefaultRootWindow(display);
    if (!root) { fprintf(stderr, "unable to open rootwindow\n"); return 8; }
    //sleep(1);
    if (!XQueryPointer(display, root, &root, &w, &x, &y, &i, &i, &m)) {
        printf("unable to query pointer\n");
        return 9;
    }

    XImage *image;
    XWindowAttributes attr;
    XGetWindowAttributes(display, root, &attr);
    image = XGetImage(display, root, 0, 0, attr.width, attr.height, AllPlanes, XYPixmap);
    XCloseDisplay(display);
    if (!image) { printf("unable to get image\n"); return 10; }
    return 0;
}
In the Xorg log:
[ 13234.693] AUDIT: Thu Jan 7 20:12:13 2016: 3856: client 45 connected from local host ( uid=500 gid=500 pid=12993 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 153
[ 13238.774] AUDIT: Thu Jan 7 20:12:18 2016: 3856: client 45 disconnected
time:
real 0m4.080s
user 0m0.002s
sys 0m0.007s
Ideally I want this function to run in less than 0.1 seconds

XYPixmap is a very specialized format that doesn't have many uses. You should use ZPixmap nearly always.
XYPixmap works plane by plane. What does that mean? Take bit 0 of every pixel and tightly pack all those bits into an array of unsigned int: that's your plane 0. Then take bit 1 of every pixel and pack all those bits into an array: that's your plane 1. Then take bit 2 of every pixel...
Framebuffer
 __________________________________________________________________
/
  Pixel 0                  Pixel 1                  Pixel 2
[0][1][2][3][4][5][6][7] [0][1][2][3][4][5][6][7] [0][1][2]....
 |                        |                        |
 |  +---------------------+                        |
 |  |                                              |
 |  |  +-------------------------------------------+
 |  |  |
 v  v  v
[0][0][0].....   \
  (Plane 0)       |
                  |
[1][1][1]....     |  Result
  (Plane 1)       |
 ....             |
[7][7][7]....     |
  (Plane 7)       |
                 /
If your framebuffer is stored like this, which is the case for most modern hardware, that's a lot of bit manipulation!
The picture shows 8 bit pixels, but it's the same for any other depth.
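To make the cost concrete, here is a rough illustrative sketch (not Xlib code; the names are made up) of what packing just one plane of an 8-bit image involves:

#include <stddef.h>

/* Pack bit `b` of every 8-bit pixel into a tightly packed bit array
 * (one plane). Illustrative only; the X server has to do this kind of
 * work for every plane before XGetImage can return an XYPixmap. */
void pack_plane(const unsigned char *pixels, size_t n,
                unsigned b, unsigned char *plane)
{
    for (size_t p = 0; p < n; p++)
        if ((pixels[p] >> b) & 1)
            plane[p / 8] |= (unsigned char)(1u << (p % 8));
}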
ZPixmap on the other hand takes entire pixels and stuffs them into an array:
Framebuffer
 __________________________________________________________________
/
  Pixel 0                  Pixel 1                  Pixel 2
[0][1][2][3][4][5][6][7] [0][1][2][3][4][5][6][7] [0][1][2]....
 |  |  |  |  |  |  |  |   |  |  |  |  |  |  |  |   |  |  |
 v  v  v  v  v  v  v  v   v  v  v  v  v  v  v  v   v  v  v
[0][1][2][3][4][5][6][7] [0][1][2][3][4][5][6][7] [0][1][2]....
\___________________________________________________________________/
                               Result
This is simple direct copying, which should be very fast.
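A minimal sketch of the suggested fix, identical to the question's capture call except for the format argument:

/* Request whole pixels instead of per-plane bitmaps. */
image = XGetImage(display, root, 0, 0, attr.width, attr.height,
                  AllPlanes, ZPixmap);
/* ... use image->data ... */
XDestroyImage(image);   /* free the client-side copy when done */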

Related

Draw a sine wave using C?

#include <conio.h>
#include <math.h>
#include <graphics.h>
#include <dos.h>

int main() {
    int gd = DETECT, gm;
    int angle = 0;
    double x, y;

    initgraph(&gd, &gm, "C:\\TC\\BGI");
    line(0, getmaxy() / 2, getmaxx(), getmaxy() / 2);
    /* generate a sine wave */
    for (x = 0; x < getmaxx(); x += 3) {
        /* calculate y value given x */
        y = 50 * sin(angle * 3.141 / 180);
        y = getmaxy() / 2 - y;
        /* color a pixel at the given position */
        putpixel(x, y, 15);
        delay(100);
        /* increment angle */
        angle += 5;
    }
    getch();
    /* deallocate memory allocated for graphics screen */
    closegraph();
    return 0;
}
This is the program. Why are we incrementing the angle, and how is this angle relevant to the graph? I changed the value of angle to 0 and the wave became a straight line. I want to know what is happening with this increment.
Why are we incrementing the angle and how is this angle relevant to the graph?
The sine function takes an angle as its argument, typically in radians. This program keeps the angle in degrees, so it gets scaled to radians the moment it is passed to sin().
The sine function is periodic: it repeats itself after 2*pi radians, or 360 degrees:
+---------+---------+------------+
|       angle       | sin(angle) |
+---------+---------+            |
| Radians | Degrees |            |
+---------+---------+------------+
|    0    |     0   |      0     |
|  1/2*pi |    90   |      1     |
|    pi   |   180   |      0     |
|  3/2*pi |   270   |     -1     |
|  2*pi   |   360   |      0     |
|  5/2*pi |   450   |      1     |
|  3*pi   |   540   |      0     |
|  7/2*pi |   630   |     -1     |
|  4*pi   |   720   |      0     |
|   ...   |   ...   |    ...     |
+---------+---------+------------+
and so on ...
changed the value of angle to 0 and the wave became a straight line
The result of sin(0) is 0, so if the angle never changes, every computed y is the same and the wave degenerates into a straight line.
For the mathematical derivation you might like to have a look here.
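To see what the increment does without the BGI dependency, here is a small standalone sketch (a screen width of 640 is assumed, purely for illustration) that prints the values the program plots; note that y changes only because angle is incremented:

#include <math.h>
#include <stdio.h>

int main(void) {
    int angle = 0;
    /* step x across the screen; each step advances the angle by 5 degrees */
    for (double x = 0; x < 640; x += 3) {
        double y = 50 * sin(angle * 3.141 / 180);  /* degrees -> radians */
        printf("x=%6.1f  angle=%4d  y=%7.2f\n", x, angle, y);
        angle += 5;  /* with angle fixed at 0, y stays 0: a straight line */
    }
    return 0;
}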

Bitmap scaling while keeping centered

I'm developing an image viewer on an embedded system (stm32f4) with multi-touch support. To put it into perspective, the image viewer should have functionality similar to what you find on a smartphone.
I have completed the image scaling and the pinch/zoom gesture recognition. But it only scales from the source image's origin coordinates. So if my origin (x,y = [0, 0]) is the top left, it scales around that point, and if I want to look at something in the bottom-right corner I have to pinch and then pan to the location I want, which is undesirable. Here's how it looks: http://i.imgur.com/IWR4wls.gifv. It should follow the center of the two fingers.
How can I achieve scaling that follows the center of the two fingers? My attempts resulted in a working but jumpy, shaky version that was basically unusable.
My scaling works by keeping the source image open in RAM, taking the [x,y] coordinates of a rect in the source image and the [w,h] of the rect I want to zoom, scaling that rect to the display [w,h], and displaying it. Moving (pan gesture) is done by moving the zoomRect's [x,y] coordinates within the source image. That means every time I move a finger I have to move the zoomedRect (increase [x,y]), scale it, and display it. The fully scaled image is never stored, because of limited RAM.
Source image width,height[640, 480]:
+-------------------------------+
| zoomedRect |
| +--------------+ |
| | | |
| | | |
| | | |
| | | |
| | | |
| +--------------+ |
| |
| |
| |
+-------------------------------+
I take the zoomedRect, e.g. x,y = [50, 50], width,height = [160, 120],
and scale it to the display size w,h = [640, 480].
Display:
+-------------------------------+
| |
| |
| |
| |
| |
| |
| |
| |
| |
+-------------------------------+
Here's what I have/can calculate:
The center of the two fingers.
That center translated to the source image (even when zoomed in).
The scale as a fraction (1.32, 1.45, ...), which equals (source image width|height) / (zoomedRect width|height).
Edit:
I have tried calculating the center point when the two fingers first touch and then using that center for all subsequent calculations, and that's where the jumping and shaking happen. But maybe there is an error on my side, so I'll add some relevant code.
As I mentioned earlier, I change the zoomRect width and height to scale the image:
newWidth = tempWidth / scale;
newHeight = tempHeight / scale;
So after zooming I also need to move toward the center, and I do that by calculating how much the size changed:
(lastWidth - newWidth)
newSourceX = newSourceX + ((int16_t)lastWidth - newWidth);
newSourceY = newSourceY + ((int16_t)lastHeight - newHeight);
Now we need to stop at the calculated center between the two fingers and not go out of bounds at [0, 0]:
#define LIMIT(value, min, max) (value < min ? min : (value > max ? max : value))
newSourceX = LIMIT(newSourceX + ((int16_t)lastWidth - newWidth), 0, centerSourceX);
newSourceY = LIMIT(newSourceY + ((int16_t)lastHeight - newHeight), 0, centerSourceY);
So far so good, but it is not properly centered.
I calculate the center between the two fingers on the Display and translate it to the Source.
Because it's fully zoomed out (Source and Display are identical), the centers are also in identical positions.
Display/Source:
+-------------------------------+
| |
| |
| |
| |
| |
| centerPos |
| * |
| |
| |
+-------------------------------+
So if we zoom in, the zoomRect ends up newWidth / 2 and newHeight / 2 further than needed.
Source:
+-------------------------------+
| |
| |
| |
| |
| |
| centerPos |
| *---------+
| | zoomRect|
| | |
+---------------------+---------+
To account for this, I modify the code as follows:
newSourceX = LIMIT(newSourceX + ((int16_t)lastWidth - newWidth), 0, centerSourceX - newWidth / 2);
newSourceY = LIMIT(newSourceY + ((int16_t)lastHeight - newHeight), 0, centerSourceY - newHeight / 2);
Success!!! Or not?
Source:
+-------------------------------+
| |
| |
| |
| zoomRect |
| +---------+ |
| |centerPos| |
| | * | |
| +---------+ |
| |
+-------------------------------+
When zooming from 1:1 it works perfectly, but when I zoom again while already zoomed in a little, the "jump" happens because:
newSourceX + ((int16_t)lastWidth - newWidth) > centerSourceX - newWidth / 2
Result: http://i.imgur.com/x1t6X2q.gifv
You have a source picture, and source coordinates (x, y) are transformed into screen coordinates by an affine transformation. I assume that scaling is uniform. At the beginning, Scale = 1 and dx = dy = 0:
ScrX = (x - dx) * Scale
ScrY = (y - dy) * Scale
So we see the piece of the source image that was cut starting from the point (dx, dy) and enlarged Scale times. For example, with dx = 1, dy = 1, Scale = 2, the source rect maps to a screen rect twice its size.
Let the finger positions on screen at the beginning of the zoom (prefix b) be (bx0, by0) and (bx1, by1), opposite corners of a rectangle, and let the ending (or intermediate) positions (prefix e) be (ex0, ey0) and (ex1, ey1).
Let the diagonal of that rectangle correspond to the degree of scaling:
eScale = bScale * Sqrt(((ex1-ex0)^2 + (ey1-ey0)^2) / ((bx1-bx0)^2 + (by1-by0)^2))
//use Math.Hypot or Vector.Length if available
So we have the new scale.
We also have the beginning and ending central points:
bcx = (bx0 + bx1) / 2
bcy = (by0 + by1) / 2
ecx = (ex0 + ex1) / 2
ecy = (ey0 + ey1) / 2
Both of these screen points should correspond to the same source coordinate:
bcx = (xx - bdx) * bScale
ecx = (xx - edx) * eScale
Eliminating xx, we get
edx = bdx + bcx / bScale - ecx / eScale
and a similar formula for edy. So we have the new shift parameters.
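A compact C sketch of this update step (the names are illustrative; b holds the transform parameters captured when the pinch began, the b-coordinates are the finger positions at that moment, and the e-coordinates are the current ones):

#include <math.h>

typedef struct { float scale, dx, dy; } View;

/* Update scale and shift so the source point under the fingers'
 * center stays under it as the fingers move. */
View pinch_update(View b,
                  float bx0, float by0, float bx1, float by1,
                  float ex0, float ey0, float ex1, float ey1)
{
    View e;
    /* new scale from the ratio of the finger-span diagonals */
    e.scale = b.scale * hypotf(ex1 - ex0, ey1 - ey0)
                      / hypotf(bx1 - bx0, by1 - by0);
    /* centers of the two fingers, beginning and end */
    float bcx = (bx0 + bx1) * 0.5f, bcy = (by0 + by1) * 0.5f;
    float ecx = (ex0 + ex1) * 0.5f, ecy = (ey0 + ey1) * 0.5f;
    /* the same source point must sit under both centers */
    e.dx = b.dx + bcx / b.scale - ecx / e.scale;
    e.dy = b.dy + bcy / b.scale - ecy / e.scale;
    return e;
}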
Perhaps lock the center point of the image from the first touch: calculate the center of the two fingers once, and then scale from that point rather than recomputing the center each time. Take it as the zoom point, and let the movement of the fingers away from it give the zoom factor; that should stop the shaking effect you mentioned.

What are the reasons for this benchmark result?

Two functions that convert an RGB image to a grayscale image:
function rgb2gray_loop{T<:FloatingPoint}(A::Array{T,3})
    r,c = size(A)
    gray = similar(A,r,c)
    for i = 1:r
        for j = 1:c
            @inbounds gray[i,j] = 0.299*A[i,j,1] + 0.587*A[i,j,2] + 0.114*A[i,j,3]
        end
    end
    return gray
end
And:
function rgb2gray_vec{T<:FloatingPoint}(A::Array{T,3})
    gray = similar(A,size(A)[1:2]...)
    gray = 0.299*A[:,:,1] + 0.587*A[:,:,2] + 0.114*A[:,:,3]
    return gray
end
The first one is using loops, while the second one uses vectorization.
When benchmarking them (with the Benchmark package) I get the following results for different sized input images (f1 is the loop version, f2 the vectorized version):
A = rand(50,50,3):
| Row | Function | Average | Relative | Replications |
|-----|----------|-------------|----------|--------------|
| 1 | "f1" | 3.23746e-5 | 1.0 | 1000 |
| 2 | "f2" | 0.000160214 | 4.94875 | 1000 |
A = rand(500,500,3):
| Row | Function | Average | Relative | Replications |
|-----|----------|------------|----------|--------------|
| 1 | "f1" | 0.00783007 | 1.0 | 100 |
| 2 | "f2" | 0.0153099 | 1.95527 | 100 |
A = rand(5000,5000,3):
| Row | Function | Average | Relative | Replications |
|-----|----------|----------|----------|--------------|
| 1 | "f1" | 1.60534 | 2.56553 | 10 |
| 2 | "f2" | 0.625734 | 1.0 | 10 |
I expected one function to be faster than the other (maybe f1, because of the @inbounds macro), but I can't explain why the vectorized version gets faster for larger images.
Why is that?
The reason for these results is that multidimensional arrays in Julia are stored in column-major order; see the Julia documentation on memory order.
Fixed loop version, respecting column-major order (inner and outer loop variables swapped):
function rgb2gray_loop{T<:FloatingPoint}(A::Array{T,3})
    r,c = size(A)
    gray = similar(A,r,c)
    for j = 1:c
        for i = 1:r
            @inbounds gray[i,j] = 0.299*A[i,j,1] + 0.587*A[i,j,2] + 0.114*A[i,j,3]
        end
    end
    return gray
end
New results for A = rand(5000,5000,3):
| Row | Function | Average | Relative | Replications |
|-----|----------|----------|----------|--------------|
| 1 | "f1" | 0.107275 | 1.0 | 10 |
| 2 | "f2" | 0.646872 | 6.03004 | 10 |
And the results for smaller arrays:
A = rand(500,500,3):
| Row | Function | Average | Relative | Replications |
|-----|----------|------------|----------|--------------|
| 1 | "f1" | 0.00236405 | 1.0 | 100 |
| 2 | "f2" | 0.0207249 | 8.76671 | 100 |
A = rand(50,50,3):
| Row | Function | Average | Relative | Replications |
|-----|----------|-------------|----------|--------------|
| 1 | "f1" | 4.29321e-5 | 1.0 | 1000 |
| 2 | "f2" | 0.000224518 | 5.22961 | 1000 |
Just speculation, because I don't know Julia:
I think the statement gray = ... in the vectorized form creates a new array in which all the calculated values are stored, while the old array is scrapped. In f1 the values are overwritten in place, so no new memory allocation is needed. Memory allocation is quite expensive, so the loop version with in-place overwrites is faster for small sizes.
But memory allocation doesn't scale linearly (allocating twice as much doesn't take twice as long), while the vectorized version computes faster (maybe in parallel?), so once the sizes get big enough the faster computation makes more of a difference than the memory allocation.
I cannot reproduce your results.
See this IJulia notebook: http://nbviewer.ipython.org/urls/gist.githubusercontent.com/anonymous/24c17478ae0f5562c449/raw/8d5d32c13209a6443c6d72b31e2459d70607d21b/rgb2gray.ipynb
The numbers I get are:
In [5]:
@time rgb2gray_loop(rand(50,50,3));
@time rgb2gray_vec(rand(50,50,3));
elapsed time: 7.591e-5 seconds (80344 bytes allocated)
elapsed time: 0.000108785 seconds (241192 bytes allocated)
In [6]:
@time rgb2gray_loop(rand(500,500,3));
@time rgb2gray_vec(rand(500,500,3));
elapsed time: 0.021647914 seconds (8000344 bytes allocated)
elapsed time: 0.012364489 seconds (24001192 bytes allocated)
In [7]:
@time rgb2gray_loop(rand(5000,5000,3));
@time rgb2gray_vec(rand(5000,5000,3));
elapsed time: 0.902367223 seconds (800000440 bytes allocated)
elapsed time: 1.237281103 seconds (2400001592 bytes allocated, 7.61% gc time)
As expected, the looped version is faster for large inputs. Also note how the vectorized version allocated three times as much memory.
I also want to point out that the statement gray = similar(A,size(A)[1:2]...) is redundant and can be omitted.
Without this unnecessary allocation, the results for the largest problem are:
@time rgb2gray_loop(rand(5000,5000,3));
@time rgb2gray_vec(rand(5000,5000,3));
elapsed time: 0.953746863 seconds (800000488 bytes allocated, 3.06% gc time)
elapsed time: 1.203013639 seconds (2200001200 bytes allocated, 7.28% gc time)
So the memory usage went down, but the speed did not noticeably improve.

How to make gluLookAt and gluPerspective work like glOrtho?

I don't know how to set up the distance, i.e. where I should stand to look at my 2D scene (at whose center there is a ball at pos 1024/2, 768/2).
I use gluLookAt and gluPerspective to give my 2D rotated objects a more 3D feel.
anyway here is the code I use with glOrtho:
glMatrixMode ( GL_PROJECTION );
glLoadIdentity();
glOrthof ( 0, 1024, 768, 0, 0, 1000.0f );
glMatrixMode ( GL_MODELVIEW );
glLoadIdentity();
and this is my attempt to set it up with gluPerspective and gluLookAt:
glMatrixMode ( GL_PROJECTION );
glLoadIdentity();
gluPerspective(90,1024/768,0,300);
gluLookAt(1024 * 0.5,768 * 0.5f,-????, 1024 * 0.5,768 * 0.5,0, 0,-1,0);
glMatrixMode ( GL_MODELVIEW );
glLoadIdentity();
Basically I just want code that works the same. I am not sure how to set the fovy value of gluPerspective, or the ???? in gluLookAt. How do I project the full size, with width 1024 and height 768?
Well, glOrtho is supposed to yield a parallel projection, so using gluPerspective is going exactly the other way. If you're hoping to find a special case of gluPerspective that acts like glOrtho, the problem is that the matrices they generate differ in some ways you can't reach; note, in particular, the bottom-right corner of what each generates:
glOrtho:

| 2/(right-left)        0                 0           tx |
|       0         2/(top-bottom)          0           ty |
|       0               0           -2/(far-near)     tz |
|       0               0                 0            1 |

gluPerspective:

| f/aspect   0              0                           0              |
|    0       f              0                           0              |
|    0       0   (zFar+zNear)/(zNear-zFar)   2*zFar*zNear/(zNear-zFar) |
|    0       0             -1                           0              |
So it's going to be hard to set the 2/(top-bottom) and the bottom row correctly, for starters.
If this line is the core of your issue:
gluLookAt(1024 * 0.5,768 * 0.5f,-????, 1024 * 0.5,768 * 0.5,0, 0,-1,0);
...then just set the -???? to a positive value indicating the distance to your eye from the center of the scene (OpenGL's positive Z points towards the viewer).
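For what it's worth, here is a hedged sketch of one such setup. With fovy = 90 degrees, a plane of height h exactly fills the view when the eye is (h/2)/tan(45°) = h/2 away, so for the 768-pixel-high scene the distance works out to 384. This assumes the same y-down orientation as the glOrtho code above:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(90.0, 1024.0 / 768.0, 1.0, 1000.0);  /* zNear must be > 0 */
gluLookAt(1024 * 0.5, 768 * 0.5, 384.0,   /* eye: centered, 384 units out */
          1024 * 0.5, 768 * 0.5, 0.0,     /* looking at the scene center  */
          0.0, -1.0, 0.0);                /* up = -Y keeps screen y down  */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();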

Weighted random integers

I want to assign weightings to a randomly generated number, with the weightings represented below.
0 | 1 | 2 | 3 | 4 | 5 | 6
─────────────────────────
X | X | X | X | X | X | X
X | X | X | X | X | X |
X | X | X | X | X |   |
X | X | X | X |   |   |
X | X | X |   |   |   |
X | X |   |   |   |   |
X |   |   |   |   |   |
What's the most efficient way to do it?
@Kerrek's answer is good.
But if the histogram of weights is not all small integers, you need something more powerful:
Divide [0..1] into intervals sized with the weights. Here you need segments with relative size ratios 7:6:5:4:3:2:1. So the size of one interval unit is 1/(7+6+5+4+3+2+1)=1/28, and the sizes of the intervals are 7/28, 6/28, ... 1/28.
These comprise a probability distribution because they sum to 1.
Now find the cumulative distribution:
    P      x
 7/28  =>  0
13/28  =>  1
18/28  =>  2
22/28  =>  3
25/28  =>  4
27/28  =>  5
28/28  =>  6
Now generate a random number r in [0..1] and look it up in this table by finding the smallest x such that r <= P(x). That is the random value you want.
The table lookup can be done with binary search, which is a good idea when the histogram has many bins.
Note that you are effectively constructing the inverse cumulative distribution function, so this is sometimes called the method of inverse transforms.
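A small C sketch of this lookup, with the cumulative counts hard-coded from the 7:6:5:4:3:2:1 weights (rand() % 28 is used for brevity; a real implementation would avoid the slight modulo bias):

#include <stdlib.h>

static const int cum[] = {7, 13, 18, 22, 25, 27, 28};

int weighted_rand(void)
{
    int r = rand() % 28 + 1;   /* uniform in 1..28 */
    int lo = 0, hi = 6;
    while (lo < hi) {          /* binary search: smallest x with r <= cum[x] */
        int mid = (lo + hi) / 2;
        if (r <= cum[mid]) hi = mid;
        else               lo = mid + 1;
    }
    return lo;
}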
If your array is small, just pick a uniform random index into the following array:
int a[] = {0,0,0,0,0,0,0, 1,1,1,1,1,1, 2,2,2,2,2, 3,3,3,3, 4,4,4, 5,5, 6};
If you want to generate the distribution at runtime, use std::discrete_distribution.
To get the distribution you want, first you basically add up the count of X's you wrote in there. You can do it like this (my C is super rusty, so treat this as pseudocode):
int num_cols = 7; // for your example
int max;
if (num_cols % 2 == 0) // even
{
    max = (num_cols+1) * (num_cols/2);
}
else // odd
{
    max = (num_cols+1) * (num_cols/2) + ((num_cols+1)/2);
}
Then you need to randomly select an integer between 1 and max inclusive.
So if your random integer is r, the last step is to find which column holds the r'th X. Something like this should work:
for (int i = 0; i < num_cols; i++)
{
    r -= (num_cols - i);
    if (r < 1) return i;
}
