OK so I don't even know how to research my problem because I don't know any terms to describe it. So here it is:
I am using an IMU to measure the lying angle and bed inclination of bedbound patients. Is there a way for me to show these results graphically? I mean, is it possible to have a dummy on the screen whose lying angle I can control using C?
I would greatly appreciate it if you could let me know of any possible ways to do this.
Is there a way to change the Clutter perspective for a given container or widget?
The Clutter perspective controls how all the Clutter actors on the screen are displayed when rotated, translated, scaled, etc.
What I would really like to do is to change the perspective's origin from the center of the screen to another coordinate.
I have messed with a few of the stage methods. However, I haven't had much luck understanding some of the results, and often I hit some stability issues.
I know there are transformation matrices that do all the logic under the hood, and there are documented ways to change those matrices. Honestly, I haven't researched much further and just thought I would ask for guidance before spending a lot of time on it.
Which leads me to another question regarding the matrices and transformations: can one of these matrices be used to skew an actor, or deform it into a trapezoid, etc.? Any idea how to get started on that, i.e. what would a skew matrix look like?
Finally, does anyone know why the clip path was deprecated? It seems that it would have worked for what I ultimately want to do: draw irregularly shaped 2D objects on the screen. If I can implement an answer to question 2, then I guess a clip box combined with a transformation could be used here.
1. I do not know if (or how) one might change the Clutter stage's focal point.
2. A skew or shear transformation matrix is easy enough to construct, and it can be applied through the GJS Clutter functions Clutter.Actor.set_transform(T) and Clutter.Actor.set_child_transform(T), where T is a Clutter.Matrix.
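For reference, here is what such a matrix looks like, as a Clutter-agnostic NumPy sketch (the shear factor 0.5 is just an example value; wiring the matrix into Clutter.Actor.set_transform() is left out):

```python
import numpy as np

def shear_x(k):
    """4x4 shear matrix: x' = x + k*y, with y and z unchanged."""
    m = np.identity(4)
    m[0, 1] = k
    return m

# Shearing the unit square's corners (homogeneous column vectors)
# turns the square into a parallelogram.
corners = np.array([[0, 0, 0, 1],
                    [1, 0, 0, 1],
                    [1, 1, 0, 1],
                    [0, 1, 0, 1]], dtype=float).T
print(shear_x(0.5) @ corners)
```

Note that an affine shear like this only ever produces parallelograms; deforming an actor into a trapezoid requires a projective transform, i.e. non-zero entries in the matrix's bottom (perspective) row.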
This does, however, present another problem for the current codebase, which leads to another question (I guess I should post it somewhere else): when a transform is set on a Clutter actor (or its children), the rest of the actor's properties are ignored. This has the added effect that the Tweener library cannot be used to animate those properties.
3. Finally, one can use Cairo to draw irregularly shaped objects and paths on a Clutter actor; however, the reactive area of the actor (i.e. for mouse-enter and -leave events) will still be the entire actor, not the region defined by the Cairo path.
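As a rough illustration of the path-drawing part, here is the same idea in standalone pycairo (the Clutter actor wiring is omitted; in Clutter these calls would go inside the actor's draw handler):

```python
import cairo

# Trace and fill an irregular five-sided shape.
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 200, 200)
ctx = cairo.Context(surface)

ctx.move_to(20, 180)
ctx.line_to(60, 30)
ctx.line_to(120, 80)
ctx.line_to(180, 40)
ctx.line_to(150, 170)
ctx.close_path()

ctx.set_source_rgb(0.2, 0.5, 0.8)
ctx.fill()
surface.write_to_png("irregular.png")
```

Either way, as noted above, the reactive area stays the actor's full rectangle.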
I am new to WPF and C# as a whole. I have experience with languages like PHP, HTML, and JavaScript, so I was able to adapt quickly.
I have a project that prints PINs and serial numbers on cards. Let's say the paper is A4, and on it there are 4 rows and 3 columns of printed cards.
My problem is that I don't really know how to generate the dynamic document or which approach to use. The content can't simply be laid out on the paper beforehand; each item has to be placed at a particular position on the page, and the measurements in inches must be strictly adhered to.
The link below illustrates what I am talking about (borders excluded). All I want to generate is the PINs and serials laid out this way on a specific paper size.
http://ecloudpack.com/grid.png
Don't get me wrong: with FlowDocuments I think I could produce something, but I am faced with the challenges of positioning content precisely on the paper, getting the pagination right, and making sure the specified margins are what actually gets generated.
I have a Monday deadline and I have been trying.
Is there anyone who can help?
I have an image. I have to find the height of a particular object in it.
If we directly take the pixel length, it will not give the exact height. How should I approach this problem?
After you have calibrated your camera, you will have a transformation from the image plane to world coordinates. Using this information you can estimate the height of the object you are looking for; of course, at this step you somehow need to identify the object you are interested in.
Generally speaking, this question is too broad and covers many fundamental concepts of computer vision, so please consult your favorite textbook before attacking the problem.
An alternative approach: place the object on top of an A4 paper sheet and take a picture from above. Since you know the size of the paper, you can calculate the size of the object from that.
To detect a paper sheet, check this post or this.
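A minimal sketch of the arithmetic, assuming the camera looks straight down and the object lies roughly in the plane of the paper (the pixel measurements are made-up values for illustration):

```python
# An A4 sheet is 210 mm x 297 mm; suppose its long edge spans 1400 px.
paper_mm = 297.0
paper_px = 1400.0               # hypothetical measured span in the photo
mm_per_px = paper_mm / paper_px

object_px = 520.0               # hypothetical measured pixel height
object_mm = object_px * mm_per_px
print(f"Estimated height: {object_mm:.1f} mm")   # ~110.3 mm
```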
I'm using GeoDjango with PostGIS and trying to use a polygon to get records from a database which fall inside it.
If I define a polygon covering more than half the area of the earth, PostGIS assumes the 'inside' of my polygon is the smaller area (which I intended as the 'outside') and returns only results which fall outside my intended polygon.
I can just use this smaller, wrong area to exclude results instead. Polygon.area seems to know what I intend, so I can use it to decide whether to make my search inclusive or exclusive; a sketch of this follows below. I feel like this problem is probably common; is there a better way to solve it?
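A sketch of that heuristic in GeoDjango (the model Place and field location are hypothetical; in SRID 4326 the whole globe spans 360 x 180 square degrees):

```python
from django.contrib.gis.geos import Polygon

poly = Polygon(coords, srid=4326)   # coords: your ring of (lon, lat) pairs
WORLD_AREA = 360.0 * 180.0          # square degrees in EPSG:4326

if poly.area <= WORLD_AREA / 2:
    results = Place.objects.filter(location__within=poly)
else:
    # PostGIS treats the smaller complement as the polygon's inside,
    # so invert the query instead.
    results = Place.objects.exclude(location__within=poly)
```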
Update: If 180 degrees longitude falls inside my polygon, this doesn't work at all. It seems GEOS is to blame this time. This image shows what I believe is the reason: green is the polygon I define, red is how it seems to be interpreted. Again, this seems like a problem that would crop up often, and one that libraries like GEOS are made to deal with. Is there a way?
Alright, no answers. Here's what I've done.
Because GEOS doesn't like things crossing the 180th meridian:
First, check whether the polygon crosses the 180th meridian. If so, break it into two polygons along that line.
Because PostGIS assumes a polygon is as small as possible, you can't make one cover more than half the world, so:
Check whether the polygon, or each of the split polygons, covers half the world or more. If so, break it in half again.
Construct a MultiPolygon from the results.
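A rough Shapely sketch of the antimeridian split, assuming the polygon's longitudes have been normalised to [0, 360) so the crossing sits at lon = 180 (the half-world split in the second step can reuse the same box-intersection trick along another meridian):

```python
from shapely.geometry import MultiPolygon, Polygon, box

def split_at_antimeridian(poly):
    """Split a polygon crossing lon = 180 into two pieces.

    Handles the simple case where each side of the cut is one polygon;
    degenerate intersections would need extra care.
    """
    west = poly.intersection(box(0, -90, 180, 90))
    east = poly.intersection(box(180, -90, 360, 90))
    # Shift the eastern piece back into [-180, 180) longitudes.
    east = Polygon([(lon - 360.0, lat) for lon, lat in east.exterior.coords])
    return MultiPolygon([west, east])
```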
I have a sequence of images taken from a camera. The images consist of a hand and its surroundings. I need to remove everything except the hand.
I am new to image processing. Would anyone help me with the above question? I am comfortable using C and MATLAB.
A really simple approach, if you have a stationary background and a moving hand (and quite a few images!), is simply to subtract the average of the image set from each image. If nothing else, it's a gentle introduction to MATLAB.
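In NumPy terms the whole pipeline is a few lines (a toy sketch with synthetic frames; the MATLAB version is likewise just a mean, a subtraction, and a comparison):

```python
import numpy as np

# Fake a (num_frames, height, width) stack: a static grey background
# plus a bright blob sweeping across like a moving hand.
rng = np.random.default_rng(0)
frames = np.full((50, 120, 160), 100.0) + rng.normal(0, 2, (50, 120, 160))
for i in range(50):
    frames[i, 40:60, 2 * i:2 * i + 20] += 80.0

background = frames.mean(axis=0)                 # per-pixel temporal average
foreground = np.abs(frames - background) > 30.0  # threshold is scene-dependent
```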
The name of the problem you are trying to solve is "image segmentation". The Wikipedia page on image segmentation is a good start.
If lighting consistency isn't a problem for you, I'd suggest starting with simple RGB thresholding and see how far that gets you before trying anything more complicated.
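For instance, a bare-bones OpenCV version (the bounds are placeholders to tune against your own lighting and skin tones):

```python
import cv2
import numpy as np

img = cv2.imread("frame.png")                      # OpenCV loads BGR

lower = np.array([40, 50, 90], dtype=np.uint8)     # hypothetical lower bounds
upper = np.array([160, 170, 255], dtype=np.uint8)  # hypothetical upper bounds
mask = cv2.inRange(img, lower, upper)              # 255 where within range
hand = cv2.bitwise_and(img, img, mask=mask)        # keep only masked pixels
```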
Have a look at OpenCV, a FOSS library for computer vision applications. Specifically, see the Video Surveillance module. For a walk through of background subtraction in MATLAB, see this EETimes article.
Can you specify what kind of images you have? Is the background moving or static? For a static background it is fairly straightforward: you simply subtract the incoming image from the background image, and you can use some morphological operations to clean up the result. It all depends on the quality of the images you have. If you have a moving background, I would suggest colour-based segmentation: convert the image to YCbCr, then threshold appropriately. I know there are some papers available on this (however, I don't have time to locate them); I suggest reading them first. Here is one link which might help you; read the skin segmentation part.
http://www.stanford.edu/class/ee368/Project_03/Project/reports/ee368group08.pdf
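A minimal OpenCV sketch of that YCbCr thresholding (the Cr/Cb bounds are commonly cited skin ranges, not values taken from the linked report; verify them against your own data):

```python
import cv2
import numpy as np

img = cv2.imread("frame.png")
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)  # note OpenCV's Y, Cr, Cb order

# Any luma; Cr in [133, 173], Cb in [77, 127].
lower = np.array([0, 133, 77], dtype=np.uint8)
upper = np.array([255, 173, 127], dtype=np.uint8)
skin_mask = cv2.inRange(ycrcb, lower, upper)
```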
Background subtraction is simple to implement (estimate the background as the average of all frames, then subtract each frame from the background and threshold the resulting absolute difference), but unfortunately it only works well if: 1. the camera has manual gain and exposure; 2. lighting conditions do not change; 3. the background is stationary; and 4. the background is visible for much longer than the foreground.
Given your description, I assume these conditions do not hold, so what you can use, as already pointed out, is colour as a means of segmenting foreground from background. Since it's a hand you are trying to isolate, your best bet is to learn the hand colour; OpenCV provides some means of doing this. If you want to do it yourself, you take the colour of some of the hand pixels (you would need to specify these manually for at least one frame), convert them to hue (which encapsulates the colour in a brightness-independent way; skin colour has a very constant hue), and build a hue histogram. Then compare the rest of the pixels against this histogram and decide whether each pixel's hue is similar enough.
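A sketch of that hue-histogram idea using OpenCV's histogram back-projection (the patch coordinates are hypothetical; you would mark a known-skin region by hand in at least one frame):

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# A hand-marked patch known to contain skin (made-up coordinates).
patch = hsv[100:140, 200:240]

# Histogram over the hue channel only; skin hue is compact, so 32 bins suffice.
hist = cv2.calcHist([patch], [0], None, [32], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

# Score every pixel by how well its hue matches the learned histogram,
# then threshold the likelihood into a binary skin mask.
likelihood = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
_, skin_mask = cv2.threshold(likelihood, 50, 255, cv2.THRESH_BINARY)
```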