Generating print-ready content with WPF with paper precision

I am new to WPF and C# as a whole. I have experience with languages like PHP, HTML and JavaScript, so I was able to cope quickly.
I have a project that prints PINs and serial numbers on cards. Let's say the paper is A4, and on it there are 4 rows and 3 columns of printed cards.
My problem is that I don't really know how to generate the dynamic document, or which approach to use. The content can't simply be poured onto the paper; each item has to be placed at a particular position, and the measurements in inches must be strictly adhered to.
The link below illustrates what I am talking about (borders excluded). All I want to generate is the PIN and serial laid out this way on a specific paper size.
http://ecloudpack.com/grid.png
Don't get me wrong: with FlowDocuments I think I could produce something, but I am faced with the challenge of positioning content precisely on the paper, making sure the pagination is correct, and making sure the margins come out exactly as specified.
I have a Monday deadline and I have been trying.
Is there anyone who can help?
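For reference, WPF lays everything out in device-independent units (96 per inch), so exact inch positions map directly onto coordinates of a FixedPage. A minimal sketch of that route (the 3x4 grid, the 0.25" inset and the sample PIN/serial text are all placeholders, not the asker's data):

    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Documents;

    var doc = new FixedDocument();
    double pageW = 8.27 * 96, pageH = 11.69 * 96;     // A4 in device-independent units
    double cellW = pageW / 3, cellH = pageH / 4;      // 3 columns x 4 rows of cards

    var page = new FixedPage { Width = pageW, Height = pageH };
    for (int row = 0; row < 4; row++)
    {
        for (int col = 0; col < 3; col++)
        {
            var card = new TextBlock
            {
                Text = $"PIN: 0000-0000\nS/N: {row * 3 + col}",   // placeholder values
                FontSize = 14
            };
            FixedPage.SetLeft(card, col * cellW + 0.25 * 96);     // 0.25 inch inset into the cell
            FixedPage.SetTop(card, row * cellH + 0.25 * 96);
            page.Children.Add(card);
        }
    }
    doc.Pages.Add(new PageContent { Child = page });

    // For more than 12 cards, start a new FixedPage the same way; pagination stays under your control.
    new PrintDialog().PrintDocument(((IDocumentPaginatorSource)doc).DocumentPaginator, "PIN cards");

Unlike a FlowDocument, a FixedDocument never reflows, so the margins and positions you compute are exactly what reaches the printer.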

Related

Read bytes of interlaced PNG in C

I've been trying to make my own PNG reader in C, and I've gotten to the point where it works normally, but I want to expand its support. I noticed that, while uncommon, interlaced PNGs do exist. The goal of my reader is simply to return an array of RGBA values corresponding to each pixel, in top-left to bottom-right order; I would not be creating or displaying any PNGs.
Based on what I saw on the Wikipedia page for Adam7 interlacing (the type PNGs use), I would most likely have to add support for it by organizing the final array according to Adam7. However, when I reviewed the "Interlacing and Progressive Display" section of libpng (one of the sources I was using), it stated:
Note that, although I've described the method in terms of 8 × 8 tiles, pixels for any given pass are stored as complete rows, not as tiled groups. For example, the fifth pass consists of every other pixel in the entire third row of the image, followed by every other pixel in the seventh row, and so on.
This makes it seem as though the interlacing is meant for the program that displays the image, rather than the information itself being stored using the Adam7 algorithm.
My question now is: if I'm not displaying the PNG, does interlacing matter? And if so, could someone provide an example of how to de-interlace the information? This aspect still confuses me.
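For what it's worth: yes, it still matters even if you never display anything, because the pixel data in the file is physically stored pass by pass, so a reader that ignores interlacing returns the pixels in the wrong order. A sketch of the re-ordering (the start offsets and increments are the standard Adam7 tables from the PNG specification; written in C# only for consistency with the other snippets on this page, the index arithmetic is identical in C). It assumes each pass has already been unfiltered and expanded to one RGBA value per pixel, concatenated pass after pass:

    static readonly int[] XStart = { 0, 4, 0, 2, 0, 1, 0 };   // starting column of each pass
    static readonly int[] YStart = { 0, 0, 4, 0, 2, 0, 1 };   // starting row of each pass
    static readonly int[] XStep  = { 8, 8, 4, 4, 2, 2, 1 };   // column increment
    static readonly int[] YStep  = { 8, 8, 8, 4, 4, 2, 2 };   // row increment

    static uint[] Deinterlace(uint[] passData, int width, int height)
    {
        var final = new uint[width * height];                 // top-left to bottom-right order
        int src = 0;
        for (int pass = 0; pass < 7; pass++)
        {
            // Within a pass, pixels are stored as complete rows, exactly as libpng describes.
            for (int y = YStart[pass]; y < height; y += YStep[pass])
                for (int x = XStart[pass]; x < width; x += XStep[pass])
                    final[y * width + x] = passData[src++];
        }
        return final;
    }

Note that each reduced pass has its own filter byte per scanline, so unfiltering has to happen per pass before this re-ordering step.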

Clutter: Perspective, Skew, and Matrices

Is there a way to change the Clutter perspective for a given container or widget?
The Clutter perspective controls how all the Clutter actors on the screen are displayed when rotated, translated, scaled, etc.
What I would really like to do is to change the perspective's origin from the center of the screen to another coordinate.
I have messed with a few of the stage methods. However, I haven't had much luck understanding some of the results, and often I hit some stability issues.
I know there are transformation matrices that do all the logic under the hood, and there are documented ways to change the transform matrices. Honestly, I haven't researched much further and just thought I would ask for guidance before spending a lot of time on it.
Which leads me to another question regarding the matrices and transformations. Can one of these matrices be used to skew an actor? Or deform it into a trapezoid, etc? And any idea how to get started on that, ie. what a skew matrix would look like?
Finally, does anyone know why the clip path was deprecated? It seems that it would have worked for what I ultimately want to do: draw irregularly shaped 2D objects on the screen. If I can implement an answer to question 2, then I guess a clip box with a transformation could be used here.
1. I do not know if (or how) one might change the Clutter stage's focal point.
2. A skew or shear transformation matrix is easy enough to construct (a sketch of what one looks like follows this list) and can be applied with the GJS Clutter functions Clutter.Actor.set_transform(T) and Clutter.Actor.set_child_transform(T), where T is a Clutter.Matrix.
This does, however, present another problem for the current codebase, and it leads to another question (I guess I should post it somewhere else): when a transform is set on a Clutter actor (or its children), the rest of the actor's properties are ignored. This has the added effect that the Tweener library cannot be used to animate those properties.
3. Finally, one can use Cairo to draw irregularly shaped objects and paths on a Clutter actor; however, the reactive area for the actor (i.e. mouse-enter and -leave events) will still be the entire actor, not the region defined by the Cairo path.
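To make item 2 concrete, here is what a simple skew (shear) matrix looks like, a sketch assuming the usual column-vector convention:

    \begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix}
    =
    \begin{pmatrix}
      1 & k & 0 & 0 \\
      0 & 1 & 0 & 0 \\
      0 & 0 & 1 & 0 \\
      0 & 0 & 0 & 1
    \end{pmatrix}
    \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix},
    \qquad x' = x + k\,y

Each point is displaced along x in proportion to its y coordinate, which turns a rectangle into a parallelogram; putting the factor k in the mirrored position shears along y instead. How Clutter.Matrix orders its sixteen floats is something to check against the Cogl documentation before filling the matrix in.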

WPF dynamically scale TextBlock Text without filling a container

I have a set of pages that look like this:
I have the content in Grids with "*" (star-sized) Heights and Widths, so the grid scales correctly when the entire window resizes. I would like the text to resize with the grid. Basically, I would like the user to be able to resize from this:
To this:
(preserving white space)
One way to do this would be to wrap the TextBlock in a Viewbox with margins on the right and bottom (for Grid.Row="3") to account for white space. But because I have several pages with different lengths and line counts, I would have to set the margin specifically for each page; otherwise the text sizes would differ on each page. Is there a better way to do this?
I don't think there is a better way to do this. There are different ways, but I don't think it is just a matter of opinion that they would not be better.
Ways I can think of:
Render your text offscreen and use RenderTargetBitmap on it so you've got a picture. Change your TextBlocks on screen to Images and stretch those.
Or
Work out the size your text wants to be, then do some calculation that comes up with a different font size which is "better". This is a lot easier to describe than to do.
In my opinion, a Viewbox is easier to implement, is way less error-prone than calculations, and will give at least as good results as rendering to a picture.
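For concreteness, a rough sketch of the Viewbox route from code-behind (the grid variable, the row index, the margin values and the caption are placeholders, not taken from the original post):

    var text = new TextBlock { Text = "Some caption", TextWrapping = TextWrapping.Wrap };
    var box = new Viewbox
    {
        Stretch = Stretch.Uniform,              // text scales proportionally with its grid cell
        Margin = new Thickness(0, 0, 20, 10),   // right/bottom margin to preserve white space
        Child = text
    };
    Grid.SetRow(box, 3);
    grid.Children.Add(box);

The same thing in XAML is even shorter; the per-page margin tuning the question mentions is the only fiddly part.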
I just want to add one more solution to the ones suggested by Andy, which is more of a scientific approach and takes a bit of practice to master.
Suppose you have to find a function F that maps one or more variables to a desired single value. In your case that would be a function F that takes the aspect ratio of the window as input and outputs an appropriate font size.
How can you find such a function?
Well... you don't need to do any math yourself!
First, you need some data to begin with:
1. Resize the window randomly
2. Calculate the aspect ratio (X)
3. Pick an appropriate font size that looks good enough (Y)
4. Repeat the measurement 7 to 10 times (sorry data scientists)
5. Enter the data in Excel - one column for X and another one for Y
6. Insert a scatter chart
7. Choose the best trendline for your data, but avoid the polynomial one
8. Display the trendline equation and use the expression in your code
Now I should mention the pros and cons of this regression technique.
Pros:
1. It can solve a wide range of tricky problems:
"I use this 3rd party control, but when the text is too long it overlaps the title bar. How to trim it so it doesn't go beyond the top border?. Deadline is coming!"
2. Even if it doesn't solve the problem perfectly, the results are often acceptable
3. It takes minutes to try out, unlike spending a day refreshing your math skills
Cons:
1. The biggest problem is that, to keep it simple, you often lower the number of variables by assuming some of them to be constant. In this post I have assumed, for example, that the font family won't change, and neither will the font weight.
2. If any of the assumptions does not hold, the final result could be even worse.
This technique is fragile but powerful. Use it as your last resort, and never leave a magic expression like fontSize = (int)(0.76 + 1.2 * aspectRatio) in your code without documenting how it came to be.
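For example, the fitted expression could be wired up in a SizeChanged handler, something like this (the coefficients are the illustrative ones from above, and MyTextBlock is a stand-in for the real element):

    private void Window_SizeChanged(object sender, SizeChangedEventArgs e)
    {
        double aspectRatio = e.NewSize.Width / e.NewSize.Height;
        // The trendline equation copied from Excel, documented so nobody wonders where it came from.
        double fontSize = 0.76 + 1.2 * aspectRatio;
        MyTextBlock.FontSize = Math.Max(1.0, fontSize);   // FontSize must stay positive
    }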

Find the height of an object using OpenCV

I have an image. I have to find the height of a particular object in it.
If we take the pixel length directly, it will not give the exact height. How should I approach this problem?
After you have calibrated your camera, you will have a transformation from the image plane to world coordinates. Using this information you can predict the height of the object you are looking for; of course, in this step you somehow need to identify the object you are interested in.
Generally speaking, this question is too broad and covers many fundamental concepts of computer vision, so please consult your favorite textbook before attacking the problem.
An alternative approach: place the object on top of an A4 paper sheet and take a picture from above. Since you know the size of the paper, you can calculate the size of the object from it.
To detect a paper sheet, check this post or this.
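The arithmetic behind the A4 trick is just a scale factor. With hypothetical measurements (any language works; C# only for consistency with the other snippets here):

    // Hypothetical numbers: the sheet's 297 mm side measures 1480 px in the photo,
    // and the object spans 395 px along the same direction, lying in the same plane.
    double mmPerPixel = 297.0 / 1480.0;           // scale from the known reference length
    double objectHeightMm = 395 * mmPerPixel;     // roughly 79 mm

The estimate only holds while the object and the reference sheet are roughly the same distance from the camera.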

Image processing: background subtraction

I have a sequence of images taken from a camera. The images consist of a hand and its surroundings. I need to remove everything except the hand.
I am new to image processing. Could anyone help me with the above question? I am comfortable using C and MATLAB.
A really simple approach, if you have a stationary background and a moving hand (and quite a few images!), is simply to subtract the average of the set of images from each image. If nothing else, it's a gentle introduction to MATLAB.
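A sketch of that average-and-subtract idea with plain arrays (C# here only to match the other snippets on this page; the logic transfers directly to MATLAB or C, and the frame layout and threshold are assumptions):

    // frames[f][i] is the grayscale value of pixel i in frame f; all frames have the same size.
    static bool[][] SegmentByBackground(byte[][] frames, double threshold = 30)
    {
        int pixels = frames[0].Length;
        var background = new double[pixels];
        foreach (var frame in frames)                     // estimate the background as the mean frame
            for (int i = 0; i < pixels; i++)
                background[i] += (double)frame[i] / frames.Length;

        var masks = new bool[frames.Length][];
        for (int f = 0; f < frames.Length; f++)
        {
            masks[f] = new bool[pixels];
            for (int i = 0; i < pixels; i++)              // foreground = large absolute difference
                masks[f][i] = Math.Abs(frames[f][i] - background[i]) > threshold;
        }
        return masks;
    }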
The name of the problem you are trying to solve is "image segmentation". The Wikipedia page on the topic is a good start.
If lighting consistency isn't a problem for you, I'd suggest starting with simple RGB thresholding and see how far that gets you before trying anything more complicated.
Have a look at OpenCV, a FOSS library for computer vision applications. Specifically, see the Video Surveillance module. For a walk through of background subtraction in MATLAB, see this EETimes article.
Can you specify what kind of images you have? Is the background moving or static? For a static background it is fairly straightforward: you simply subtract the incoming image from the background image, and you can use some morphological operations to make the result look better. It all depends on the quality of the images you have. If you have a moving background, I would suggest colour-based segmentation: convert the image to YCbCr, then threshold appropriately. I know there are some papers available on it (however, I don't have time to locate them); I suggest reading them first. Here is one link which might help you; read the skin segmentation part.
http://www.stanford.edu/class/ee368/Project_03/Project/reports/ee368group08.pdf
Background subtraction is simple to implement (estimate the background as the average of all frames, then subtract each frame from the background and threshold the resulting absolute difference), but unfortunately it only works well if: 1. the camera has manual gain and exposure, 2. the lighting conditions do not change, 3. the background is stationary, and 4. the background is visible for much longer than the foreground.
Given your description, I assume these are not the case, so what you can use, as already pointed out, is colour as a means of segmenting foreground from background. As it's a hand you are trying to isolate, your best bet is to learn the hand colour. OpenCV provides some means of doing this. If you want to do it yourself, you just take the colour of some of the hand pixels (you would need to specify these manually for at least one frame) and convert them to hue, which encapsulates the colour in a brightness-independent way; skin colour has a very constant hue. Then build a hue histogram, compare it against the rest of the pixels, and decide whether each hue is similar enough.
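A much-simplified sketch of the hue idea (a plain hue-distance test instead of a full histogram; System.Drawing's Color.GetHue() is used here only for the RGB-to-hue conversion, and the tolerance is a guess):

    using System;
    using System.Drawing;

    static bool IsSkinLike(Color pixel, float skinHue, float toleranceDeg = 15f)
    {
        float hue = pixel.GetHue();                   // 0..360 degrees, independent of brightness
        float diff = Math.Abs(hue - skinHue);
        diff = Math.Min(diff, 360f - diff);           // hue wraps around the colour circle
        return diff < toleranceDeg;                   // "similar enough" to the sampled hand hue
    }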
