I'm performing some geographical computations in a grid with squares (i.e. regions). I'm using Delphi, but the logic could probably be applied to C++ too. Let me first explain what I want to do.
The following image is a portion of my grid, which is represented by a two-dimensional array Square that denotes the centre point in each square, and the "movement through the layers":
The green square has an X and Y coordinate of 2, so that is Square[2,2]. The actual coordinates are stored in Square[2,2].Latitude and Square[2,2].Longitude, as well as extra information in e.g. Square[2,2].Info that I use for computations.
Now comes the purpose: I need to do some computations on the surrounding areas. How many of the surrounding areas can be called "neighbours" depends on how many "layers" I have defined. In the image above, I used two of these "layers". That means that when starting from the green cell, I go around it once (blue arrows) and then again in the second layer (red arrows).
Now comes the problem: if I had started in Square[1,1] (the green square in the image below) instead of Square[2,2], the second layer (in red) would try to access data on the left side and at the bottom that does not exist (i.e. in the "-1" column and row). This problem occurs at all borders, of course.
I probably can make exceptions with IF-statements for every scenario, but I was wondering if there are common programming "tricks" that can handle such situations where you try to access data that does not exist.
For example, I imagine it would be very handy if I could follow the pattern of the arrows depicted in the first image to access all the neighbouring squares every single time, even if there are non-existing squares. So, looking at the first image, after Square[3,0] you'd go to something like Square[3,-1] etc., and then eventually come back into the "feasible" zone at Square[0,3].
To visit the neighbourhood, you can use some kind of BFS (breadth-first search).
But for a sparse structure (like the last picture shows) it is worth using some data structure to organize the cells in a good way. Perhaps a kd-tree is suitable - you add all existing cells to the tree and make a range search around a given cell to get the other cells in its vicinity.
Also look at other spatial data structures (see the list at the bottom of the kd-tree page).
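As a concrete illustration of visiting the neighbourhood while simply skipping squares that don't exist (the "trick" the question asks about), here is a minimal C++ sketch. The grid dimensions, the InsideGrid helper and the visit callback are assumptions introduced for the example, not the asker's actual declarations.

    // Minimal sketch, assuming a fixed-size grid of squares.
    const int GridWidth  = 4;
    const int GridHeight = 4;

    bool InsideGrid(int x, int y)
    {
        return x >= 0 && x < GridWidth && y >= 0 && y < GridHeight;
    }

    // Visit every square within `layers` rings around (cx, cy), skipping the
    // centre square and any index outside the grid (the "-1" rows/columns).
    template <typename Visit>
    void ForEachNeighbour(int cx, int cy, int layers, Visit visit)
    {
        for (int y = cy - layers; y <= cy + layers; ++y)
            for (int x = cx - layers; x <= cx + layers; ++x)
            {
                if (x == cx && y == cy) continue;  // the starting square itself
                if (!InsideGrid(x, y)) continue;   // non-existing square: just skip
                visit(x, y);                       // e.g. read Square[x, y].Info here
            }
    }

Calling ForEachNeighbour(1, 1, 2, ...) then touches only the squares that actually exist, which matches the behaviour described in the question for Square[1,1].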
I've re-edited this question a few times: I've made some good progress!
So, as I understand it, multiplot splits the whole canvas up into equal-sized parts as needed. This is a little weird when your different plots have different dimensions, as in my case, but it works. The problem might come in when the graphs are supposed to be very close together (e.g. each takes up most of its canvas), but one of them has labels. In that case, it seems the plot with labels must resize to be smaller so everything can fit. That's where I am now.
I see a few options.
Make all the plots farther apart, but I don't want to do that.
Somehow make the label not part of the multiplot. I would totally do this, but I don't know how. It's possible even just the axis tics themselves would be too big, but I can probably deal with that or compromise just that amount on the spacing.
So my question is, how can I put words in a gnuplot graph, completely separately from a plot?
(The picture is also giant, which is unfortunate; it was the only way I could make the formatting work.)
Two things:
Multiplot has a convenience mode layout <rows>, <columns> that, as you say, splits the page into equal rectangles. But you do not have to use this convenience mode; you can assign each sub-plot to any arbitrary rectangle on the page, even one that overlaps or is interior to another rectangle. Here is an example from the online demo set that is close to what you show:
Demo of multiple plots with explicit alignment of borders
Placing text anywhere on the page: The set label command allows you to position the text using screen coordinates rather than plot coordinates. For example, to place a single large label centered at the top of a page that contains multiple plots:
set label 1 "This label is positioned independent of all plots"
set label 1 at screen 0.5, screen 0.95 center
set label 1 font "Times,20"
I have a set of pages that look like this:
I have the content in grids with star-sized (*) Heights and Widths so the grid correctly scales when the entire window resizes. I would like the text to resize with the grid. Basically I would like the user to resize from this:
To this:
(preserving white space)
One way to do this would be to wrap the TextBlock in a Viewbox with margins on the right and bottom (for Grid.Row="3") to account for white space. But because I have several pages with different lengths and line counts, I would have to set the margin specifically for each page, otherwise the text sizes would differ on each page. Is there a better way to do this?
I don't think there is a better way to do this. There are different ways, but I don't think it's just a matter of opinion that they would not be better.
Ways I can think of:
Render your text offscreen and RenderTargetBitmap it so you've got a picture. Change your TextBlocks on screen to Images and stretch them.
Or
Work out the size your text wants to be. Then do some calculation that comes up with a different font size which is "better". This is a lot easier to write a description of than to do.
In my opinion, a Viewbox is easier to implement, way less error prone than calculations, and will give at least as good results as rendering to a picture.
I just want to add one more solution to the ones suggested by Andy, which is more of a scientific approach and takes a bit of practice to master.
Suppose you have to find a function F which maps one or more variables to a desired single value. In your case that would be a function F which takes the aspect ratio of the window as input and outputs an appropriate font size.
How can you find such a function?
Well... you don't need to do any math yourself!
First, you need some data to begin with:
1. Resize the window randomly
2. Calculate the aspect ratio (X)
3. Pick an appropriate font size that looks good enough (Y)
4. Repeat the measurement 7 to 10 times (sorry data scientists)
5. Enter the data in Excel - one column for X and another one for Y
6. Insert a scatter chart
7. Choose the best trendline for your data, but avoid the polynomial one
8. Display the trendline equation and use the expression in your code
Now I should mention the pros and cons of this regression technique.
Pros:
1. It can solve a wide range of tricky problems:
"I use this 3rd party control, but when the text is too long it overlaps the title bar. How to trim it so it doesn't go beyond the top border?. Deadline is coming!"
2. Even if it doesn't solve the problem perfectly, the results are often acceptable
3. It takes minutes to try out unlike spending a day refreshing your math skills
Cons:
1. The biggest problem is that, to keep it simple, you often lower the number of variables by assuming some of them to be constant. In this post I've assumed, for example, that the font family won't change, and neither will the font weight.
2. If any of the assumptions does not hold, the final result could be even worse
This technique is fragile, but powerful. Use it as your last weapon, and never leave a magic expression like fontSize = (int)(0.76 + 1.2 * aspectRatio) without documenting how it came to be.
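For example, a documented version of that expression could look like the sketch below. This is illustrative only: the 0.76 and 1.2 are just the example coefficients from the expression above, and the clamping is an assumption.

    #include <algorithm>

    // Font size as a function of the window's aspect ratio.
    // Coefficients come from a linear trendline fitted (in Excel) to roughly
    // ten hand-picked (aspect ratio, font size) samples, assuming the font
    // family and font weight never change. Re-fit them if either assumption
    // stops holding.
    int FontSizeForAspectRatio(double aspectRatio)
    {
        double size = 0.76 + 1.2 * aspectRatio;       // trendline: y = 0.76 + 1.2 * x
        return std::max(1, static_cast<int>(size));   // guard against degenerate sizes
    }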
So, to start off, I'm not very good at computer graphics. I'm trying to implement a GUI toolkit where one of the features is being able to apply 3D transformations to 2D "layers". (A layer only has one Z coordinate since, pre-transform, it's a two-dimensional axis-aligned rectangle.)
Now, this is pretty straightforward, until you come to 3D transformations that would push the layer back, requiring the layer to be split into several polygons in order to render it correctly, as illustrated here. And because we can have transparency, layers may not get completely occluded, while still needing to be split.
So here is an illustration depicting the issue and the desired outcome. In this scenario, the blue layer (call it B) is on top of the red layer (R), while having the same Z position (but B was added after R). In this scenario, if we rotate B, its top two points will get a Z index lower than 0 while the bottom points will get an index higher than 0 (with the anchor point being the only point/line left as 0).
Can somebody suggest a good way of doing this on the CPU? I've struggled to find a suitable algorithm implementation (in C++ or C) that would be appropriate to this scenario.
Edit: To clarify myself, at this stage in the pipeline, there is no rendering yet. We just need to produce a set of polygons for each layer that would then represent the layer's transformed and occluded geometry. Then rendering (either software or hardware) is done if required, which is not always the case (for example, when doing hit testing).
Edit 2: I looked at binary space partitioning as an option of achieving this but I have only been able to find one implementation (in GL2PS), which I'm not sure how to use. I do have a vague understanding of how BSPs work, but I'm not sure how they can be used for occlusion culling.
Edit 3: I'm not trying to do colour and transparency blending at this stage. Just pure geometry. Transparency can be handled by the renderer, and overdraw is okay. In this case, the blue polygon can just be drawn under the red one, but with more complicated cases, depth sorting or even splitting up the polygons may be required (an example of a scary case like that is below). Although the viewport is fixed, because all layers can be transformed in 3D, creating a shape like the one shown below is possible.
So what I'm really looking for is an algorithm that would geometrically split layer B into two blue shapes, one of which would be drawn "above" and one of which would be drawn below R. The part "below" would get overdraw, yes, but it's not a major issue. So B just needs to be split into two polygons so it would appear to cut through R when those polygons are drawn in order. No need to worry about blending.
Edit 4: For the purpose of this, we cannot render anything at all. This all has to be done purely geometrically (producing 2D polygons). This is what I was originally getting at.
Edit 5: I should note that the overall number of quads per subscene is around 30 (on average). It definitely won't go above 100. Unless the layers are 3D transformed (which is where this problem arises), they are just radix sorted by Z position before being drawn. Layers with the same Z position are drawn in the order in which they were added (first in, first out).
Sorry if I didn't make it clear in the original question.
If you "aren't good with computer graphics", Doing it on CPU (software rendering) will be extremely difficult for you, if polygons can be transparent.
The easiest way to do it is to use GPU rendering (OpenGL/Direct3D) with the depth peeling technique.
CPU solutions:
Solution #1 (extremely difficult):
(I forgot the name of this algorithm).
You need to split polygon B into two - for example, using polygon A as a clip plane - then render the result using the painter's algorithm.
To do that you'll need to change your rendering routines so they'll no longer use quads but textured polygons, plus you'll have to write/debug clipping routines that'll split the triangles present in the scene in such a way that they'll no longer break the painter's algorithm.
Big problem: if you have many polygons, this solution can potentially split the scene into a very large number of triangles. Also, writing texture rendering code yourself isn't much fun, so it is advised to use OpenGL/Direct3D.
This can be extremely difficult to get right. I think this method was discussed in "Computer Graphics Using OpenGL, 2nd edition" by Francis S. Hill - somewhere in one of the exercises.
Also check the Wikipedia article on hidden surface removal.
Solution #2 (simpler):
You need to implement multi-layered z-buffer that stores up to N transparent pixels and their depth.
Solution #3 (computationally expensive):
Just use ray-tracing. You'll get perfect rendering result (no limitations of depth peeling and cpu solution #2), but it'll be computationally expensive, so you'll need to optimize rendering routines a lot.
Bottom line:
If you're performing software rendering, use Solution #2 or #3. If you're rendering on hardware, use a technique similar to depth peeling, or implement raytracing on hardware.
--edit-1--
The required knowledge for implementing #1 and #3 is line-plane intersection. If you understand how to split a line (in 3D space) into two using a plane, you can implement raytracing or clipping easily.
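A minimal C++ sketch of that building block, splitting a 3D segment with a plane; Vec3 and Plane are assumed helper types introduced for the example:

    struct Vec3 { double x, y, z; };

    static Vec3   Sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3   Add(Vec3 a, Vec3 b)   { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3   Mul(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
    static double Dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Plane in the form dot(normal, p) + d = 0.
    struct Plane { Vec3 normal; double d; };

    static double Distance(const Plane& pl, Vec3 p) { return Dot(pl.normal, p) + pl.d; }

    // If segment [a, b] strictly crosses the plane, returns true and writes the
    // crossing point; the two halves are then [a, out] and [out, b].
    bool SplitSegment(const Plane& pl, Vec3 a, Vec3 b, Vec3* out)
    {
        double da = Distance(pl, a);
        double db = Distance(pl, b);
        if (da * db >= 0.0) return false;   // both endpoints on the same side
        double t = da / (da - db);          // parametric position of the crossing
        *out = Add(a, Mul(Sub(b, a), t));
        return true;
    }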
The required knowledge for #2 is textured 3D triangle rendering (the algorithm). It is a fairly complex topic.
In order to implement the GPU solution, you need to be able to find a few OpenGL tutorials that deal with shaders.
--edit-2--
Transparency is relevant because, in order to get transparency right, you need to draw polygons from back to front (from farthest to closest) using the painter's algorithm. Sorting polygons properly is impossible in certain situations, so they must be split, or you should use one of the listed techniques; otherwise, in certain situations, there will be artifacts/incorrectly rendered images.
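As a tiny illustration of that back-to-front ordering, here is a naive painter's-style sort in C++. The Polygon type and its centroidDepth field are assumptions for the sketch, and this per-polygon sort is exactly what breaks down in the cyclic-overlap situations mentioned above, which is why splitting (or depth peeling) becomes necessary.

    #include <algorithm>
    #include <vector>

    // Assumed illustrative type: centroidDepth is the distance from the camera
    // along the view axis (larger = farther away).
    struct Polygon { double centroidDepth; /* vertices, colour, alpha, ... */ };

    // Naive painter's ordering: after this sort, draw from the front of the
    // vector (farthest polygon) to the back (closest polygon).
    void SortBackToFront(std::vector<Polygon>& polys)
    {
        std::sort(polys.begin(), polys.end(),
                  [](const Polygon& a, const Polygon& b)
                  { return a.centroidDepth > b.centroidDepth; });
    }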
If there's no transparency, you can implement a standard z-buffer or draw using hardware OpenGL, which is a very trivial task.
--edit-3--
I should note that the overall number of quads per subscene is around 30 (average). Definitely won't go above 100.
If you split polygons, it can easily go way above 100.
It might be possible to position the polygons in such a way that each polygon splits all the other polygons.
Now, 2^29 is 536870912; however, it is not possible to split one surface with a plane in such a way that each split doubles the number of polygons. If one polygon is split 29 times, you'll get 30 polygons in the best-case scenario, and probably several thousand in the worst case if the splitting planes aren't parallel.
Here's a rough algorithm outline that should work:
1. Prepare a list of all triangles in the scene.
2. Remove back-facing triangles.
3. Find all triangles that intersect each other in 3D space, and split them using the line of intersection (see the triangle-splitting sketch after this list).
4. Compute screen-space coordinates for all vertices of all triangles.
5. Sort by depth for the painter's algorithm.
6. Prepare an extra list for new primitives.
7. Find triangles that overlap in 2D (post-projection) screen space.
8. For all overlapping triangles, check their rendering order. Basically, a triangle that is going to be rendered "below" another triangle should have no part that is above that triangle.
8.1. To do that, use the camera origin point and the triangle edges to split the original triangles into several sub-regions, then check whether the regions conform to the established sort order (prepared for the painter's algorithm). Regions are created by splitting the existing pair of triangles using the 6 clip planes created by the camera origin point and the triangle edges.
8.2. If all regions conform to the rendering order, leave the triangles be. If they don't, remove the triangles from the list and add them to the "new primitives" list.
9. If there are any primitives in the new primitives list, merge that list with the triangle list and go to step 5.
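For step 3 specifically, here is a sketch of clipping one triangle against a plane and collecting the pieces on each side (a Sutherland-Hodgman-style split). Vec3 and Plane are the same kind of assumed helper types as in the earlier segment-splitting sketch, and the epsilon handling is deliberately simplistic.

    #include <vector>

    struct Vec3  { double x, y, z; };
    struct Plane { Vec3 normal; double d; };   // dot(normal, p) + d = 0

    static double Dist(const Plane& pl, const Vec3& p)
    {
        return pl.normal.x * p.x + pl.normal.y * p.y + pl.normal.z * p.z + pl.d;
    }

    static Vec3 Lerp(const Vec3& a, const Vec3& b, double t)
    {
        return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
    }

    // Walk the triangle's edges; every time an edge crosses the plane, emit the
    // crossing point into both output pieces. A piece with at least 3 vertices
    // can be fan-triangulated afterwards; anything smaller is degenerate.
    void SplitTriangleByPlane(const Vec3 tri[3], const Plane& pl,
                              std::vector<Vec3>& frontPiece,
                              std::vector<Vec3>& backPiece)
    {
        const double eps = 1e-9;
        for (int i = 0; i < 3; ++i)
        {
            const Vec3& a = tri[i];
            const Vec3& b = tri[(i + 1) % 3];
            double da = Dist(pl, a);
            double db = Dist(pl, b);

            if (da >= -eps) frontPiece.push_back(a);   // vertex on the front side
            if (da <=  eps) backPiece.push_back(a);    // vertex on the back side

            if ((da > eps && db < -eps) || (da < -eps && db > eps))
            {
                Vec3 p = Lerp(a, b, da / (da - db));   // edge crosses the plane
                frontPiece.push_back(p);
                backPiece.push_back(p);
            }
        }
    }

The resulting pieces can be fan-triangulated and appended to the "new primitives" list from steps 6 and 9.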
By looking at that algorithm, you can easily understand why everybody uses Z-buffer nowadays.
Come to think about it, that's a good training exercise for universities that specialize in CG. The kind of exercise that might make your students hate you.
I am going to come out and give the simpler solution, which may not fit your problem: why not just change your artwork to prevent this problem from occurring?
In problem 1, just divide the polys in Maya or whatever beforehand. For the 3-lines problem, again, divide your polys at the intersections to prevent fighting. Pre-computed solutions will always run faster than on-the-fly ones - especially on limited hardware. From professional experience, I can say that it also scales - well, it scales OK. It just requires some tweaking from the art side and technical reviews to make sure nothing is created "illegally". You may end up getting more polys doing it this way than rendering on the fly, but at least you won't have to do a ton of math on CPUs that may not be up to the task.
If you do not have control over the artwork pipeline, this won't work as writing some sort of a converter would take longer than getting a BSP sub-division routine up and running. Still, KISS is often the best solution.
I am trying to do my own blob detection, which will receive real-time video and try to detect a white paper sheet.
Even if there is something written on the paper. I need to detect the paper and its corners, because what I really want is to draw an OpenGL polygon over the paper: each corner of the paper will be a corner of the polygon. Then I need the coordinates of the paper to do other stuff.
So I need to:
- detect a square white blob.
- get the coordinates of the corners
- draw a polygon over the white sheet.
Any ideas how I can do that?
Much depends on context. For example, suppose that you:
know that the paper is always roughly centered (i.e. W/2, H/2 is always inside the blob), and rotated no more than 45 degrees (30 would be better)
have a suitable border around the sheet so that the corners never touch the edges of the FOV
are able (through analysis of local variance, or if you're lucky, check of background color or luminance) to say whether a point is inside or outside the blob
the inside/outside function never fails (except possibly in the close vicinity of a border)
then you could walk a line between a point on the border (surely outside) and the center (surely inside), even through bisection, and find a point - an areal - on the edge.
Two edge points give a rect (two areals give a beam), two rects give an intersection (two beams give a larger areal) - and there's your corner. You should carry along the detection uncertainty (areal radius) in order to validate corners (another less elegant approach is to roughly calculate where the corner is, and pinpoint it with a spiral search or drunkard's walk).
This algorithm is amenable to parallelization and, as long as the hypotheses hold, should be really fast.
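The bisection walk itself is very short in code. Below is a minimal C++ sketch of just that step; the Point type and the isInside predicate are assumptions standing in for the inside/outside test described above.

    #include <cmath>

    struct Point { double x, y; };

    // Walk the segment between a point known to be outside the blob and one
    // known to be inside, halving it until the two are within `tolerance`
    // pixels of each other. The returned point lies on (or just inside) the edge.
    template <typename InsideFn>
    Point FindEdgePoint(Point outside, Point inside, InsideFn isInside,
                        double tolerance = 0.5)
    {
        while (std::hypot(inside.x - outside.x, inside.y - outside.y) > tolerance)
        {
            Point mid{ (outside.x + inside.x) / 2.0, (outside.y + inside.y) / 2.0 };
            if (isInside(mid)) inside = mid; else outside = mid;
        }
        return inside;
    }

Two such edge points pin down one edge, and intersecting two edges gives a corner candidate to validate, as described above.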
All that said, it remains a hack - I agree with unwind: why reinvent the wheel? If you have memory or CPU constraints (embedded systems, etc.), I believe there ought to be OpenCV and e-Vision "lite" ports also for ARM and embedded platforms.
(Sorry for my terminology - I'm monkey-translating from Italian. "Areal" is likely to correspond to your "blob"; a beam is the family of lines joining all pairs of points in two different blobs, line intensity being the product of the distance of a point from its areal's center.)
I am trying to do my own blob detection, which will receive real-time video and try to detect a white paper sheet.
Your first shot could be a simple flood-fill. That is, select a good threshold to binarize the image and apply the algorithm. The threshold can be fixed if you know the paper is always brighter than X and the background is always darker than this. Or this can be an adaptive threshold, for example Otsu's method. OpenCV offers this for free.
If you'd need to speed it up you could use a union-find data structure.
Finally, you'd need to come up with some heuristic for identifying the corners (e.g. the four extreme values in the x/y directions).
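A possible OpenCV sketch of that pipeline: Otsu binarization, pick the largest bright contour, then approximate it with a polygon (cv::approxPolyDP) as one way to realize the corner heuristic. The blur size and the 0.02 epsilon factor are guesses to tune, not known-good values.

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <vector>

    // Returns the approximate corner points of the largest bright region;
    // ideally 4 points for a sheet of paper, but validate before trusting them.
    std::vector<cv::Point> FindSheetCorners(const cv::Mat& frameBgr)
    {
        cv::Mat gray, bin;
        cv::cvtColor(frameBgr, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);
        cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        if (contours.empty()) return {};

        // Assume the sheet is the largest bright region in view.
        auto largest = std::max_element(contours.begin(), contours.end(),
            [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b)
            { return cv::contourArea(a) < cv::contourArea(b); });

        std::vector<cv::Point> corners;
        double peri = cv::arcLength(*largest, true);
        cv::approxPolyDP(*largest, corners, 0.02 * peri, true);
        return corners;
    }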
Then I need [...] the coordinates of the corners [...]
Then you don't need blob detection, but corner detection or contour detection in the first place. OpenCV has some nice functionality for exactly this.
If you can't use it, I would suggest binarizing the image as above and using a Harris detector to find the corners of the object.
OpenCV's TBB support could also come in quite handy if you use it and you have problems meeting your real-time requirements.
How can I wrap shapes around the world, so that a shape is shown more than once at low zoom?
Example:
I draw a polygon over USA.
I zoom out so that I can see two USA's.
I only see one polygon :( I want to see two!
The map data effectively has 2 USAs. That implies you should actually want 2 polygons, one of which will be hidden most of the time.
Might as well cater for the worst case and treat a single USA as the exception rather than the rule.
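A language-agnostic sketch of that idea, duplicating the polygon's ring with longitudes shifted by 360 degrees so the repeated world copies are covered too. The LatLng type is an assumption (not the Bing Maps API), and whether the control accepts longitudes outside the -180..180 range is something to verify for your version.

    #include <vector>

    struct LatLng { double latitude, longitude; };

    // One logical polygon, three drawn copies: the original plus one copy on
    // each neighbouring repeat of the world. Off-screen copies simply get clipped.
    std::vector<std::vector<LatLng>> WrappedCopies(const std::vector<LatLng>& ring)
    {
        std::vector<std::vector<LatLng>> copies{ ring, ring, ring };
        for (LatLng& p : copies[1]) p.longitude += 360.0;   // eastern repeat
        for (LatLng& p : copies[2]) p.longitude -= 360.0;   // western repeat
        return copies;
    }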
You can't.
As others have already pointed out, the fact that, at far zoom levels, certain features get repeated on either side of the map is an unwanted but inevitable side-effect of a projected surface that enables continuous scrolling. This has only been an issue in recent versions of the Bing Maps control - the earlier v6.x control prevented the map from panning across the 180th meridian.
I cannot think of any possible reason why you'd ever want to show two USAs, let alone target data to be positioned on each one. So the solution is to modify either the zoom level at which the map is displayed, or the size of the application window in which it is being displayed so that this situation doesn't occur.