Issue with FBX ColorIndex values - export

I saved some FBX files (version 6.1, in ASCII format) in Blender, carrying Vertex Paint information.
I imported the FBX file into Cinema4D and it opened fine, ignoring all the Vertex Paint information, of course.
Cinema4D correctly reports 984 vertices.
Opening the FBX in a text editor, I confirmed that the number of vertices is 984. I copied the vertex coordinates into a new document and searched for commas: there were 2951 of them. Since the last coordinate triple isn't followed by a comma, that makes 2952 coordinates in the list. Each 3D point consists of three coordinates, so the total number of vertices is 2952 / 3 = 984, which confirms the number reported by Cinema4D.
Then I did the same for the ColorIndex list: 3775 commas, so 3776 index values.
3776?!?!? Shouldn't it be the same as the number of vertices? I mean, shouldn't it have 984 index values?
With this question in mind, I checked the number of polygons. Cinema4D reports 944 polygons.
Dividing 3776 by 944 returns 4. Well, so I assumed that the ColorIndex list should be reporting something like:
#1 - color index of the first point of polygon #1
#2 - color index of the second point of polygon #1
#3 - color index of the third point of polygon #1
#4 - color index of the fourth point of polygon #1
#5 - color index of the first point of polygon #2
#6 - color index of the second point of polygon #2
#7 - color index of the third point of polygon #2
#8 - color index of the fourth point of polygon #2
#9 - color index of the first point of polygon #3
#10 - color index of the second point of polygon #3
...
Am I correct in assuming this?
I also noticed that the PolygonVertexIndex list contains 1888 negative values (I counted the minus characters).
1888 / 2 = 944
Does this mean that all faces are defined twice? Why?

Blender writes a color for each corner of each face, using MappingInformationType: "ByPolygonVertex" together with ReferenceInformationType: "IndexToDirect". So yes: the ColorIndex list has one entry per polygon corner (944 polygons × 4 corners = 3776), exactly as you assumed.
The confusion here likely arises because different applications write different kinds of array mappings; none of them is wrong, but it can be confusing.
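To make the mapping concrete, here is a minimal C sketch of resolving per-corner colors from the three arrays involved. The array contents are hypothetical toy values, not taken from the file above; only the -(index + 1) encoding of each polygon's last entry and the ByPolygonVertex/IndexToDirect lookup follow the FBX conventions discussed here.

#include <stdio.h>

/* Toy stand-ins for the parsed FBX arrays: two quads sharing an edge. */
#define NUM_CORNERS 8    /* polygon-vertices: 2 quads x 4 corners */

/* PolygonVertexIndex: the last index of each polygon is stored as
   -(index + 1), which is how FBX marks the end of a polygon. */
static const int polygonVertexIndex[NUM_CORNERS] = { 0, 1, 2, -4, 3, 2, 1, -5 };

/* ColorIndex: one entry per corner ("ByPolygonVertex"), each an index
   into the Colors table ("IndexToDirect"). */
static const int colorIndex[NUM_CORNERS] = { 0, 0, 1, 1, 1, 2, 2, 0 };

/* Colors: the direct table the indices point into (RGBA). */
static const double colors[][4] = {
    { 1.0, 0.0, 0.0, 1.0 },
    { 0.0, 1.0, 0.0, 1.0 },
    { 0.0, 0.0, 1.0, 1.0 },
};

int main(void)
{
    for (int corner = 0; corner < NUM_CORNERS; ++corner) {
        int vi = polygonVertexIndex[corner];
        int last = vi < 0;           /* negative value: last corner of a polygon */
        if (last)
            vi = -vi - 1;            /* undo the -(index + 1) encoding */

        const double *c = colors[colorIndex[corner]];
        printf("corner %d -> vertex %d, color (%g, %g, %g)%s\n",
               corner, vi, c[0], c[1], c[2], last ? " [polygon ends]" : "");
    }
    return 0;
}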

Related

How to split a map into catchment areas (polygons gathering the points closest to specific points)

I want to create an algorithm to determine the catchment areas of recycling bins in a city.
The idea: I have several points on a map, and I want to trace the polygons of their catchment areas. I consider the catchment area of a recycling bin to be the zone in which that bin is the closest one.
I found that the edges of these polygons are parts of the perpendicular bisectors of the segments joining pairs of recycling bins.
But I haven't yet found how to select mathematically which intersections of these bisectors are the vertices of the catchment-area polygons.
(Not all intersections of the bisectors are relevant.)
Here is a picture of what I want to do (crosses are recycling bins and lines are the edges that demarcate catchment areas).
Any idea?
I found the answer: the Voronoi diagram (https://en.wikipedia.org/wiki/Voronoi_diagram)
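If you only need to know which catchment area a given location falls into (rather than the explicit polygon edges), a brute-force nearest-bin test already implements the Voronoi partition by definition. A toy C sketch with made-up bin coordinates, labeling the cells of a small grid:

#include <math.h>
#include <stdio.h>

typedef struct { double x, y; } Point;

/* Hypothetical bin positions; replace with the real coordinates. */
static const Point bins[] = { { 2.0, 3.0 }, { 7.0, 1.0 }, { 5.0, 8.0 } };
static const int num_bins = sizeof bins / sizeof bins[0];

/* Index of the bin nearest to (x, y) -- by definition, the Voronoi cell
   (catchment area) that (x, y) belongs to. */
static int nearest_bin(double x, double y)
{
    int best = 0;
    double best_d2 = HUGE_VAL;
    for (int i = 0; i < num_bins; ++i) {
        double dx = bins[i].x - x, dy = bins[i].y - y;
        double d2 = dx * dx + dy * dy;   /* squared distance: no sqrt needed */
        if (d2 < best_d2) { best_d2 = d2; best = i; }
    }
    return best;
}

int main(void)
{
    /* Label each cell of a toy 10x10 map with its catchment bin. */
    for (int y = 9; y >= 0; --y) {
        for (int x = 0; x < 10; ++x)
            printf("%d", nearest_bin(x + 0.5, y + 0.5));
        printf("\n");
    }
    return 0;
}

For the actual polygon geometry, a library implementing Fortune's sweep-line algorithm (or a Voronoi/Delaunay package such as qhull) is the practical route.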

Connect points to plane/Draw Polygon

I'm currently working on a project where I want to draw different mathematical objects onto a 3D cube. It works as it should for points and lines given as vector equations. Now I have a plane given as a parametric equation. This plane can be anywhere in 3D space, and the part of it that lies within the 3D cube may be visible on the screen. The cube acts as an AABB.
The first thing I needed to know was whether the plane intersects the cube. To do this I created lines identical to the edges of the cube and performed 12 line/plane intersections, checking whether each hit lies within the line segment (edge) that is part of the AABB. This gives me a set of points defining the visible part of the plane in the cube, which I have to draw.
I now have up to 6 points A, B, C, D, E and F defining the polygon ABCDEF I would like to draw. To do this I want to split the polygon into triangles, for example: ABC, ACD, ADE, AEF. I would draw these triangles as described here. The problem I am currently facing is that I (believe I) need to order the points to get correct triangles and, from them, a correctly drawn polygon. I found out about convex hulls and about QuickHull, which works in three-dimensional space. There is just one problem with this algorithm: at the beginning I need to create a three-dimensional simplex as a starting point for the algorithm. But as all my points lie in the same plane, they only span two dimensions, so I think this algorithm won't work.
My question is now: how do I order these 3D points so that they form the polygon that is their 2D convex hull? And in case it matters: I need to do this in C.
Thanks for your help!
One approach is to express the coordinates of the intersection points in the space of the plane, which is 2D, instead of the global 3D space. Depending on how exactly you computed these points, you may already have these (say (U, V)) coordinates. If not, compute two orthonormal vectors that belong to the plane and take the dot products with the (X, Y, Z) intersections. Then you can find the convex hull in 2D.
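A minimal C sketch of that projection, assuming the plane is given by a point and a unit normal (the Vec3 type and helper names are made up for illustration):

#include <math.h>

typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static Vec3 cross(Vec3 a, Vec3 b)
{
    return (Vec3){ a.y * b.z - a.z * b.y,
                   a.z * b.x - a.x * b.z,
                   a.x * b.y - a.y * b.x };
}

static Vec3 scaled(Vec3 a, double s) { return (Vec3){ a.x * s, a.y * s, a.z * s }; }

/* Express 3D points lying on a plane (given by a point 'origin' and a unit
   normal 'n') as 2D (u, v) coordinates within that plane. */
static void project_to_plane(const Vec3 *pts, int count, Vec3 origin, Vec3 n,
                             double *u_out, double *v_out)
{
    /* Build an orthonormal basis of the plane: pick any vector that is not
       parallel to n, then orthogonalize via cross products. */
    Vec3 helper = fabs(n.x) < 0.9 ? (Vec3){ 1, 0, 0 } : (Vec3){ 0, 1, 0 };
    Vec3 u_axis = cross(n, helper);
    u_axis = scaled(u_axis, 1.0 / sqrt(dot(u_axis, u_axis)));   /* normalize */
    Vec3 v_axis = cross(n, u_axis);   /* unit length, since n and u_axis are */

    for (int i = 0; i < count; ++i) {
        Vec3 d = { pts[i].x - origin.x, pts[i].y - origin.y, pts[i].z - origin.z };
        u_out[i] = dot(d, u_axis);    /* the (U, V) coordinates in the plane */
        v_out[i] = dot(d, v_axis);
    }
}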
The 8 corners of the cube can be on either side of the plane, and get a + or - sign when their coordinates are plugged into the implicit equation of the plane (actually the W coordinate of the vertices). This gives a maximum of 2^8 = 256 configurations (not all of which are possible).
For efficiency, you can solve all these configurations once for all, and for every case list the intersections that form the polygon in the correct order. Then for a given case, compute the 8 sign bits, pack them in a byte and lookup the table of polygons.
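Computing and packing the 8 sign bits is only a few lines; a sketch reusing the Vec3 helpers from the snippet above, with the plane written implicitly as dot(n, p) + d = 0 (building the 256-entry polygon table itself is the larger, offline part and is left out):

/* corners[] holds the cube's 8 vertices in a fixed order. */
static unsigned cube_sign_mask(const Vec3 corners[8], Vec3 n, double d)
{
    unsigned mask = 0;
    for (int i = 0; i < 8; ++i) {
        double w = dot(corners[i], n) + d;   /* sign tells the side of corner i */
        if (w >= 0.0)
            mask |= 1u << i;                 /* pack the 8 signs into one byte */
    }
    return mask;   /* 0..255: index into the precomputed polygon table */
}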
Update: direct face construction.
Alternatively, you can proceed by tracking the intersection points from edge to edge.
Start from an edge of the cube known to traverse the plane. This edge belongs to two faces. Choose one arbitrarily. The plane cuts this face into a triangle and a pentagon, or into two quadrilaterals. Go to the other intersection, with another edge of the face. Take the other face bordered by this new edge. This face is again cut into a triangle and a pentagon...
Continuing this process, you will traverse a set of faces and corresponding segments that define the section polygon.
In the figure, you start from the intersection on edge HD, belonging to face DCGH. Then move to the edge GC, also in face CGFB. From there, move to edge FG, also in face EFGH. Move to edge EH, also in face ADHE. And you are back on edge HD.
A complete discussion must take into account the case of the plane passing through one or more vertices of the cube. (But you can cheat by slightly translating the plane, constructing the intersection polygon, and removing the tiny edges that may have been artificially created this way.)

Antipole Clustering

I made a photo mosaic script (PHP). This script takes one picture and changes it into a photo buildup of little pictures. From a distance it looks like the real picture; when you move closer you see that it is all little pictures. I take a square of a fixed number of pixels and determine the average color of that square. Then I compare this with my database, which contains the average colors of a couple thousand pictures. I determine the color distance against all available images. But running this script fully takes a couple of minutes.
The bottleneck is matching the best picture to a part of the main picture. I have been searching online for ways to reduce this and came across “Antipole Clustering.” Of course I tried to find some information on how to use this method myself, but I can’t seem to figure out what to do.
There are two steps. 1. Database acquisition and 2. Photomosaic creation.
Let’s start with step one; once that is all clear, maybe I will understand step 2 myself.
Step 1:
partition each image of the database into 9 equal rectangles arranged in a 3x3 grid
compute the RGB mean values for each rectangle
construct a vector x composed of 27 components (three RGB components for each rectangle)
x is the feature vector of the image in the data structure
Well, points 1 and 2 are easy, but what should I do at point 3? How do I compose a vector x out of the 27 components (9 × R mean, G mean, B mean)?
And once I manage to compose the vector, what is the next step I should take with it?
Peter
Here is how I think the feature vector is computed:
You have 3 x 3 = 9 rectangles.
Each pixel is essentially 3 numbers, 1 for each of the Red, Green, and Blue color channels.
For each rectangle you compute the mean for the red, green, and blue colors for all the pixels in that rectangle. This gives you 3 numbers for each rectangle.
In total, you have 9 (rectangles) x 3 (mean for R, G, B) = 27 numbers.
Simply concatenate these 27 numbers into a single 27 by 1 (often written as 27 x 1) vector, that is, 27 numbers grouped together. This vector of 27 numbers is the feature vector x that represents the color statistics of your photo. In code, if you are using C++, this will probably be an array of 27 numbers, or perhaps an instance of the (aptly named) vector class. You can think of this feature vector as a form of "summary" of what the color in the photo is like. Roughly, things look like this: [R1, G1, B1, R2, G2, B2, ..., R9, G9, B9], where R1 is the mean/average of the red pixels in the first rectangle, and so on.
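As a concrete illustration, a rough C version of that computation; the Pixel layout and row-major indexing are assumptions to adapt to your actual image representation:

#include <stddef.h>

typedef struct { unsigned char r, g, b; } Pixel;

/* Build the 27-component feature vector x from an RGB image.
   Assumes width and height are both at least 3. */
void feature_vector(const Pixel *image, int width, int height, double x_out[27])
{
    for (int gy = 0; gy < 3; ++gy) {
        for (int gx = 0; gx < 3; ++gx) {
            /* Pixel bounds of rectangle (gx, gy) in the 3x3 grid. */
            int x0 = gx * width / 3,  x1 = (gx + 1) * width / 3;
            int y0 = gy * height / 3, y1 = (gy + 1) * height / 3;

            double sr = 0.0, sg = 0.0, sb = 0.0;
            int count = (x1 - x0) * (y1 - y0);
            for (int y = y0; y < y1; ++y)
                for (int x = x0; x < x1; ++x) {
                    const Pixel *p = &image[(size_t)y * width + x];
                    sr += p->r; sg += p->g; sb += p->b;
                }

            /* Three consecutive slots per rectangle: [R mean, G mean, B mean]. */
            int k = (gy * 3 + gx) * 3;
            x_out[k + 0] = sr / count;
            x_out[k + 1] = sg / count;
            x_out[k + 2] = sb / count;
        }
    }
}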
I believe step 2 involves some form of comparing these feature vectors so that those with similar feature vectors (and hence similar color) will be placed together. Comparison will likely involve the use of the Euclidean distance (see here), or some other metric, to compare how similar the feature vectors (and hence the photos' color) are to each other.
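And the comparison itself is short; a sketch of the Euclidean distance for the 27-component vectors above:

#include <math.h>

/* Euclidean distance between two 27-component feature vectors: the smaller
   the result, the more similar the color statistics of the two photos. */
double feature_distance(const double a[27], const double b[27])
{
    double sum = 0.0;
    for (int i = 0; i < 27; ++i) {
        double diff = a[i] - b[i];
        sum += diff * diff;
    }
    return sqrt(sum);
}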
Lastly, as Anony-Mousse suggested, converting your pixels from RGB to HSB/HSV color would be preferable. If you use OpenCV or have access to it, this is a one-liner. Otherwise the Wikipedia article on HSV will give you the math formulas to perform the conversion.
Hope this helps.
Instead of using RGB, you might want to use HSB space. It gives better results for a wide variety of use cases. Put more weight on hue to get better color matches for photos, or on brightness when composing high-contrast images (logos etc.).
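In case OpenCV isn't available, the textbook RGB-to-HSV conversion is also short; a C sketch with the channels assumed normalized to [0, 1]:

#include <math.h>

/* Textbook RGB -> HSV conversion; r, g, b are assumed in [0, 1].
   h comes out in degrees [0, 360), s and v in [0, 1]. */
void rgb_to_hsv(double r, double g, double b, double *h, double *s, double *v)
{
    double max = fmax(r, fmax(g, b));
    double min = fmin(r, fmin(g, b));
    double delta = max - min;

    *v = max;                                   /* value = brightness */
    *s = (max > 0.0) ? delta / max : 0.0;       /* saturation */

    if (delta == 0.0)
        *h = 0.0;                               /* gray: hue is undefined */
    else if (max == r)
        *h = fmod(60.0 * (g - b) / delta + 360.0, 360.0);
    else if (max == g)
        *h = 60.0 * (b - r) / delta + 120.0;
    else
        *h = 60.0 * (r - g) / delta + 240.0;
}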
I have never heard of antipole clustering. But the obvious next step would be to put all the images you have into a large index. Say, an R-Tree. Maybe bulk-load it via STR. Then you can quickly find matches.
Maybe it means vector quantization (VQ). In VQ the image isn't subdivided into rectangles but into density areas. Then you can take the mean point of each cluster. First you need to take all colors and pixels separately and transfer them to vectors with XY coordinates. Then you can use a density clustering like Voronoi cells and get the mean point. You can compare this point with the other pictures in the database. Read here about VQ: http://www.gamasutra.com/view/feature/3090/image_compression_with_vector_.php.
How to compute the gradient vector from adjacent pixels:
d(x) = I(x+1,y) - I(x,y)
d(y) = I(x,y+1) - I(x,y)
Here's another link: http://www.leptonica.com/color-quantization.html.
Update: When you have already computed the mean color of your thumbnail, you can proceed to sort all the mean colors in an RGB map and use the formula I gave you to compute the vector x. Now that you have a vector for each of your thumbnails, you can use the antipole tree to search for a thumbnail. This is possible because the antipole tree is something like a kd-tree and subdivides the 2D space. Read here about the antipole tree: http://matt.eifelle.com/2012/01/17/qtmosaic-0-2-faster-mosaics/. Maybe you can ask the author and download the source code?

Programming a Proportional-Integral controller in C

I'm doing a project for a course, and my goal is to implement proportional-integral (PI) control on a robot to track a line using 12 simple phototransistors. Now I've been reading many PID tutorials, but I'm still confused. Can someone help me get started from what I have been thinking so far?
I think I should assign each state of the sensors a binary value and then use that in the PI equation for the error... can someone shed some light on this?
Assuming the phototransistors are all in a line parallel to the front edge of your 'car', perpendicular to the edge of the track, and individually numbered from 0 to 11...
You want your car's center to follow the line. Sensors #5 and #6 should straddle the line, and therefore be used for fine-tuning adjustments. The sensors at the extreme ends (#0 and #11) should have the highest impact on your steering.
With those two bits of info, you should be able to set appropriate weights (multiplication factors) for your PI control to instruct your car to turn left a little when sensors #7 and #8 see the line, or turn left a lot when sensors #9, #10, or #11 see the line. The extreme sensors may also affect the speed of your car.
Some things to consider: when implementing a front-wheel-steering vehicle, it is often better to mount your sensor strip behind the front wheels. Also, rear-wheel-steering vehicles can adjust to sharp corners more quickly, but are less stable at high speeds.
I'd convert the 12 sensors into a number from 1 to 12, then try to target a value of 6 in my PID and use the output to drive the wheels. Maybe normalize it so that a positive number means more right and a negative number means more left.
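A minimal PI sketch along those lines, in C since that's the target language. read_sensor, set_steering, and the gain values are placeholders to adapt and tune on the real robot; the sensor numbering 0-11 follows the previous answer, so the centered target is 5.5:

#define NUM_SENSORS 12
#define KP 1.0     /* proportional gain -- tune on the real robot */
#define KI 0.05    /* integral gain     -- tune on the real robot */
#define DT 0.01    /* control period in seconds */

/* Placeholders for the actual hardware interface. */
extern int read_sensor(int i);        /* nonzero if phototransistor i sees the line */
extern void set_steering(double u);   /* sign convention: positive steers one way */

void pi_control_step(double *integral)
{
    /* Error = average index of the active sensors minus the center
       (5.5 for sensors numbered 0..11). */
    double sum = 0.0;
    int active = 0;
    for (int i = 0; i < NUM_SENSORS; ++i)
        if (read_sensor(i)) { sum += i; ++active; }

    if (active == 0)
        return;                        /* line lost: keep the last command */

    double error = sum / active - 5.5;

    *integral += error * DT;           /* accumulate for the I term */
    set_steering(KP * error + KI * *integral);
}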

Determining if a point is on a road

If I have a polyline that describes a road and I know the road width at all parts, is there an algorithm I can use to determine if a point is on the road? I'm not entirely sure how to do this since the line itself has a width of 1px.
thanks,
Jeff
Find the minimum distance from the point to the line (the offset will be a vector perpendicular to the line). The actual calculation, where P0 is the first point of the road segment, v is the road segment vector, and w is the vector from P0 to the point in question, is:
d = |v × w| / |v|
You will have to iterate over each edge in the polyline. If the distance is less than the width of that segment, then the point is "on" the road.
The corners might be tricky depending on if you treat them as rounded (constant radius) or angular.
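A C sketch of that test. Note that d = |v × w| / |v| is the distance to the infinite line; clamping the projection parameter t to [0, 1] restricts it to the segment, which effectively treats corners as rounded. It is assumed here that 'width' is the full road width, hence the comparison against half of it:

#include <math.h>

typedef struct { double x, y; } Pt;

/* Distance from point p to the segment p0-p1 (assumes p0 != p1). */
static double point_segment_distance(Pt p, Pt p0, Pt p1)
{
    double vx = p1.x - p0.x, vy = p1.y - p0.y;   /* v: the segment vector */
    double wx = p.x - p0.x,  wy = p.y - p0.y;    /* w: from P0 to the point */

    double t = (wx * vx + wy * vy) / (vx * vx + vy * vy);
    if (t < 0.0) t = 0.0;                        /* clamp onto the segment */
    if (t > 1.0) t = 1.0;

    double dx = p.x - (p0.x + t * vx);
    double dy = p.y - (p0.y + t * vy);
    return sqrt(dx * dx + dy * dy);
}

/* The point is "on" the road if it is within half the (full) width of
   some segment of the polyline. */
int on_road(Pt p, const Pt *poly, const double *width, int n_pts)
{
    for (int i = 0; i + 1 < n_pts; ++i)
        if (point_segment_distance(p, poly[i], poly[i + 1]) <= width[i] / 2.0)
            return 1;
    return 0;
}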
Perhaps you could take each line segment, build the rectangle of the line segment plus its width, and use rectangle/point collision algorithms to determine whether the rectangle contains the point. A good algorithm will account for the width = 1 scenario, which simply amounts to building the inverse function of the line segment and determining whether y⁻¹(point.y) is an x between line_segment.x1 and line_segment.x2.
