Does The Windows Phone 7 Emulator Show A Watermark? - silverlight

Does the Windows Phone 7 emulator show a watermark along the upper right edge of the display?
At first I thought it might be nested grids not overlapping properly and creating a thin vertical column of visual garbage in the upper right. But after creating a new project and zooming in on both garbage areas it turns out they are numbers.
Existing project: "000 000 000006 001 000 00.0000"
New, empty project: "000 000 003296 002 001 00.0967"
What are these? Any way to disable their visibility?

These are frame rate counters used for measuring performance. Details here.
Jeff Wilcox – Frame rate counters in Windows Phone
The post also shows how to disable their display.

The App constructor in App.xaml.cs contains generated code. Comment out these lines, like so:
public App()
{
    UnhandledException += Application_UnhandledException;

    // Uncomment to show graphics profiling information while debugging.
    if (System.Diagnostics.Debugger.IsAttached)
    {
        // Display the current frame rate counters.
        //Application.Current.Host.Settings.EnableFrameRateCounter = true;

        // Show the areas of the app that are being redrawn in each frame.
        //Application.Current.Host.Settings.EnableRedrawRegions = true;

        // Enable non-production analysis visualization mode,
        // which shows areas of a page that are being GPU accelerated with a colored overlay.
        //Application.Current.Host.Settings.EnableCacheVisualization = true;
    }
}

Related

OxyPlot performance issue on large data in WPF on InvalidatePlot

I'm using OxyPlot in my WPF application as a line recorder, similar to the LiveDemo example.
With a large visible data set I get UI performance issues, and the whole application may even freeze. It seems PlotModel.InvalidatePlot is being called with too many points too often, but I haven't found a better way.
In depth:
Using OxyPlot 2.0.0
All my code works against the PlotModel; the XAML PlotView only binds to it.
I cyclically collect data in a thread and put it into a data source (a List of Lists that serve as the ItemsSource for the LineSeries).
Another class cyclically calculates, in a thread, the presentation of the x and y axes and a bit more. After all this work it calls PlotModel.InvalidatePlot.
If I
- have more than 100k points on the display (no matter whether in multiple LineSeries or not)
- add 1 DataPoint per LineSeries every 500 ms
- call PlotModel.InvalidatePlot every 200 ms
then not only does the PlotView have performance issues, the window also reacts very slowly, even if I call PlotModel.InvalidatePlot(false).
My goal
My goal is for the window/application to keep working normally; it should not hang because of a line recorder. Best would be no performance issues at all, but I'm skeptical.
What I have found or tested
OxyPlot has performance guidelines. I'm using ItemsSource with DataPoints. I have also tried adding the points directly to LineSeries.Points, but then the plot doesn't refresh (even with an ObservableCollection), so I still have to call PlotModel.InvalidatePlot, which results in the same effect. I cannot bind to a predefined LineSeries in XAML because I don't know how many lines there will be. Maybe I missed something when adding the points directly?
I have also found GitHub issue 1286, which describes a related problem, but that workaround is slower in my tests.
I have also measured the time elapsed in the call to PlotModel.InvalidatePlot; the number of points does not affect it.
I have checked the UI thread, and it seems to have trouble handling this large set of points.
If I zoom into the plot so that fewer than 20k points are displayed, it behaves fine.
Question:
Is there a way to handle this better, other than calling PlotModel.InvalidatePlot much less often?
Restrictions:
I also have to update axes and annotations, so I don't think I can avoid calling PlotModel.InvalidatePlot.
I have found that using the OxyPlot Windows Forms implementation, displayed via Windows Forms integration in WPF, gives much better performance.
e.g.
var plotView = new OxyPlot.WindowsForms.PlotView();
plotView.Model = Plot;
var host = new System.Windows.Forms.Integration.WindowsFormsHost();
host.Child = plotView;
PlotContainer = host;
Where 'Plot' is the PlotModel you call InvalidatePlot() on.
And then in your XAML:
<ContentControl Content="{Binding PlotContainer}"/>
Or however else you want to use your WindowsFormsHost.
I had a similar problem and found that you can use a Decimator on a LineSeries. It is shown in the examples: LineSeriesExamples.cs
The usage is like this:
public static PlotModel WithXDecimator()
{
    var model = new PlotModel { Title = "LineSeries with X Decimator" };
    var s1 = CreateSeriesSuitableForDecimation();
    s1.Decimator = Decimator.Decimate;
    model.Series.Add(s1);
    return model;
}
This may solve the problem on my side, and I hope it helps others too. Unfortunately it is not covered in the documentation.
For the moment I ended up calculating the time at which InvalidatePlot may be called next. I calculate it with the method given in this answer, which returns the number of visible points. This reduces the performance issue, but doesn't fix the UI-thread blocking during the InvalidatePlot call.
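That adaptive-interval idea can be sketched as a small throttle: the more points are visible, the longer the minimum gap between refreshes. The sketch below is in Python for brevity (the app itself is C#/WPF); the class name, the 0.2 s base interval and the per-point cost are made-up illustration values, not anything from OxyPlot.

```python
import time

class InvalidateThrottle:
    """Rate-limits an expensive refresh call (here: PlotModel.InvalidatePlot).

    The minimum interval between refreshes grows with the number of
    visible points, so a crowded plot refreshes less often than a sparse one.
    """

    def __init__(self, base_interval=0.2, per_point=1e-6, clock=time.monotonic):
        self.base_interval = base_interval  # seconds between refreshes at 0 points
        self.per_point = per_point          # extra seconds per visible point
        self.clock = clock                  # injectable for testing
        self._last = float("-inf")

    def should_invalidate(self, visible_points):
        """Return True (and record the time) if enough time has passed."""
        interval = self.base_interval + self.per_point * visible_points
        now = self.clock()
        if now - self._last >= interval:
            self._last = now
            return True
        return False
```

Before each scheduled refresh, the recorder would ask `should_invalidate(visible_point_count)` and only then call InvalidatePlot(false); this caps the refresh rate without dropping any collected data.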

How to optimize the rendering on my threejs program

I've been working on a Rubik's cube program, and the rendering works fine on my MacBook Pro in Google Chrome. On computers with slower GPUs, however, my cube breaks because the rendering is slower than the animations. I've tried looking up ways to optimize the rendering but haven't had any success yet.
Rubik's Cube live link
Github code repository
I'm using react and the componentDidMount function is line 1463 where the cube meshes get generated and the animate function is line 1539. Appreciate any help, thanks!
This all depends on how you're building your geometry. I ran a WebGL inspector on your live link, and I'm getting 182 draw calls for a 3x3x3 cube. This is unnecessarily large, since a 3x3x3 only has 54 visible colored faces. Each time you add a new Mesh to the scene, it creates a new draw call on render, and having too many draw calls is almost always the primary reason for slow performance in WebGL.
You should consider nesting each face into its corresponding "cubelet" (one black cube with 1, 2, or 3 colored faces), as done in this Roobik's demo. This way you'd only have to draw 27 cubes + 54 faces = 81 draw calls! The pseudocode would go something like this:
for (let i = 0; i < 27; i++) {
    // one black cubelet, plus up to three colored sticker faces
    let cubelet = new THREE.Mesh(cubeGeom, blackMat);
    let face1 = new THREE.Mesh(faceGeom, faceMat1);
    let face2 = new THREE.Mesh(faceGeom, faceMat2);
    let face3 = new THREE.Mesh(faceGeom, faceMat3);
    // parenting the faces to the cubelet makes them move and rotate with it
    cubelet.add(face1);
    cubelet.add(face2);
    cubelet.add(face3);
    scene.add(cubelet);
}
Of course, you'd need to set the positions and rotations to get a result like this:
Secondly, you're going to have to rethink the way your animate() function is set up. You're starting the next rotation after a set time has elapsed, but instead you should start the next rotation only once the previous one is complete. Have you considered using an animation library like GSAP?
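The sequencing advice above can be sketched as a completion-driven queue: each rotation starts from the previous one's "done" callback rather than from a timer. The sketch is in Python rather than JavaScript purely for brevity, and the class and method names are invented for illustration.

```python
class AnimationQueue:
    """Runs animations strictly one after another.

    Instead of starting the next rotation after a fixed delay, each
    animation receives a `done` callback, and the next animation starts
    only when that callback fires.
    """

    def __init__(self):
        self._pending = []   # queued (name, run) pairs
        self._running = False
        self.started = []    # order in which animations actually started

    def enqueue(self, name, run):
        """`run(done)` performs the animation and calls done() when finished."""
        self._pending.append((name, run))
        if not self._running:
            self._start_next()

    def _start_next(self):
        if not self._pending:
            self._running = False
            return
        self._running = True
        name, run = self._pending.pop(0)
        self.started.append(name)
        run(self._start_next)  # the next animation starts from the completion callback
```

With a tween library like GSAP, the same effect comes for free from its `onComplete` hooks or timeline sequencing.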

iOS 11 SceneKit hitTest:options: fails

I'm facing a difficult situation with hitTest:options: in SceneKit on iOS 11.
In a mapping application I have a terrain node. Using hitTest:options: I have long been able to pick a point on the terrain from a touch on the screen. This still works as expected with the released binary running on iOS 11, and also with an Xcode 9-built binary on the iOS 10 simulator.
But a binary built against the iOS 11 SDK running on iOS 11 gives totally erratic results. The array returned from hitTest:options: may contain no results, or too many; moreover, most of the time none of the results is valid. The images below illustrate the point. All images are from a scene with no hidden nodes.
Edit: I made a test today using hitTestWithSegmentFromPoint:toPoint:options: and also got incorrect results.
First, the working simulator:
It shows a normal hit on the terrain. The hit point is illustrated with a red ball; it is half sunk into the terrain because its center lies exactly on the surface.
These two images show a case where the "ray" crosses the terrain 3 times. We get 3 hits, all placed correctly on the terrain. The second image changes the angle of view to show the 3 points.
Now the failing iOS 11 situation:
In this picture we get one hit, but it is "nowhere" between the two mountains, not on the terrain.
The last two pictures show other attempts, with 4 and 16 hits, all "in the blue" with no connection to the terrain.
Sometimes the hits are past the terrain; sometimes they are between the camera and the terrain.
I was facing the same problem on iOS 11. My solution:
var hitTestOptions = [SCNHitTestOption.sortResults: NSNumber(value: true),
                      SCNHitTestOption.boundingBoxOnly: NSNumber(value: true)]
if #available(iOS 11.0, *) {
    hitTestOptions[SCNHitTestOption.searchMode] = SCNHitTestSearchMode.all.rawValue as NSNumber
}
Four years later I went back to this problem and found a solution to my original issue.
After Apple released iOS 11.2, the multiple-hits problem was solved, but we were left with a "no hits" conundrum.
The problem lies in a specific situation that was not fully explained in the original question. After a terrain is first computed and displayed, we always get a first hit. Then we pan the terrain to center the hit point and rebuild a new terrain sector. In the process, we save computation by reusing several geometry elements, only changing the z coordinates of the terrain vertices. The problem lies in reusing the triangle-strip SCNGeometryElement: from then on, any terrain built by reusing this object looks fine but fails the hitTest method.
It turns out that the SCNGeometryElement can't be reused and has to be rebuilt.
The originally working code was:
t_strip = [geom_cour geometryElementAtIndex:0];
To work around the hitTest: failure we have to do:
// get the current triangle strip
SCNGeometryElement *t_strip_g = [geom_cour geometryElementAtIndex:0];
// create a new one, using the current one as a template
t_strip = [SCNGeometryElement geometryElementWithData:t_strip_g.data
                                        primitiveType:t_strip_g.primitiveType
                                       primitiveCount:t_strip_g.primitiveCount
                                        bytesPerIndex:t_strip_g.bytesPerIndex];
The current SCNGeometryElement is used as a template to recreate a new one with exactly the same values.

ios 6 MapKit annotation rotation

Our app has a rotating map view that aligns with the compass heading. We counter-rotate the annotations so that their callouts remain horizontal for reading. This works fine on iOS 5 devices but is broken on iOS 6 (the problem is seen both with the same binary used on iOS 5 devices and with a binary built against the iOS 6 SDK). The annotations initially rotate to the correct horizontal position and then, a short time later, revert to the uncorrected rotation. We cannot see any events that would cause this. This is the code snippet we are using in - (MKAnnotationView *)mapView:(MKMapView *)theMapView viewForAnnotation:(id<MKAnnotation>)annotation
CATransform3D transformZ = CATransform3DIdentity;
transformZ = CATransform3DRotate(transformZ, _rotationZ, 0, 0, 1);
annotation.myView.layer.transform = transformZ;
Anyone else seen this and anyone got any suggestions on how to fix it on iOS6?
I had an identical problem, so my workaround may work for you. I've also submitted a bug to Apple about it. For me, every time the user panned the map, the annotations would get "unrotated".
In my code I set the rotations using CGAffineTransformMakeRotation, and I don't set it in viewForAnnotation but whenever the user's location gets updated. So that is a bit different from your setup.
My workaround was to add an additional minor rotation at the bottom of my viewForAnnotation method.
if (is6orMore) {
    [annView setTransform:CGAffineTransformMakeRotation(.001)]; // iOS 6 BUG WORKAROUND
}
So for you, I'm not sure if that works, since you are rotating differently and doing it in viewForAnnotation. But give it a try.
Took me forever to find and I just happened across this fix.

About finding pupil in a video

I am now working on an eye-tracking project, in which I am tracking eyes in a webcam video (resolution of 640x480).
I can locate and track the eye in every frame, but I need to locate the pupil. I have read a lot of papers, and most of them refer to Alan Yuille's deformable template method to extract and track eye features. Can anyone help me with code for this method in any language (MATLAB/OpenCV)?
I have tried different thresholds, but due to the low resolution of the eye regions it does not work very well. I would really appreciate any kind of help with finding the pupil, or even the iris, in the video.
What you need to do is convert your webcam into a near-infrared (NIR) cam. There are plenty of tutorials online for that. Try this.
An image taken with an NIR cam will look something like this:
You can then use OpenCV to threshold the image.
Then use the Erode function.
After this, fill the image with some color, taking a corner as the seed point.
Eliminate the holes and invert the image.
Compute the distance transform to the nearest non-zero value.
Find the coordinate of the maximum value and draw a circle there.
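The core of those steps can be sketched in pure Python: threshold the image, then run a distance transform and take the maximum, whose location lies deepest inside the dark pupil blob. A real implementation would use cv2.threshold, cv2.erode, cv2.floodFill and cv2.distanceTransform on the NIR frame; the threshold value below is an arbitrary assumption for the toy example.

```python
from collections import deque

def pupil_center(gray, dark_thresh=60):
    """Locate a dark blob's center via threshold + distance transform.

    `gray` is a 2-D grayscale image (list of rows). Pixels darker than
    `dark_thresh` become foreground (the pupil candidate). A multi-source
    BFS then computes, for every foreground pixel, the distance to the
    nearest background pixel; the maximum lies deepest inside the blob,
    i.e. near the pupil center.
    """
    h, w = len(gray), len(gray[0])
    fg = [[1 if gray[y][x] < dark_thresh else 0 for x in range(w)] for y in range(h)]

    # Multi-source BFS from all background pixels (distance 0 there).
    INF = h * w
    dist = [[0 if fg[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    queue = deque((x, y) for y in range(h) for x in range(w) if fg[y][x] == 0)
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and dist[ny][nx] > dist[y][x] + 1:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((nx, ny))

    # The deepest interior point of the dark blob approximates the pupil center.
    return max(((x, y) for y in range(h) for x in range(w)),
               key=lambda p: dist[p[1]][p[0]])
```

On a real frame you would run this only inside the detected eye region, after the erode/fill cleanup steps above have removed glints and holes.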
If you're still working on this, check out my OptimEyes project: https://github.com/LukeAllen/optimeyes
It uses Python with OpenCV, and works fairly well with images from a 640x480 webcam. You can check out the "Theory Paper" and demo video on that page also. (It was a class project at Stanford earlier this year; it's not very polished but we made some attempts to comment the code.)
Depending on the application, for tracking the pupil I would find a bounding box for the eyes and then find the darkest pixel within that box.
Some pseudocode:
box left_location = findlefteye()
box right_location = findrighteye()
image_matrix left = image[left_location]
image_matrix right = image[right_location]
image_matrix average = left + right
pixel min = min(average)
pixel left_pupil = left_location.corner + min
pixel right_pupil = right_location.corner + min
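A literal Python reading of the "darkest pixel within the box" step above; the eye-finding functions (findlefteye()/findrighteye()) are hypothetical, so this sketch only covers the search inside one bounding box per eye rather than the averaged version in the pseudocode.

```python
def darkest_pixel(image, box):
    """Return the absolute (x, y) of the darkest pixel inside a bounding box.

    `image` is a 2-D grayscale array (rows of pixel intensities) and
    `box` is (x, y, width, height), as a hypothetical eye detector
    would return it.
    """
    bx, by, bw, bh = box
    best_xy, best_val = None, None
    for y in range(by, by + bh):
        for x in range(bx, bx + bw):
            if best_val is None or image[y][x] < best_val:
                best_val, best_xy = image[y][x], (x, y)
    return best_xy
```

Run once per eye box; the minimum-intensity pixel is the pupil estimate, since the pupil is normally the darkest region of the eye.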
Building on the first answer, suggested by Anirudth:
Just apply the HoughCircles function after the thresholding step (step 2).
Then you can draw circles directly around the pupil, and from the radius (r) and center (x, y) you can easily find the center of the eye.
