Problems with diagnostics of prophet forecast - dataset

I am working with a dataset of crimes in Chicago, specifically on forecasting the future crime rate (I have data from 2012 to 2016). I generated a forecast using Facebook's prophet package, and it worked very well. Now I would like to train and test my model, so I split the dataset into 70% train and 30% test, trained the model, tested it, and got a nice plot. I am further interested in the diagnostics part. Prophet provides a function called cross_validation() for this, which I used:
df.cv <- cross_validation(m, initial = nrow(trainData), period = 365, horizon = nrow(testData), units = 'days')
The problem is that I always get the following error, and I have been trying to fix it since yesterday without success:
Error in generate_cutoffs(df, horizon.dt, initial.dt, period.dt) :
Less data than horizon after initial window. Make horizon or initial shorter.
Does somebody know how to fix this error and how to obtain the list of diagnostics?
My train/test plot looks like this:
And my train Dataset can be downloaded here: https://ufile.io/4e38c
And my test Dataset here: https://ufile.io/ds65p
I hope somebody can help me! It would be really great and I would really appreciate it. Thanks in advance!

Cross-validation is applied on a sliding window, with cutoffs generated according to your settings. Please read the docs here:
https://facebook.github.io/prophet/docs/diagnostics.html
You get the error because your sliding window goes out of bounds: after the initial window there must still be at least one full horizon of data left. Note that initial, period, and horizon are measured in units (days here), not in rows, so initial = nrow(trainData) plus horizon = nrow(testData) covers roughly your entire date range and leaves no room for a single cutoff. Try something like this:
df.cv <- cross_validation(m, initial = 100, period = 100, horizon = 100, units = 'days')

I had a similar issue and managed to fix it by passing string arguments such as horizon = "365 days" instead of the integer horizon = 365.
This solution worked on the Python version.
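For reference, a minimal sketch of the same diagnostics call on the Python side, assuming a daily dataframe with the usual ds/y columns (the file name is hypothetical, and older installs import from fbprophet instead of prophet):
import pandas as pd
from prophet import Prophet
from prophet.diagnostics import cross_validation, performance_metrics

# hypothetical daily crime counts with 'ds' (date) and 'y' (value) columns
df = pd.read_csv('chicago_crimes_daily.csv')
m = Prophet()
m.fit(df)

# initial/period/horizon are strings parsed by pandas.Timedelta
df_cv = cross_validation(m, initial='730 days', period='180 days', horizon='365 days')
print(performance_metrics(df_cv).head())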

Related

OxyPlot performance issue on large data in WPF on InvalidatePlot

I'm using OxyPlot in my WPF application as a line recorder, similar to the LiveDemo example.
With a large visible data set I get UI performance issues, and the whole application can even freeze. It seems that PlotModel.InvalidatePlot is being called with too many points too often, but I haven't found a better way.
In depth:
Using OxyPlot 2.0.0
I do everything in code in the PlotModel; the XAML PlotView only binds to the PlotModel.
I cyclically collect data in a thread and put it into a data source (a List of Lists, which serve as ItemsSource for the LineSeries).
I have a class which cyclically calculates, in a thread, the presentation of the x and y axes and a bit more. After all this work, it calls PlotModel.InvalidatePlot.
If I
have more than 100 k points on the display (no matter if in multiple LineSeries or not)
and add 1 DataPoint per LineSeries every 500 ms
and call PlotModel.InvalidatePlot every 200 ms
not only does the PlotView have performance issues, the whole window also reacts very slowly, even if I call PlotModel.InvalidatePlot(false).
My goal
My goal would be that the window / application keeps working normally; it should not hang because of a line recorder. Best would be no performance issues at all, but I'm skeptical.
What I have found or tested
OxyPlot has performance guidelines. I'm using ItemsSource with DataPoints. I have also tried adding the points directly to LineSeries.Points, but then the plot doesn't refresh on its own (even with an ObservableCollection), so I have to call PlotModel.InvalidatePlot anyway, which results in the same effect. I cannot bind to a LineSeries defined in XAML because I don't know how many lines there will be. Maybe I missed something about adding the points directly?
I have also found GitHub issue 1286, which describes a related problem, but that workaround was slower in my tests.
I have also measured the time spent in the call to PlotModel.InvalidatePlot itself; the number of points does not affect it.
I have checked the UI thread, and it seems to have trouble handling this large set of points.
If I zoom into the plot and display fewer than 20 k points, it looks like this:
Question:
Is there a way to handle this better, other than calling PlotModel.InvalidatePlot much less often?
Restrictions:
I also have to update axes and annotations, so I don't think I can avoid calling PlotModel.InvalidatePlot.
I have found that using the OxyPlot Windows Forms implementation, and then displaying it via Windows Forms integration in WPF, gives much better performance.
e.g.
// create a WinForms PlotView and attach the shared PlotModel
var plotView = new OxyPlot.WindowsForms.PlotView();
plotView.Model = Plot;

// host the WinForms control inside WPF
var host = new System.Windows.Forms.Integration.WindowsFormsHost();
host.Child = plotView;
PlotContainer = host;
Where 'Plot' is the PlotModel you call InvalidatePlot() on.
And then in your XAML:
<ContentControl Content="{Binding PlotContainer}"/>
Or however else you want to use your WindowsFormsHost.
I had a similar problem and found that you can use a Decimator on a LineSeries. It is shown in the examples: LineSeriesExamples.cs
The usage is like this:
public static PlotModel WithXDecimator()
{
    var model = new PlotModel { Title = "LineSeries with X Decimator" };
    var s1 = CreateSeriesSuitableForDecimation();
    s1.Decimator = Decimator.Decimate;
    model.Series.Add(s1);
    return model;
}
This may solve the problem on my side, and I hope it helps others too. Unfortunately it is not covered in the main documentation.
For the moment I ended up calculating the time at which InvalidatePlot may be called next. I calculate it with the method given in this answer, which returns the number of visible points. This reduces the performance issue, but doesn't fix the blocking of the UI thread during the call to InvalidatePlot.

D3 v4 graph simulation keeps dancing

My graph contains a high number of links among a high number of nodes (300 nodes).
Since I upgraded D3 from v3 to v4 and adjusted to the new API and concepts, the graph keeps going into dancing mode.
Here is a brief screencast that shows the effect:
https://www.youtube.com/watch?v=DCkBMzs1wWI
I have tried removing each of the forces below in turn:
collide force
center force
charge force
... but they don't seem to be the culprit of this issue. The dancing seems to relate to the link force only.
This is how my simulation is defined:
// force definitions
this.forceCharge = d3.forceManyBody();
this.forceCenter = d3.forceCenter(1800,1200);
this.forceCollide = d3.forceCollide().radius(10);
this.forceLink = d3.forceLink().id(function(d) {return d.id;});
// simulation definition
this.simulation = d3.forceSimulation()
.force("charge", this.forceCharge)
.force("link", this.forceLink)
.force("center", this.forceCenter)
.force("collide", this.forceCollide)
.nodes(this.nodes)
.on("tick", this.tick)
.on("end", this.end);
// links are attached to the link force, not passed as simulation nodes
this.forceLink.links(this.links);
It should be said that the link strength and distance properties were tuned to various settings without any improvement. I have tried strengths from 0.1 to 1, but that did not help either.
I have also noticed:
The simulation's end event does not get triggered, meaning that the simulation keeps running for a very long time.
I can call simulation.stop(). It does stop the dancing, but I have to restart the simulation on data changes, and then the dancing starts again.
UPDATE:
Removing the collision detection seems to calm down the dancing, but in the resulting graph most nodes overlap, due to the fact that they have similar connections, which creates a geometry that forces them into the same place.

iOS 6 MapKit annotation rotation

Our app has a rotating map view which aligns with the compass heading. We counter-rotate the annotations so that their callouts remain horizontal for reading. This works fine on iOS 5 devices but is broken on iOS 6 (the problem is seen both with the same binary as used on the iOS 5 device and with a binary built with the iOS 6 SDK). The annotations initially rotate to the correct horizontal position and then, a short time later, revert to the uncorrected rotation. We cannot see any events that are causing this. This is the code snippet we are using in - (MKAnnotationView *)mapView:(MKMapView *)theMapView viewForAnnotation:(id<MKAnnotation>)annotation
CATransform3D transformZ = CATransform3DIdentity;
transformZ = CATransform3DRotate(transformZ, _rotationZ, 0, 0, 1);
annotation.myView.layer.transform = transformZ;
Anyone else seen this and anyone got any suggestions on how to fix it on iOS6?
I had an identical problem so my workaround may work for you. I've also submitted a bug to Apple on it. For me, every time the map got panned by the user the Annotations would get "unrotated".
In my code I set the rotations using CGAffineTransformMakeRotation, and I don't set it in viewForAnnotation but whenever the user's location gets updated. So that is a bit different from you.
My workaround was to add an additional minor rotation at the bottom of my viewForAnnotation method.
if (is6orMore) {
    [annView setTransform:CGAffineTransformMakeRotation(.001)]; //iOS6 BUG WORKAROUND !!!!!!!
}
So for you, I'm not sure if that works, since you are rotating differently and doing it in viewForAnnotation. But give it a try.
Took me forever to find and I just happened across this fix.

About finding pupil in a video

I am now working on an eye-tracking project, in which I am tracking eyes in a webcam video (resolution of 640x480).
I can locate and track the eye in every frame, but I need to locate the pupil. I have read a lot of papers, and most of them refer to Alan Yuille's deformable template method to extract and track the eye features. Can anyone help me with code for this method in any language (Matlab/OpenCV)?
I have tried different thresholds, but due to the low resolution of the eye regions, it does not work very well. I would really appreciate any kind of help with finding the pupil, or even the iris, in the video.
What you need to do is convert your webcam to a near-infrared (NIR) cam; there are plenty of tutorials online for that.
An image taken from an NIR cam will look something like this:
You can then use OpenCV as follows (see the sketch after these steps):
Threshold the image.
Apply the erode function.
Flood-fill the image with some color, taking a corner as the seed point.
Eliminate the holes and invert the image.
Compute the distance transform (in OpenCV, the distance to the nearest zero pixel).
Find the coordinate of the maximum value and draw a circle there.
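A minimal Python sketch of these steps with OpenCV; the file name, threshold value, and kernel size are assumptions, and the constant names are those of OpenCV 3+:
import cv2
import numpy as np

# hypothetical NIR eye image, loaded as grayscale
img = cv2.imread('eye_nir.png', cv2.IMREAD_GRAYSCALE)

# 1) threshold: the dark pupil becomes white in the binary image
_, binary = cv2.threshold(img, 50, 255, cv2.THRESH_BINARY_INV)

# 2) erode to remove small noise blobs
binary = cv2.erode(binary, np.ones((3, 3), np.uint8), iterations=2)

# 3) + 4) flood-fill from a corner, then combine with the inverse to close holes
mask = np.zeros((binary.shape[0] + 2, binary.shape[1] + 2), np.uint8)
filled = binary.copy()
cv2.floodFill(filled, mask, (0, 0), 255)
closed = binary | cv2.bitwise_not(filled)

# 5) distance to the nearest zero pixel
dist = cv2.distanceTransform(closed, cv2.DIST_L2, 5)

# 6) the maximum lies near the pupil centre; its value approximates the radius
_, max_val, _, max_loc = cv2.minMaxLoc(dist)
out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
cv2.circle(out, max_loc, int(max_val), (0, 255, 0), 2)
cv2.imwrite('pupil.png', out)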
If you're still working on this, check out my OptimEyes project: https://github.com/LukeAllen/optimeyes
It uses Python with OpenCV, and works fairly well with images from a 640x480 webcam. You can check out the "Theory Paper" and demo video on that page also. (It was a class project at Stanford earlier this year; it's not very polished but we made some attempts to comment the code.)
Depending on the application for tracking the pupil I would find a bounding box for the eyes and then find the darkest pixel within that box.
A sketch in Python (find_left_eye / find_right_eye are hypothetical detectors returning x, y, w, h boxes; both crops are assumed to be the same size):
import numpy as np

lx, ly, w, h = find_left_eye(image)    # hypothetical detector
rx, ry, w, h = find_right_eye(image)   # hypothetical detector

left = image[ly:ly + h, lx:lx + w].astype(np.float32)
right = image[ry:ry + h, rx:rx + w].astype(np.float32)

# average the two eye patches, then take the darkest pixel's offset
average = (left + right) / 2
dy, dx = np.unravel_index(np.argmin(average), average.shape)

left_pupil = (lx + dx, ly + dy)
right_pupil = (rx + dx, ry + dy)
Building on the first answer, by Anirudth:
Just apply the HoughCircles function after the thresholding step (step 2).
Then you can directly draw a circle around the pupil, and from its radius (r) and centre (x, y) you can easily locate the centre of the eye.
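A sketch of that suggestion in Python with OpenCV; the parameter values are guesses that would need tuning, binary is assumed to be the thresholded image from step 2, and out a colour copy to draw on:
import cv2
import numpy as np

circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=15, minRadius=5, maxRadius=40)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(out, (x, y), r, (0, 255, 0), 2)  # pupil outline
        cv2.circle(out, (x, y), 2, (0, 0, 255), 3)  # centre point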

Strange OpenCV Distance Transform Results

I'm trying to run a distance transform on a thresholded binary image in order to assist anomaly detection (my hope is that I can detect large changes around the edges of the object). However, for some reason, upon running my distance transform script, I'm getting a strange banding type of effect. I tested something similar with the distance transform demo script in the samples directory, with the same results. One possible reason I came up with is that the distance values go beyond the 0-255 scale and are therefore essentially wrapped modulo 256 to keep them within bounds. Has anyone had any experience with this who could advise?
I have posted images and code on my blog if that helps.
Thanks in advance,
Ian
One quick way to test your theory: try with a grey-scale image whose values are muted (all values v --> 128 + (v - 128)/32, or something similar) and see if that makes the bands much wider or eliminates them completely.
It's always a good idea to nail down what the problem is first, and then try to fix it.
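A sketch of that test in Python, assuming the banding comes from the 8-bit display of the distance values (the file name and the OpenCV 3+ constant names are assumptions):
import cv2
import numpy as np

binary = cv2.imread('thresholded.png', cv2.IMREAD_GRAYSCALE)  # hypothetical binary input

# float output, so the transform itself cannot wrap at 255
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)

# mute the values (v -> 128 + (v - 128)/32); if the banding is modulo
# wrap-around, the bands should now be ~32x wider or vanish entirely
muted = np.clip(128 + (dist - 128) / 32, 0, 255).astype(np.uint8)
cv2.imshow('muted distance', muted)
cv2.waitKey(0)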
I can't help with the code, but I'd like to point out that the expected result on your blog is probably incorrect as well: look at the sharp black-grey border in the bottom part of the large object. It should not be there, as the maximum difference between two adjacent pixels of a distance transform should be 1.
