I have some C++ code that performs calculations and I would like to visualize it.
I'm using windows forms (.NET).
The idea is to perform the calculations in C++ and to include a .h file containing the chart.
As I need fast updates, I use a timer. Since my data lives on the C++ side, I need some tricks to draw it from the header. I was advised to use the BeginInvoke() method; here's my prototype code from the header:
System::Void ActionD()
{
    for (pts = 0; pts < arrlength; pts++) {
        chart1->series1->Points->AddXY(test_array_x[pts], test_array_y[pts]);
    }
}
private:
    System::Void timer1_Tick(System::Object^ sender, System::EventArgs^ e) {
        MethodInvoker^ mi = gcnew MethodInvoker(this, &ActionD);
        chart1->Invoke(mi);
        // check if timer works:
        Beep(300, 500);
    }
I get this error: "...MethodInvoker: a delegate constructor expects 1 argument"
My question is whether the general concept of the code is correct, and how I can fix that error.
The C++/CLI compiler in older versions of VS doesn't produce a very good diagnostic for bad delegate constructor calls. The issue is with &ActionD; it needs to be a fully qualified method name, like this:
MethodInvoker^ mi = gcnew MethodInvoker(this, &Form1::ActionD);
Replace "Form1" with the name of your form class if necessary.
And no, the general concept is not correct. You are using a regular Winforms timer, so there's no need at all to use BeginInvoke; the code already runs on the main thread. Nor would you gain anything by using an asynchronous timer class, since it doesn't make the code any faster.
You make your chart fast by filtering the data, keeping only the points in the series that you actually need to draw an accurate chart. That doesn't take a lot of points; a few hundred up to a thousand is more than enough. Monitors don't have a lot of pixels, so using many thousands just keeps the Chart control busy for no benefit. Doing that filtering in a worker thread is the way to get ahead.
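As a rough illustration of that filtering idea (sketched in C# for brevity; translating it to C++/CLI is mechanical, and the array names are taken from the question, while Decimate is a hypothetical helper), a simple stride-based decimation could look like this:
using System;
using System.Collections.Generic;

static List<KeyValuePair<double, double>> Decimate(double[] xs, double[] ys, int maxPoints)
{
    // Keep roughly maxPoints points by taking every n-th sample.
    // A min/max-per-bucket scheme would preserve peaks better.
    var result = new List<KeyValuePair<double, double>>(maxPoints);
    int stride = Math.Max(1, xs.Length / maxPoints);
    for (int i = 0; i < xs.Length; i += stride)
        result.Add(new KeyValuePair<double, double>(xs[i], ys[i]));
    return result;
}

// In the Tick handler (already on the UI thread, so no Invoke is needed):
// var pts = Decimate(test_array_x, test_array_y, 1000);
// chart1.Series[0].Points.Clear();
// foreach (var p in pts)
//     chart1.Series[0].Points.AddXY(p.Key, p.Value);
You would run Decimate on the worker thread and only touch the Chart with the reduced set.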
I've found a somewhat similar topic:
How can I update data in a chart in execution time (in C++ builder)?
So I'm doing this inside my timer:
System::Windows::Forms::DataVisualization::Charting::Series^ seriezz1 = chart1->Series[0];
seriezz1->Points->AddXY(test_array_x[pts], test_array_y[pts]);
It compiles, but crashes at start :(
I'm using OxyPlot in my WPF application as a line recorder. It's like the LiveDemo example.
On a large visible data set, I get UI performance issues and the whole application may even freeze. It seems to be PlotModel.InvalidatePlot being called with too many points too often, but I haven't found a better way.
In depth:
Using OxyPlot 2.0.0
I do everything in code in the PlotModel. The XAML PlotView only binds to the PlotModel.
I cyclically collect data in a thread and put it into a DataSource (a List of Lists which are the ItemsSource for the LineSeries).
I have a class which cyclically calculates, in a thread, the presentation for the x and y axes and a bit more. After all of this, it calls PlotModel.InvalidatePlot.
If I
have more than 100k points on the display (no matter whether they are spread over multiple LineSeries or not)
and add 1 DataPoint per LineSeries every 500 ms
and call PlotModel.InvalidatePlot every 200 ms,
then not only does the PlotView have performance issues, the whole window is also very slow to react, even if I call PlotModel.InvalidatePlot(false).
My goal
My goal would be that the window / application keeps working normally. It should not hang because of a line recorder. The best would be if it had no performance issues at all, but I'm skeptical.
What I have found or tested
OxyPlot has Performance guidelines. I'm using ItemsSource with DataPoints. I have also tried adding them directly to LineSeries.Points, but then the plot doesn't refresh anyway (even with an ObservableCollection), so I have to call PlotModel.InvalidatePlot, which results in the same effect. I cannot bind to a defined LineSeries in XAML because I don't know how many lines there will be. Maybe I missed something about adding the points directly?
I have also found GitHub issue 1286, which describes a related problem, but that workaround is slower in my tests.
I have also measured the time elapsed in the call to PlotModel.InvalidatePlot, but the number of points does not affect it.
I have checked the UI thread and it seems to have trouble handling this large set of points.
If I zoom in on the plot so that under 20k points are displayed, it behaves fine.
Question:
Is there a way to handle this better, except to call PlotModel.InvalidatePlot much less?
Restrictions:
I also must update axes and annotations, so I don't think I can avoid calling PlotModel.InvalidatePlot.
I have found that using the OxyPlot Windows Forms implementation, and then displaying it via Windows Forms integration in WPF, gives much better performance.
e.g.
var plotView = new OxyPlot.WindowsForms.PlotView();
plotView.Model = Plot;
var host = new System.Windows.Forms.Integration.WindowsFormsHost();
host.Child = plotView;
PlotContainer = host;
Where 'Plot' is the PlotModel you call InvalidatePlot() on.
And then in your XAML:
<ContentControl Content="{Binding PlotContainer}"/>
Or however else you want to use your WindowsFormsHost.
I have a similar problem and found that you can use a Decimator in LineSeries. It is documented in the examples: LineSeriesExamples.cs
The usage is like this:
public static PlotModel WithXDecimator()
{
    var model = new PlotModel { Title = "LineSeries with X Decimator" };
    var s1 = CreateSeriesSuitableForDecimation();
    s1.Decimator = Decimator.Decimate;
    model.Series.Add(s1);
    return model;
}
This may solve the problem on my side, and I hope it helps others too. Unfortunately it is not covered in the documentation.
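CreateSeriesSuitableForDecimation comes from the OxyPlot example project; if you just want something to try the Decimator with, a hypothetical stand-in (the point count and curve shape are arbitrary) could be:
using System;
using OxyPlot;
using OxyPlot.Series;

private static LineSeries CreateSeriesSuitableForDecimation()
{
    // The Decimator only pays off when the series holds far more points
    // than the plot area has horizontal pixels.
    var s = new LineSeries();
    const int n = 200000;
    for (int i = 0; i < n; i++)
    {
        double x = i / (double)n;
        s.Points.Add(new DataPoint(x, Math.Sin(100 * x)));
    }
    return s;
}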
For the moment I ended up calculating the time at which to call InvalidatePlot the next time. I calculate it with the method given in this answer, which returns the number of visible points. This reduces the performance issue, but doesn't fix the blocking of the UI thread while InvalidatePlot runs. A rough sketch of that throttling idea follows.
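A minimal sketch of such a throttle, assuming the PlotModel lives in a view-model class; GetVisiblePointCount() stands in for the method from the linked answer, and the interval formula is just an arbitrary example:
using System;
using System.Diagnostics;
using OxyPlot;

public class RecorderViewModel
{
    public PlotModel Plot { get; } = new PlotModel();

    private readonly Stopwatch sinceLastInvalidate = Stopwatch.StartNew();

    // Call this whenever new samples arrive.
    public void OnNewData()
    {
        int visible = GetVisiblePointCount();

        // Roughly 1 ms of refresh budget per 1000 visible points,
        // clamped between 100 ms and 2 s (arbitrary numbers).
        double minIntervalMs = Math.Max(100, Math.Min(2000, visible / 1000.0));

        if (sinceLastInvalidate.ElapsedMilliseconds >= minIntervalMs)
        {
            sinceLastInvalidate.Restart();
            Plot.InvalidatePlot(true);
        }
    }

    private int GetVisiblePointCount()
    {
        // Placeholder: count the points whose X value lies inside the
        // visible range of the X axis.
        return 0;
    }
}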
I'm looking for a method to wait for the GPU to finish its work in DirectX9. Something equivalent to the glFinish command in OpenGL...
I already know that it's not something I should do, but I have to! I'm writing a threaded graphics engine integrated in WPF, and I need to do a sort of off-screen rendering in order to give a valid surface to a D3DImage. The frames take a long time to compute (more than 100 ms), and the rendering of the WPF Image sometimes occurs while the frame is not yet fully computed by my engine, even if I lock everything the right way. I'm almost sure it's just a Finish issue, but I didn't find out how to do that.
So far, I tried to launch a DX9 query like this:
using namespace SlimDX.Direct3D9;

public class GraphicsDevice : Device
{
    ...
    public void Finish()
    {
        var query = new Query(this, QueryType.Event);
        EndScene();
        while (!query.CheckStatus(true)) ;
    }
}
But it does not seem to work...
So, first question without talking about WPF, do you know how to wait for the GPU to finish what has been sent to the driver?
Thanks!
This was the solution.
I was not aware that it actually works!
I used an EventQuery to 'mark' my last call to the GPU.
Then I put some kind of infinite loop flushing the GPU instructions and waiting for the EventQuery to be finally fired by the GPU, using the GetData/CheckStatus methods.
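For reference, a minimal sketch of that pattern with SlimDX's Direct3D9 wrapper (assuming it exposes Issue and CheckStatus the way the native IDirect3DQuery9 API does; note that the earlier snippet never issued the query, which is likely why it never signalled):
using SlimDX.Direct3D9;

// Wait until the GPU has processed everything submitted so far.
static void WaitForGpu(Device device)
{
    using (var query = new Query(device, QueryType.Event))
    {
        // Put the event marker into the command stream *after* the work
        // we want to wait for.
        query.Issue(Issue.End);

        // CheckStatus(true) flushes the command buffer and returns true
        // once the GPU has reached the marker.
        while (!query.CheckStatus(true))
        {
            System.Threading.Thread.Yield();
        }
    }
}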
I am using the SharpDX.WPF project for the WPF abilities; it seems like an easy-to-understand, low-overhead library compared to the Toolkit that comes with SharpDX (which has the same issue!).
First: I fixed the SharpDX.WPF project for the latest SharpDX using the following: https://stackoverflow.com/a/19791534/442833
Then I made the following hacky adjustment to DXElement.cs, a solution that was also done here:
private Query queryForCompletion;

public void Render()
{
    if (Renderer == null || IsInDesignMode)
        return;

    var test = Renderer as D3D11;
    if (queryForCompletion == null)
    {
        queryForCompletion = new Query(test.Device,
            new QueryDescription { Type = QueryType.Event, Flags = QueryFlags.None });
    }
    Renderer.Render(GetDrawEventArgs());
    Surface.Lock();
    test.Device.ImmediateContext.End(queryForCompletion);
    // wait until drawing completes
    bool completed;
    var counter = 0;
    while (!(test.Device.ImmediateContext.GetData(queryForCompletion, out completed)
             && completed))
    {
        Console.WriteLine("Yielding..." + ++counter);
        Thread.Yield();
    }
    //Surface.Invalidate();
    Surface.AddDirtyRect(new Int32Rect(0, 0, Surface.PixelWidth, Surface.PixelHeight));
    Surface.Unlock();
}
Then I render 8000 cubes in a cube pattern...
Yielding...
gets printed to the console quite often, but the flickering is still there.
I am assuming that WPF is nice enough to show the image using a different thread before the rendering is done, not sure though...
This same issue also happens when I use the Toolkit variant of WPF support with SharpDX.
Images demonstrating the issue (screenshots in the original post): Bad, Better, Almost, Intended.
Note: It randomly switches between these old images. I am also using really old hardware, which makes the flickering much more apparent (GeForce Quadro FX 1700).
I made a repo which contains the exact same source code I am using to reproduce this issue:
https://github.com/ManIkWeet/FlickeringIssue/
Related to D3DImage locking, note that the D3DImage.TryLock API has rather unconventional semantics which most developers would not expect:
Beware!
You must call Unlock even in the case where TryLock indicates failure (i.e., returns false)
Although perhaps more of an alarming design choice than a bug per se, misunderstanding this behavior will trivially result in D3DImage deadlocks and hangs, and thus might be responsible for much of the frustration people experience in attempting to get D3DImage working properly.
The following code is a correct WPF D3D render with no flicker in my app:
void WPF_D3D_render(IntPtr pSurface)
{
    if (TryLock(new Duration(default(TimeSpan))))
    {
        SetBackBuffer(D3DResourceType.IDirect3DSurface9, pSurface);
        AddDirtyRect(new Int32Rect(0, 0, PixelWidth, PixelHeight));
    }
    Unlock(); // <--- !
}
Yes, this unintuitive code is actually correct; it is the case that D3DImage.TryLock(0) leaks one internal D3D buffer lock every time it returns failure. You don't have to take my word for it; here's the CLR code from PresentationCore.dll v4.0.30319:
private bool LockImpl(Duration timeout)
{
    bool flag = false;
    if (_lockCount == uint.MaxValue)
        throw new InvalidOperationException();
    if (_lockCount == 0)
    {
        if (timeout == Duration.Forever)
            flag = _canWriteEvent.WaitOne();
        else
            flag = _canWriteEvent.WaitOne(timeout.TimeSpan, false);
        UnsubscribeFromCommittingBatch();
    }
    _lockCount++;
    return flag;
}
Notice that the internal _lockCount field is incremented regardless of whether the function returns success or failure. You have to call Unlock() yourself, as shown in the first code example above, if you want to avoid certain deadlock. Failing to do so creates a bug that is nasty to debug, too, because the component won't (potentially) deadlock until the next render pass, by which time the relevant evidence is long gone.
The unusual behavior does not seem to be mentioned at MSDN, but to be fair, that documentation doesn't note that you have to call Unlock() if the call is successful, either.
The problem is not the locking mechanism. Normally you use Present() to present the image, and Present will wait until all drawing is ready. With D3DImage you are not using the Present() method. Instead of presenting, you lock, add a DirtyRect, and unlock the D3DImage.
The rendering is done asynchronously, so when you unlock, the draw actions might not be finished yet. This is what causes the flicker effect; sometimes you see items half drawn. A poor solution (I've tested it) is adding a small delay before unlocking. It helped a little, but it wasn't a neat solution. It was terrible!
Solution:
I continued with something else: I was experimenting with MSAA (antialiasing), and the first problem I faced was that MSAA cannot be done on the dx11/dx9 shared texture, so I decided to render to a new (dx11) texture and copy the result to the dx9 shared texture. I slammed my head on the table, because now it was anti-aliased AND flicker-free!! Don't forget to call Flush() before adding a dirty rect.
So, creating a copy of the texture with DXDevice11.Device.ImmediateContext.ResolveSubresource(_dx11RenderTexture, 0, _dx11BackpageTexture, 0, ColorFormat); (where _dx11BackpageTexture is the shared texture) will wait until the rendering is ready and will create the copy.
This is how I got rid of the flickering....
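As a compressed sketch of that setup (SharpDX Direct3D11; the texture format, sample count, and field names here are assumptions, not the exact code from this answer):
using SharpDX.Direct3D11;
using SharpDX.DXGI;

// Create the MSAA render target that the scene is drawn into.
Texture2D CreateMsaaRenderTarget(Device device, int width, int height)
{
    return new Texture2D(device, new Texture2DDescription
    {
        Width = width,
        Height = height,
        MipLevels = 1,
        ArraySize = 1,
        Format = Format.B8G8R8A8_UNorm,
        SampleDescription = new SampleDescription(4, 0), // 4x MSAA
        Usage = ResourceUsage.Default,
        BindFlags = BindFlags.RenderTarget,
        CpuAccessFlags = CpuAccessFlags.None,
        OptionFlags = ResourceOptionFlags.None
    });
}

// After drawing into _dx11RenderTexture (argument order mirrors the call quoted above):
// device.ImmediateContext.ResolveSubresource(
//     _dx11RenderTexture, 0, _dx11BackpageTexture, 0, Format.B8G8R8A8_UNorm);
// device.ImmediateContext.Flush(); // flush before AddDirtyRect on the D3DImage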
I think you are not locking properly. As far as I understand the MSDN documentation, you are supposed to lock during the entire rendering, not just at the end of it:
While the D3DImage is locked, your application can also render to the Direct3D surface assigned to the back buffer.
The information you find on the net about D3DImage/SharpDX is somewhat confusing, because the SharpDX guys don't really like the way D3DImage is implemented (can't blame them), so there are statements about this being a "bug" on Microsoft's side when it's actually just improper usage of the API.
Yes, locking during rendering has performance issues, but it is probably not possible to fix them without porting WPF to DirectX11 and implementing something like a SwapChainPanel which is available in UWP apps. (WPF itself still runs on DirectX9)
If the locking is a performance issue for you, one idea I had (but never tested) is that you could render to an offscreen surface and reduce the lock duration to just copying that surface over to the D3DImage. No idea if that would help performance-wise, but it's something to try.
I'm working on a pet project solely for the purpose of learning a few APIs. It's not intended to have practical value, but rather to be a relatively simple exercise to get me comfortable with libpcap, gtk+, and cairo before I use them for anything serious. This is a graphical program, implemented in C and using Gtk+ 2.x. It's eventually going to read frames with pcap (currently I just have a hardcoded test frame), then use cairo to generate pretty pictures using color values generated from the raw packet (at this stage, I'm just using cairo_show_text to print a text representation of the frame or packet). The pictures will then be drawn to a custom widget inheriting from GtkDrawingArea.
My first step, of course, is to get a decent grasp of the Gtk+ runtime environment so I can implement my widget. I've already managed to render and draw text using cairo to my custom widget. Now I'm at the point where I think the widget really needs private storage for things like the cairo_t context pointer and a GdkRegion pointer (I had not planned to use Gdk directly, but my research indicates that it may be necessary in order to call gdk_window_invalidate_region() to force my DrawingArea to refresh once I've drawn a frame, not to mention gdk_cairo_create()). I've set up private storage as a global variable (the horror! Apparently this is conventional for Gtk+. I'm still not sure how this will even work if I have multiple instances of my widget, so maybe I'm not doing this part right. Or maybe the preprocessor macros and runtime environment are doing some magic to give each instance its own copy of this struct?):
/* private data */
typedef struct _CandyDrawPanePrivate CandyDrawPanePrivate;

struct _CandyDrawPanePrivate {
    cairo_t *cr;
    GdkRegion *region;
};

#define CANDY_DRAW_PANE_GET_PRIVATE(obj) \
    (G_TYPE_INSTANCE_GET_PRIVATE((obj), CANDY_DRAW_PANE_TYPE, CandyDrawPanePrivate))
Here's my question: Initializing the pointers in my private data struct depends on members inherited from the parent, GtkWidget:
/* instance initializer */
static void candy_draw_pane_init(CandyDrawPane *pane) {
    GdkWindow *win = NULL;

    /*win = gtk_widget_get_window((GtkWidget *)pane);*/
    win = ((GtkWidget*)pane)->window;
    if (!win)
        return;

    /* TODO: I should probably also check this return value */
    CandyDrawPanePrivate *priv = CANDY_DRAW_PANE_GET_PRIVATE(((CandyDrawPane*)pane));

    priv->cr = gdk_cairo_create(win);
    priv->region = gdk_drawable_get_clip_region(win);

    candy_draw_pane_update(pane);
    g_timeout_add(1000, candy_draw_pane_update, pane);
}
When I replaced my old code, which called gdk_cairo_create() and gdk_drawable_get_clip_region() during my event handlers, with this code, which calls them during candy_draw_pane_init(), the application would no longer draw. Stepping through with a debugger, I can see that pane->window and pane->parent are both NULL pointers while we are within candy_draw_pane_init(). The pointers are valid later, in the Gtk event processing loop. This leads me to believe that the inherited members have not yet been initialized when my derived class' "_init()" method is called. I'm sure this is just the nature of the Gtk+ runtime environment.
So how is this sort of thing typically handled? I could add logic to my event handlers to check priv->cr and priv->region for NULL, and call gdk_cairo_create() and gdk_drawable_get_clip_region() if they are still NULL. Or I could add a "post-init" method to my CandyDrawPane widget and call it explicitly after I call candy_draw_pane_new(). I'm sure lots of other people have encountered this sort of scenario, so is there a clean and conventional way to handle it?
This is my first real foray into object-oriented C, so please excuse me if I'm using any terminology incorrectly. I think one source of my confusion is that Gtk has separate concepts of instance and class initialization. C++ may do something similar "under the hood," but if so, it isn't as obvious to the coder.
I have a feeling that if this were C++, most of the code that's going into candy_draw_pane_init() would be in the class constructor, and any secondary initialization that depended on the constructor having completed would go into an "Init()" method (which of course is not a feature of the language, but just a commonly used convention). Is there an analogous convention for Gtk+? Or perhaps someone can give a good overview of the flow of control when these widgets are instantiated. I have not been very impressed with the quality of the official Gnome documentation. Much of it is either too high-level, contains errors and typos in code, or has broken links or missing examples. And of course the heavy use of macros makes it a little harder to follow even my own code (in this respect it reminds me of Win32 GUI development). In short, I'm sure I can struggle through this on my own and make it work, but I'd like to hear from someone experienced with Gtk+ and C what the "right" way to do this is.
For completeness, here is the header where I set up my custom widget:
#ifndef __GTKCAIRO_H__
#define __GTKCAIRO_H__ 1
#include <gtk/gtk.h>
/* Following tutorial; see gtkcairo.c */
/* Not sure about naming convention; may need revisiting */
G_BEGIN_DECLS
#define CANDY_DRAW_PANE_TYPE (candy_draw_pane_get_type())
#define CANDY_DRAW_PANE(obj) (G_TYPE_CHECK_INSTANCE_CAST ((obj), CANDY_DRAW_PANE_TYPE, CandyDrawPane))
#define CANDY_DRAW_PANE_CLASS(klass) (G_TYPE_CHECK_CLASS_CAST ((klass), CANDY_DRAW_PANE_TYPE, CandyDrawPaneClass))
#define IS_CANDY_DRAW_PANE(obj) (G_TYPE_CHECK_INSTANCE_TYPE ((obj), CANDY_DRAW_PANE_TYPE))
#define IS_CANDY_DRAW_PANE_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE ((klass), CANDY_DRAW_PANE_TYPE))
// official gtk tutorial, which seems to be of higher quality, does not use this.
// #define CANDY_DRAW_PANE_GET_CLASS(obj) (G_TYPE_INSTANCE_GET_CLASS ((obj), CANDY_DRAW_PANE_TYPE, CandyDrawPaneClass))
typedef struct {
    GtkDrawingArea parent;
    /* private */
} CandyDrawPane;

typedef struct {
    GtkDrawingAreaClass parent_class;
} CandyDrawPaneClass;
/* method prototypes */
GtkWidget* candy_draw_pane_new(void);
GType candy_draw_pane_get_type(void);
void candy_draw_pane_clear(CandyDrawPane *cdp);
G_END_DECLS
#endif
Any insight is much appreciated. I do realize I could use a code-generating IDE and crank something out more quickly, and probably dodge having to deal with some of this stuff, but the whole point of this exercise is to get a good grasp of the Gtk runtime, so I'd prefer to write the boilerplate by hand.
This article, A Gentle Introduction to GObject Construction, may help you. Here are some tips that I thought of while looking at your code and your questions:
If your priv->cr and priv->region pointers have to change whenever the widget's GDK window changes, then you could also move that code into a signal handler for the notify::window signal. notify is a signal that fires whenever an object's property is changed, and you can narrow down the signal emission to listen to a specific property by appending it to the name of the signal like that.
You don't need to check the return value from the GET_PRIVATE macro. Looking at the source code for g_type_instance_get_private(), it can return NULL in the case of an error, but it's really unlikely, and will print warnings to the terminal. My feeling is that if GET_PRIVATE returns NULL then something has gone really wrong and you won't be able to recover and continue executing the program anyway.
You're not setting up private storage as a global variable. Where are you declaring this global variable? I only see a struct and typedef declaration at the global level. What you are most likely doing, and what is the usual practice, is calling g_type_class_add_private() in the class_init function. This reserves space within each object for your private struct. Then when you need to use it, g_type_instance_get_private() gives you a pointer to this space.
The init method is the equivalent to a constructor in C++. The class_init method has no equivalent, because all the work done there is done behind the scenes in C++. For example, in a class_init function, you might specify which functions override the parent class's virtual functions. In C++, you simply do this by defining a method in the class with the same name as the virtual method you want to override.
As far as I can tell, the only problem with your code is the fact that the GdkWindow of a GtkWidget (widget->window) is only set when the widget has been realized, which normally happens when gtk_widget_show is called. You can tell it to realize earlier by calling gtk_widget_realize, but the documentation recommends connecting to the draw or realize signal instead.
Is there some reason that identical math operations would take significantly longer in one Silverlight app than in another?
For example, I have some code that takes a list of points and transforms them (scales and translates them) and populates another list of points. It's important that I keep the original points intact, hence the second list.
Here's the relevant code (scale is a double and origin is a point):
public Point transformPoint(Point point) {
    // scale, then translate the x
    point.X = (point.X - origin.X) * scale;
    // scale, then translate the y
    point.Y = (point.Y - origin.Y) * scale;
    // return the point
    return point;
}
Here's how I'm doing the loop and timing, in case it's important:
DateTime startTime = DateTime.Now;
foreach (Point point in rawPoints) transformedPoints.Add(transformPoint(point));
Debug.Print("ASPX milliseconds: {0}", (DateTime.Now - startTime).Milliseconds);
On a run of 14356 points (don't ask, it's modeled off a real world number in the desktop app), the breakdown is as follows:
Silverlight app #1: 46 ms
Silverlight app #2: 859 ms
The first app is an otherwise empty app that is doing the loop in the MainPage constructor. The second is doing the loop in a method in another class, and the method is called during an event handler in the GUI thread, I think. But should any of that matter, considering that identical operations are happening within the loop itself?
There maybe something huge I'm missing in how threading works or something, but this discrepancy doesn't make sense to me at all.
In addition to the other comments and answers I'm going to read between the lines a little.
In the first app you have pretty much this code in isolation, running in the MainPage constructor. IOW, you've created a fresh Silverlight app, slapped this code in it, and that's it.
In the second app you have more actual real-world stuff. At the very least you have this code running as the result of a button click on a rudimentary UI. Therein lies the clue.
Take a blank app and drop a button on it. Run it and click the button; what does the button do? There are animations attached to the visual states of the button. These animations (or other animations or loops) are likely running in parallel with your code when you click the button. Timers (whether you do it properly with Stopwatch or not) record elapsed wall-clock time, not just the time your thread takes. Hence, when other threads are doing other things (like animations), your timing will be off.
My first suspicion would be that Silverlight App #2 triggers a garbage collection. Scaling ~15,000 points should be taking a millisecond, not nearly a second.
Try to reduce memory allocations in your code. Can transformedPoints be an array, rather than a dynamically grown data structure?
You can also look at the GC performance counters, but simply reducing the memory allocation may turn out to be simpler.
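As a small sketch of that suggestion (preallocating the output and, while you're at it, timing with Stopwatch); transformPoint is the method from the question, and this is assumed to live in the same class that defines scale and origin:
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Windows;

// Preallocate the output so the loop itself allocates nothing per point.
Point[] TransformAll(IList<Point> rawPoints)
{
    var transformed = new Point[rawPoints.Count];

    var sw = Stopwatch.StartNew();
    for (int i = 0; i < rawPoints.Count; i++)
        transformed[i] = transformPoint(rawPoints[i]);
    sw.Stop();

    Debug.WriteLine("Transform took " + sw.ElapsedMilliseconds + " ms");
    return transformed;
}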
Could it be that your code is not being inlined by the CLR in the app that is running slower?
I'm not sure how the CLR in SL handles inlining, but here is a link to some of the prerequisites for inlining in 3.5 SP1.
http://udooz.net/blog/2009/04/clr-improvements-in-net-35-sp1/