ToolTip in ZedGraph refreshes continuously and uses a significant amount of CPU - winforms

I am using ZedGraph to show Japanese candlesticks. I set GraphPane.isShowPointValue = true, but when I move the mouse over a candle, the tooltip keeps refreshing over and over.
I find that while the tooltip is shown, the application consistently uses more than 50% of the CPU.
How can I solve this?

At this point there is a newer version of ZedGraph that fixes this problem (currently v5.1.7):
https://www.nuget.org/packages/ZedGraph/

I had the same issue with an application developed several years ago for Windows XP, when users started migrating to Windows 7.
The patch mentioned in the other answer did not help me, so I wrote a quick and dirty workaround:
double prevMouseX = 0; // previous cursor position
double prevMouseY = 0;

private bool ZedGraphControl1MouseMoveEvent(ZedGraphControl sender, MouseEventArgs e)
{
    PointF mousePt = new PointF(e.X, e.Y);
    GraphPane pane = sender.MasterPane.FindChartRect(mousePt);
    if (pane != null)
    {
        double x, y;
        pane.ReverseTransform(mousePt, out x, out y);
        if ((x == prevMouseX) && (y == prevMouseY))
        {
            // Do nothing if the mouse position didn't change
            return false;
        }
        else
        {
            prevMouseX = x;
            prevMouseY = y;
        }
        // Our code for the toolTip goes here
        ...
    }
    return false;
}
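For completeness, the handler above is wired up like any other ZedGraphControl event. A minimal sketch, assuming the point-value tooltip is enabled on the control itself (the control name is illustrative):

// In the form's constructor, after InitializeComponent():
zedGraphControl1.IsShowPointValues = true;                          // keep the built-in tooltip
zedGraphControl1.MouseMoveEvent += ZedGraphControl1MouseMoveEvent;  // throttle it with the handler above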

Take a look at this; the patch described at the link below might resolve the issue:
http://sourceforge.net/tracker/?func=detail&aid=3061209&group_id=114675&atid=669144

Related

Why am I missing controls with bit

Simple problem.
I have a form to which I add a panel containing one label with some text. When I look at the saved image, all I see is the panel; the label's text doesn't appear. I have tried all the solutions I could find; code below. If I can get the label to show up, I can do everything else I need.
What am I doing wrong?
int x = SystemInformation.WorkingArea.X;
int y = SystemInformation.WorkingArea.Y;
int width = printPanel.Width;
int height = printPanel.Height;
Rectangle bounds = new Rectangle(x, y, width, height);

using (Bitmap flag = new Bitmap(width, height))
{
    printPanel.DrawToBitmap(flag, bounds);
    if (Environment.UserName == "grimesr")
    {
        string saveImage = Path.Combine(fileStore, "./p" + ".png");
        flag.Save(saveImage);
    }
}
Really not sure where you're going wrong.
Here's a simple test:
private void button1_Click(object sender, EventArgs e)
{
    int width = printPanel.Width;
    int height = printPanel.Height;
    Rectangle bounds = new Rectangle(0, 0, width, height);
    Bitmap flag = new Bitmap(width, height);
    printPanel.DrawToBitmap(flag, bounds);
    pictureBox1.Image = flag;
}
It grabs the entire panel and puts the image into the PictureBox to the right of it.
Thanks for the hints, even if they weren't directly related. The print button helped me figure this out. The button code worked as desired, but putting the same code where I had it wasn't working. I then noticed that an InvalidOperationException was being raised, and looking into that in more detail showed me the real issue.
I must admit I left out one tiny piece of information that was critical: I was trying to do this in a thread that was feeding my label printer. Of course, trying to use a UI panel control from that thread threw an InvalidOperationException. I moved the code out of the worker thread and all is well now. Cross-thread operations are sometimes subtle, and it can be hard to reason about how they fail.
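For anyone who needs to keep the capture inside the worker thread, one option is to marshal the call back onto the UI thread with Control.Invoke. A rough sketch, reusing names from the code above (the file name is just a placeholder):

// Running on the worker thread that feeds the label printer:
printPanel.Invoke((MethodInvoker)delegate
{
    // DrawToBitmap must run on the thread that owns the control
    using (Bitmap flag = new Bitmap(printPanel.Width, printPanel.Height))
    {
        printPanel.DrawToBitmap(flag, new Rectangle(0, 0, printPanel.Width, printPanel.Height));
        flag.Save(Path.Combine(fileStore, "p.png"));
    }
});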
Problem solved for me.

GetWindowRect returning wrong coordinates

I'm developing a VSTO add-in for Outlook that includes an overlay on top of the Outlook window.
I'm building my UI with WPF.
The problem is that when I try to attach the WPF window (merging left/top/width/height) to the Outlook window while STARTING at a scale above 100%, GetWindowRect returns the wrong rectangle.
BUT when I start the application at 100% scale and then change the Windows scale at runtime to any value, everything works and is DPI aware. In both cases (at startup and at runtime) GetDpiForWindow returns correct values, which is... strange. DPI awareness is set using SetThreadDpiAwarenessContext when the forms are created.
I can't get my head around what's wrong :<. Any advice is appreciated.
Code for attaching:
public void AttachTo(IntPtr src, AttachFlagEnum flags)
{
    var nativeRectangle = new WinAPI.RECT();
    if (!WinAPI.GetWindowRect(src, ref nativeRectangle))
    {
        // throw new Win32Exception(Marshal.GetLastWin32Error());
        return;
    }

    AttachToCoords(
        new Rectangle(
            nativeRectangle.Left,
            nativeRectangle.Top,
            nativeRectangle.Right - nativeRectangle.Left,
            nativeRectangle.Bottom - nativeRectangle.Top),
        flags);
}
Form creation code:
private void ThisAddIn_Startup(object sender, System.EventArgs e)
{
    StateManager.Init();
    OutlookUtils.WaitOutlookLoading();

    using (var ctx = new DPIContextBlock(WinAPI.DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE))
    {
        new Forms.One().Show();
        new Forms.Overlay().Show();
        new Forms.Two();
    }
}
Overlay attach code (executed by a timer):
private void OverlayThink(object ob)
{
    if (Managers.StateManager.OutlookState == OutlookStateEnum.MINIMIZED || Managers.StateManager.UiState == UIStateEnum.DESCWND)
    {
        if (this.IsVisible)
        {
            this.Dispatcher.Invoke(() => this.Hide());
        }
        return;
    }

    this.Dispatcher.Invoke(() => this.AttachTo(Utils.OutlookUtils.GetWordWindow(), AttachFlagEnum.OVERLAY));
    this.Dispatcher.Invoke(() => this.Show());
}
The cause of my problem was that the AttachToCoords method was assigning the coordinates from GetWindowRect directly to the Window's Left/Top/Width/Height. That's wrong, because internally WPF positions its elements in a 96-DPI coordinate system, so I needed to convert the device coordinates before assigning them.
Solution:
private Rectangle TransformCoords(Rectangle coords)
{
    var source = PresentationSource.FromVisual(this);
    coords.X = (int)(coords.X / source.CompositionTarget.TransformToDevice.M11);
    coords.Y = (int)(coords.Y / source.CompositionTarget.TransformToDevice.M22);
    coords.Width = (int)(coords.Width / source.CompositionTarget.TransformToDevice.M11);
    coords.Height = (int)(coords.Height / source.CompositionTarget.TransformToDevice.M22);
    return coords;
}
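With that helper in place, the rectangle coming from GetWindowRect just needs to be run through TransformCoords before it reaches AttachToCoords, roughly:

// Inside AttachTo, after GetWindowRect succeeds:
var deviceRect = new Rectangle(
    nativeRectangle.Left,
    nativeRectangle.Top,
    nativeRectangle.Right - nativeRectangle.Left,
    nativeRectangle.Bottom - nativeRectangle.Top);
AttachToCoords(TransformCoords(deviceRect), flags);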
WPF (as well as Windows Forms) should scale automatically according to the DPI value set on the system. There is no need to calculate the size and position of dialog windows in Outlook add-ins.
Instead, you need to set up the form correctly so it follows the DPI settings, and set the window's parent so it is displayed on top of the Outlook window.
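For example, one way to make a WPF window stay on top of the Outlook window is to set the owner via WindowInteropHelper. A minimal sketch, assuming you already have Outlook's HWND (here taken from the question's GetWordWindow helper; any other way of obtaining the handle works too):

using System.Windows.Interop;

// overlayWindow is the WPF Window instance that should stay above Outlook
var helper = new WindowInteropHelper(overlayWindow);
helper.Owner = Utils.OutlookUtils.GetWordWindow();  // HWND of the window that should own the overlay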

How to detect a 'pinch out' in a list of containers?

I want to be able to pinch two containers in a list of containers away from each other to insert a new empty container between them, similar to how the iPhone app "Clear" inserts new tasks (see, for example, the very first picture on this page: https://www.raywenderlich.com/22174/how-to-make-a-gesture-driven-to-do-list-app-part-33 - the small red container is inserted when the two surrounding containers are pinched away from each other). Any hints on how I can achieve this in Codename One?
Normally you would override the pinch method to implement pinch-to-zoom or similar behavior. However, that won't work in this case, because the pinch gesture crosses component boundaries.
The only way I can think of doing this is to override the pointerDragged(int[], int[]) method in Form and detect that the pinch distance is growing. You can check out the code for pinch handling in Component.java, as it should be a good base for this:
public void pointerDragged(int[] x, int[] y) {
    if (x.length > 1) {
        double currentDis = distance(x, y);

        // prevent division by 0
        if (pinchDistance <= 0) {
            pinchDistance = currentDis;
        }
        double scale = currentDis / pinchDistance;
        if (pinch((float) scale)) {
            return;
        }
    }
    pointerDragged(x[0], y[0]);
}

private double distance(int[] x, int[] y) {
    int disx = x[0] - x[1];
    int disy = y[0] - y[1];
    return Math.sqrt(disx * disx + disy * disy);
}
Adding the entry itself is simple: just place a blank component at the insertion point and grow its preferred size until it reaches the desired size.

Snapping a SurfaceListBox

I'm looking to create a scrolling SurfaceListBox that automatically snaps into position after a drag finishes, so that the item nearest the center of the screen ends up centered in the viewport.
I've gotten the center item, but now, as usual, the way WPF deals with sizes, screen positions, and offsets has me perplexed.
At the moment I've chosen to subscribe to the SurfaceScrollViewer's ManipulationCompleted event, as that seems to consistently fire after I've finished a scroll gesture (whereas the ScrollChanged event tends to fire early).
void ManipCompleted(object sender, ManipulationCompletedEventArgs e)
{
    FocusTaker.Focus(); // reset focus to a dummy element

    List<FrameworkElement> visibleElements = new List<FrameworkElement>();
    for (int i = 0; i < List.Items.Count; i++)
    {
        SurfaceListBoxItem item = List.ItemContainerGenerator.ContainerFromIndex(i) as SurfaceListBoxItem;
        if (ViewportHelper.IsInViewport(item) && (List.Items[i] as string != "Dummy"))
        {
            FrameworkElement el = item as FrameworkElement;
            visibleElements.Add(el);
        }
    }

    int centerItemIdx = visibleElements.Count / 2;
    FrameworkElement centerItem = visibleElements[centerItemIdx];

    double center = ss.ViewportWidth / 2; // ss is the SurfaceScrollViewer
    Point itemPosition = centerItem.TransformToAncestor(ss).Transform(new Point(0, 0));
    double desiredOffset = ss.HorizontalOffset + (center - itemPosition.X);
    ss.ScrollToHorizontalOffset(desiredOffset);

    centerItem.Focus(); // this also doesn't seem to work, but whatever.
}
The list snaps, but where it snaps seems somewhat chaotic. I have a line down the center of the screen, and sometimes it runs right through the middle of the item, but other times it's off to the side or even between items. I can't quite nail it down, but it seems that the first and fourth quartiles of the list work well, while the second and third are progressively more off toward the center.
I'm just looking for some help on how to use positioning in WPF. All the relativity, and the difference between percentage-based coordinates and 'screen-unit' coordinates, has me somewhat confused at this point.
After a lot of trial and error I ended up with this:
void ManipCompleted(object sender, ManipulationCompletedEventArgs e)
{
    FocusTaker.Focus(); // reset focus

    List<FrameworkElement> visibleElements = new List<FrameworkElement>();
    for (int i = 0; i < List.Items.Count; i++)
    {
        SurfaceListBoxItem item = List.ItemContainerGenerator.ContainerFromIndex(i) as SurfaceListBoxItem;
        if (ViewportHelper.IsInViewport(item))
        {
            FrameworkElement el = item as FrameworkElement;
            visibleElements.Add(el);
        }
    }

    Window window = Window.GetWindow(this);
    double center = ss.ViewportWidth / 2;

    double closestCenterOffset = double.MaxValue;
    FrameworkElement centerItem = visibleElements[0];
    foreach (FrameworkElement el in visibleElements)
    {
        double centerOffset = Math.Abs(el.TransformToAncestor(window).Transform(new Point(0, 0)).X + (el.ActualWidth / 2) - center);
        if (centerOffset < closestCenterOffset)
        {
            closestCenterOffset = centerOffset;
            centerItem = el;
        }
    }

    Point itemPosition = centerItem.TransformToAncestor(window).Transform(new Point(0, 0));
    double desiredOffset = ss.HorizontalOffset - (center - itemPosition.X) + (centerItem.ActualWidth / 2);
    ss.ScrollToHorizontalOffset(desiredOffset);
    centerItem.Focus();
}
This block of code effectively determines which visible list element is overlapping the center line of the list and snaps that element to the exact center position. The snapping is a little abrupt, so I'll have to look into some kind of animation, but otherwise I'm fairly happy with it! I'll probably use something from here for animations: http://blogs.msdn.com/b/delay/archive/2009/08/04/scrolling-so-smooth-like-the-butter-on-a-muffin-how-to-animate-the-horizontal-verticaloffset-properties-of-a-scrollviewer.aspx
Edit: Well that didn't take long. I expanded the ScrollViewerOffsetMediator to include HorizontalOffset and then simply created the animation as suggested in the above post. Works like a charm. Hope this helps someone eventually.
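For reference, the HorizontalOffset half of that mediator looks roughly like this; this is a sketch of the pattern from the linked post, not the exact class:

// Placed next to the ScrollViewer in the visual tree. ScrollToHorizontalOffset isn't animatable,
// so instead you animate this dependency property and it forwards the value to the ScrollViewer.
public class ScrollViewerOffsetMediator : FrameworkElement
{
    public ScrollViewer ScrollViewer { get; set; }

    public static readonly DependencyProperty HorizontalOffsetProperty =
        DependencyProperty.Register(
            "HorizontalOffset",
            typeof(double),
            typeof(ScrollViewerOffsetMediator),
            new PropertyMetadata(0.0, OnHorizontalOffsetChanged));

    public double HorizontalOffset
    {
        get { return (double)GetValue(HorizontalOffsetProperty); }
        set { SetValue(HorizontalOffsetProperty, value); }
    }

    private static void OnHorizontalOffsetChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        var mediator = (ScrollViewerOffsetMediator)d;
        if (mediator.ScrollViewer != null)
        {
            mediator.ScrollViewer.ScrollToHorizontalOffset((double)e.NewValue);
        }
    }
}

The snap then becomes a DoubleAnimation from the current HorizontalOffset to desiredOffset instead of a direct call to ScrollToHorizontalOffset.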
Edit2: Here's the full code for SnapList:
SnapList.xaml
SnapList.xaml.cs
Note that I got pretty lazy as this project went on and hard-coded some of it. Some discretion will be needed to determine what you do and don't want from this code. Still, I think it should work pretty well as a starting point for anyone who wants this functionality.
The code has also changed from what I pasted above; I found that using Window.GetWindow gave bad results when the list was housed in a control that could move, so I made it possible to assign a control that the movement is measured relative to (I recommend the control just above your list in the hierarchy). A few other things changed as well; I've added a lot of customization options, including the ability to define a custom focal point for the list.

3D Hit Testing in WPF

I'm writing a WPF application that displays terrain in 3D.
When I perform hit testing, the wrong 3D point is returned (not the point I clicked on).
I tried highlighting the triangle that was hit (by creating a new mesh, taking the coordinates from the RayMeshGeometry3DHitTestResult object). I see that the wrong triangle gets hit (a triangle is highlighted, but it is not under the cursor).
I'm using a perspective camera with field of view of 60, and the near and far planes are of 3 and 35000 respectively.
Any idea why it might happen and what I can do to solve it?
Let me know if you need any more data.
Edit: This is the code I use to perform the hit testing:
private void m_viewport3d_MouseDown(object sender, MouseButtonEventArgs e)
{
    Point mousePos = e.GetPosition(m_viewport3d);
    PointHitTestParameters hitParams = new PointHitTestParameters(mousePos);
    HitTestResult result = VisualTreeHelper.HitTest(m_viewport3d, mousePos);
    RayMeshGeometry3DHitTestResult rayMeshResult = result as RayMeshGeometry3DHitTestResult;
    if (rayMeshResult != null)
    {
        MeshGeometry3D mesh = new MeshGeometry3D();
        mesh.Positions.Add(rayMeshResult.MeshHit.Positions[rayMeshResult.VertexIndex1]);
        mesh.Positions.Add(rayMeshResult.MeshHit.Positions[rayMeshResult.VertexIndex2]);
        mesh.Positions.Add(rayMeshResult.MeshHit.Positions[rayMeshResult.VertexIndex3]);
        mesh.TriangleIndices.Add(0);
        mesh.TriangleIndices.Add(1);
        mesh.TriangleIndices.Add(2);
        GeometryModel3D marker = new GeometryModel3D(mesh, new DiffuseMaterial(Brushes.Blue));
        //...add marker to the scene...
    }
}
Something that caught me out was that the points were in model coordinates; I had to transform them to world coordinates. Here is my code that does the hit test (it returns all hits under the cursor, not just the first):
// This will cast a ray from the point (on _viewport) along the direction that the camera is looking, and returns hits
private List<RayMeshGeometry3DHitTestResult> CastRay(Point clickPoint, IEnumerable<Visual3D> ignoreVisuals)
{
    List<RayMeshGeometry3DHitTestResult> retVal = new List<RayMeshGeometry3DHitTestResult>();

    // This gets called every time there is a hit
    HitTestResultCallback resultCallback = delegate(HitTestResult result)
    {
        if (result is RayMeshGeometry3DHitTestResult) // It could also be a RayHitTestResult, which isn't as exact as RayMeshGeometry3DHitTestResult
        {
            RayMeshGeometry3DHitTestResult resultCast = (RayMeshGeometry3DHitTestResult)result;
            if (ignoreVisuals == null || !ignoreVisuals.Any(o => o == resultCast.VisualHit))
            {
                retVal.Add(resultCast);
            }
        }
        return HitTestResultBehavior.Continue;
    };

    // Get hits against existing models
    VisualTreeHelper.HitTest(grdViewPort, null, resultCallback, new PointHitTestParameters(clickPoint));

    // Exit Function
    return retVal;
}
And some logic that consumes a hit:
if (hit.VisualHit.Transform != null)
{
    return hit.VisualHit.Transform.Transform(hit.PointHit);
}
else
{
    return hit.PointHit;
}
You need to provide the ray to hit test along in order for this to work in 3D. Use the overload of VisualTreeHelper.HitTest that takes a Visual3D and a RayHitTestParameters: http://msdn.microsoft.com/en-us/library/ms608751.aspx
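A minimal sketch of that overload, assuming you already have the ray's origin and direction in world space (rayOrigin, rayDirection, and visual3D are placeholders for your own values):

HitTestResultCallback callback = delegate(HitTestResult result)
{
    RayMeshGeometry3DHitTestResult meshHit = result as RayMeshGeometry3DHitTestResult;
    if (meshHit != null)
    {
        // meshHit.PointHit is in the mesh's model space; apply VisualHit.Transform to get world space
    }
    return HitTestResultBehavior.Continue;
};

// Hit test along an explicit ray instead of a 2D point on the viewport
VisualTreeHelper.HitTest(
    visual3D,  // the Visual3D (e.g. a ModelVisual3D) to test against
    null,      // no filter callback
    callback,
    new RayHitTestParameters(rayOrigin, rayDirection));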
It turns out it was a normalization issue: I shouldn't have normalized the camera's look and up vectors. At the scales I'm using, the resulting distortion is too big for the hit test to work correctly.
