I have lots of polygons drawn, some of which overlap. I want to be able to detect a tap inside a polygon and display a dialog saying it was clicked inside that polygon (the tapped polygon should be identified). This kind of behaviour can be seen with ItemizedIconOverlay, where we can detect which OverlayItem was tapped. How can I implement this for polygons?
The way I draw and add Polygon is:
MyPolygon myPolygon = new MyPolygon(this);
myPolygon.setPoints(Polygon.pointsAsCircle(new GeoPoint(38.948714, -76.831918), 20000.0));
map.getOverlays().add(myPolygon);

MyPolygon myNewPolygon = new MyPolygon(this);
myNewPolygon.setPoints(Polygon.pointsAsCircle(new GeoPoint(38.851777, -77.037878), 20000.0));
map.getOverlays().add(myNewPolygon);
There are various answers, depending on what you want to achieve exactly.
Simple case: if you just want to have a bubble opening, then just set an InfoWindow to your Polygon, as described in the tutorial.
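For that simple case, a minimal sketch could look like this (it assumes osmdroid's BasicInfoWindow and its bundled bonuspack_bubble layout; the exact resource name may vary between versions):

Polygon polygon = new Polygon(this);
polygon.setPoints(Polygon.pointsAsCircle(new GeoPoint(38.948714, -76.831918), 20000.0));
polygon.setTitle("Tapped inside this polygon"); // text shown in the bubble
polygon.setInfoWindow(new BasicInfoWindow(R.layout.bonuspack_bubble, map)); // default bubble
map.getOverlays().add(polygon);

With an InfoWindow attached, the default onSingleTapConfirmed handling opens the bubble for you.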
If you want anything else: for Polygons, there is no Listener available, as there are for Markers and Polylines (an enhancement to request, maybe?).
So what you can do is sub-class Polygon, as you did with your MyPolygon, then override the onTap method (or, more likely, onSingleTapConfirmed) and implement the behaviour you need.
Looking at the Polygon.onSingleTapConfirmed source may help (for the hit test, for instance).
This is how I'm using it:
class CustomTapPolygon extends Polygon {

    private CustomObject tag; // arbitrary payload identifying this polygon

    public CustomObject getTag() {
        return tag;
    }

    public void setTag(CustomObject tag) {
        this.tag = tag;
    }

    @Override
    public boolean onSingleTapUp(MotionEvent e, MapView mapView) {
        // contains() performs the point-in-polygon hit test
        if (e.getAction() == MotionEvent.ACTION_UP && contains(e)) {
            // YOUR CODE HERE
            return true; // consume the tap so overlays below are not notified
        }
        return super.onSingleTapUp(e, mapView);
    }
}
Have you tried OSMBonusPack yet? I know for certain that there is an onClick listener for polygons in it.
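For reference, a minimal sketch of that listener (the Polygon.OnClickListener API shown here is from more recent osmdroid/OSMBonusPack versions, so verify the exact signature against the version you use):

myPolygon.setOnClickListener(new Polygon.OnClickListener() {
    @Override
    public boolean onClick(Polygon polygon, MapView mapView, GeoPoint eventPos) {
        // the tapped polygon is handed to you directly; show your dialog here
        return true; // consume the event so overlapping polygons below are not notified
    }
});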
In Qt / PySide2, is there such a thing as a Qt widget that simply passes through to a wrapped widget, without adding any extra layers of layout, etc.?
I'm coming from a web frontend background, so my mental model is of a React container component that adds some behavior but then simply renders a wrapped presentational component.
However, there doesn't seem to be a way of doing this sort of thing in Qt without at least creating a layout in the wrapping widget, even if that layout only contains one widget. I could see that this could lead to multiple layers of redundant layout, which could be inefficient.
I acknowledge that it may be better to not try to replicate React patterns in Qt, so any suggestions of equivalent but more idiomatic patterns would also be welcome.
First I have to ask: what is the point of creating a container widget to just hold one widget, with no extra padding, layouts, or other "overhead"? Why not just show the widget which would be contained?
Second, nothing says you must have a QLayout inside a QWidget. The layout simply moves any contained widgets around using QWidget::setGeometry() (or similar) on the child widget(s). It's trivial to implement a QWidget which sizes a child widget to match its own size, though it's fairly pointless because that's what QLayout is for. But I have included such an example below (C++, sorry).
A top-level QLayout set on a QWidget has default content margins (padding around the contained widget(s)). This can easily be removed with QLayout::setContentsMargins(0, 0, 0, 0) (as mentioned in a previous comment).
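For completeness, a short sketch of that (wrapper and child are placeholder names):

QVBoxLayout *lo = new QVBoxLayout(wrapper);
lo->setContentsMargins(0, 0, 0, 0); // no padding around the wrapped widget
lo->addWidget(child);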
"No-layout" "passthrough" QWidget:
#include <QWidget>
#include <QResizeEvent>

class PassthroughWidget : public QWidget
{
    Q_OBJECT
public:
    PassthroughWidget(QWidget *child, QWidget *parent = nullptr) :
        QWidget(parent),
        m_child(child)
    {
        if (m_child)
            m_child->setParent(this); // assume ownership
    }

protected:
    void resizeEvent(QResizeEvent *e) override
    {
        QWidget::resizeEvent(e);
        if (m_child)
            m_child->setGeometry(contentsRect()); // match child widget to content area
    }

    QWidget *m_child; // Actually I'd make it a QPointer<QWidget> but that's another matter.
};
ADDED: To expand on my comments regarding being a widget vs. having (or managing) widget(s).
I just happen to be working on a utility app which makes use of both paradigms for a couple of parts. I'm not going to include all the code, but hopefully enough to get the point across. See screenshot below for how they're used. (The app is for testing some painting and transform code I'm doing, quite similar to (and started life as) the Transformations Example in Qt docs.)
What the code parts below actually do isn't important, the point is how they're implemented, again specifically meant to illustrate the different approaches to a "controller" for visual elements.
The first example is of something being a widget, that is, inheriting from QWidget (or QFrame in this case) and using other widgets to present a "unified" UI and API. This is an editor for two double values, such as a size's width/height or a coordinate's x/y. The two values can be linked so that changing one also changes the other to match.
class ValuePairEditor : public QFrame
{
    Q_OBJECT
public:
    typedef QPair<qreal, qreal> ValuePair;

    explicit ValuePairEditor(QWidget *p = nullptr) :
        QFrame(p)
    {
        setFrameStyle(QFrame::NoFrame | QFrame::Plain);

        QHBoxLayout *lo = new QHBoxLayout(this);
        lo->setContentsMargins(0, 0, 0, 0);
        lo->setSpacing(2);

        valueSb[0] = new QDoubleSpinBox(this);
        ...
        connect(valueSb[0], QOverload<double>::of(&QDoubleSpinBox::valueChanged),
                this, &ValuePairEditor::onValueChanged);
        // ... also set up the 2nd spin box for valueSb[1]

        linkBtn = new QToolButton(this);
        linkBtn->setCheckable(true);
        ...

        lo->addWidget(valueSb[0], 1);
        lo->addWidget(linkBtn);
        lo->addWidget(valueSb[1], 1);
    }

    inline ValuePair value() const
    { return { valueSb[0]->value(), valueSb[1]->value() }; }

public slots:
    inline void setValue(qreal value1, qreal value2) const
    {
        for (int i = 0; i < 2; ++i) {
            QSignalBlocker blocker(valueSb[i]);
            valueSb[i]->setValue(!i ? value1 : value2);
        }
        emit valueChanged(valueSb[0]->value(), valueSb[1]->value());
    }

    inline void setValue(const ValuePair &value) const
    { setValue(value.first, value.second); }

signals:
    void valueChanged(qreal value1, qreal value2) const;

private slots:
    void onValueChanged(double val) const {
        ...
        emit valueChanged(valueSb[0]->value(), valueSb[1]->value());
    }

private:
    QDoubleSpinBox *valueSb[2];
    QToolButton *linkBtn;
};
Now for the other example, using a "controller" QObject which manages a set of widgets, but doesn't itself display anything. The widgets are available to the managing application to place as needed, while the controller provides a unified API for interacting with the widgets & data. Controllers can be created or destroyed as needed.
This example manages a QWidget which is a "render area" for doing some custom painting, and a "settings" QWidget which changes properties in the render area. The settings widget has further sub-widgets, but these are not directly exposed to the controlling application. In fact it also makes use of ValuePairEditor from above.
class RenderSet : public QObject
{
    Q_OBJECT
public:
    RenderSet(QObject *p = nullptr) :
        QObject(p),
        area(new RenderArea()),
        options(new QWidget())
    {
        // "private" widgets
        typeCb = new QComboBox(options);
        txParamEdit = new ValuePairEditor(options);
        ...
        QHBoxLayout *ctrLo = new QHBoxLayout(options);
        ctrLo->setContentsMargins(0, 0, 0, 0);
        ctrLo->addWidget(typeCb, 2);
        ctrLo->addWidget(txParamEdit, 1);
        ctrLo->addLayout(btnLo);

        connect(txParamEdit, SIGNAL(valueChanged(qreal,qreal)), this, SIGNAL(txChanged()));
    }

    ~RenderSet() override
    {
        if (options)
            options->deleteLater();
        if (area)
            area->deleteLater();
    }

    inline RenderArea *renderArea() const { return area.data(); }
    inline QWidget *optionsWidget() const { return options.data(); }

    inline Operation txOperation() const
    { return Operation({txType(), txParams()}); }

    inline TxType txType() const
    { return (typeCb ? TxType(typeCb->currentData().toInt()) : NoTransform); }

    inline QPointF txParams() const
    { return txParamEdit ? txParamEdit->valueAsPoint() : QPointF(); }

public slots:
    void updateRender(const QSize &bounds, const QPainterPath &path) const {
        if (area)
            ...
    }

    void updateOperations(QList<Operation> &operations) const {
        operations.append(txOperation());
        if (area)
            ...
    }

signals:
    void txChanged() const;

private:
    QPointer<RenderArea> area;
    QPointer<QWidget> options;
    QPointer<QComboBox> typeCb;
    QPointer<ValuePairEditor> txParamEdit;
};
There are two ways of managing widgets in Qt: with layouts, or through the parent-child relationship. Have you tried the 'parent' approach?
The docs say:
...The base class of everything that appears on the screen, extends the parent-child relationship. A child normally also becomes a child widget, i.e. it is displayed in its parent's coordinate system and is graphically clipped by its parent's boundaries.
So, basically, if you use setParent on the contained widgets, no layouts need to be created.
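A minimal sketch of that idea (widget names are placeholders; note that without a layout you must size the child yourself, e.g. via a resizeEvent override as in the PassthroughWidget above):

#include <QApplication>
#include <QLabel>
#include <QWidget>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QWidget container;
    QLabel child("wrapped", &container); // parented directly, no layout involved
    container.resize(200, 100);
    child.setGeometry(container.contentsRect()); // manual geometry management
    container.show();

    return app.exec();
}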
I have used QFrame in a similar manner, with all properties set to 0 and Shape set to QFrame::NoFrame. It is very useful as a dumb container for the actual widgets that end up doing the heavy lifting, QStackedWidget is one user of this. Quote from the docs:
The QFrame class can also be used directly for creating simple placeholder frames without any contents.
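A minimal sketch of such a placeholder, with the settings described above:

QFrame *holder = new QFrame;
holder->setFrameStyle(QFrame::NoFrame | QFrame::Plain);
holder->setLineWidth(0);                // no frame line
holder->setContentsMargins(0, 0, 0, 0); // no padding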
However, I'm not sure it is 100% what you are looking for, as I'm not familiar with the React method you are outlining. Also not sure how far you can reasonably get without using layouts.
I am new to WPF and I need to open a popup window on a grid row click; the window contains lots of data and controls. I am confused about the correct approach. I am using the MVVM pattern. Should I make a Window, a UserControl, or something else? And how do I open that popup inside a function? Please help with an example.
If I need to display a new Window in my MVVM-Application I use the following approach:
At first I have an interface with a method to show the new dialog:
internal interface IDialogManager
{
    void DisplayData(object data);
}
And an implementation like:
internal class DialogManager : IDialogManager
{
    public void DisplayData(object data)
    {
        LotOfDataViewModel lotOfDataViewModel = new LotOfDataViewModel(data);
        LotOfDataView lotOfDataView = new LotOfDataView
        {
            DataContext = lotOfDataViewModel
        };
        lotOfDataView.ShowDialog();
    }
}
LotOfDataViewModel and LotOfDataView are the new dialog in which you want to show your data.
In your actual ViewModel you introduce a new property like:
private IDialogManager dialogManager;

private IDialogManager DialogManager
{
    get { return dialogManager ?? (dialogManager = new DialogManager()); }
}
And then you can show your large data with:
DialogManager.DisplayData(myData);
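To tie this to the grid row click, a hedged sketch using a command (RelayCommand stands in for whatever ICommand implementation your project already uses; it is not part of WPF itself):

public ICommand ShowRowDetailsCommand
{
    // the command parameter is the clicked row's data object
    get { return new RelayCommand(row => DialogManager.DisplayData(row)); }
}

You can then bind the row's click or selection event to ShowRowDetailsCommand, for example via an InputBinding or an interaction trigger.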
I have just started using the Mixed Reality Toolkit (formerly HoloToolkit) and I am trying to use a slider.
So I made a scene with a 3DTextPrefab, a Button and a Slider.
I wrote a script and attached it to the 3DTextPrefab. This is the script:
using System.Collections;
using UnityEngine;

public class Clicker : MonoBehaviour {

    public GameObject ObjectToShow;
    public float waitTime;

    private void Awake()
    {
        ObjectToShow.SetActive(false);
    }

    public void Click()
    {
        // the generic form is safer than GetComponent("TextMesh") as TextMesh
        TextMesh tm = ObjectToShow.GetComponent<TextMesh>();
        tm.text = "Hello with parameter" + waitTime;
        ObjectToShow.SetActive(true);
        StartCoroutine(HideAfterTimeout());
    }

    public void setWaitTime(float t)
    {
        waitTime = t;
    }

    IEnumerator HideAfterTimeout()
    {
        // yield return new WaitForSeconds(0.1f);
        yield return new WaitForSeconds(waitTime);
        ObjectToShow.SetActive(false);
    }
}
In the button, there is the "Interactive" script (by default), so I added the 3DTextPrefab as an object to the OnSelectedEvents list and selected its Click function.
Doing this, every time I click on the button, it calls the Click function of the prefab's script. So far so good.
I tried to do something similar with the slider, so I added the prefab as an object to the Slider Gesture Control script's OnUpdateEvent and selected its setWaitTime function.
My problem is that the setWaitTime function has a parameter, and I can see it in the inspector. This parameter has to be the actual value of the slider.
How do you get the actual value of the slider to put it there?
The slider should have a value property that you can read from. Add the slider to your script, assign your slider to the slot in the inspector and then you can access the value property from inside of your script.
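A hedged sketch of that wiring (SliderGestureControl and its SliderValue member reflect the toolkit's example slider at the time of writing; check the exact type and property names in your MRTK version):

using HoloToolkit.Examples.InteractiveElements;
using UnityEngine;

public class SliderReader : MonoBehaviour
{
    public SliderGestureControl slider; // drag your slider here in the inspector
    public Clicker clicker;             // the script whose waitTime should follow the slider

    // Hook this parameterless method up to the slider's OnUpdateEvent
    // instead of setWaitTime(float), then read the value directly:
    public void OnSliderUpdated()
    {
        clicker.setWaitTime(slider.SliderValue);
    }
}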
I have made a quiz game and I want to show an image right after a user answers correctly. The problem is I have a bunch of questions and images, and it becomes tedious to set the visibility for each and every image. How do I optimize this procedure? I was thinking of placing the images in an array, but I don't really know if that's possible, or how to make the image show up in the place that I want.
As I understand it, the problem is that you have N images and you iterate each time over the whole set to set the visibility. In your case I would (as you suggested) create an array of those images and a few helper functions. A basic example:
private var imageVector: Vector.<DisplayObject>; // this vector holds all your images
private var currentImage: DisplayObject;         // the image that is shown currently

private function createAndFillImages():void {
    imageVector = new Vector.<DisplayObject>();
    imageVector.push(image1);
    imageVector.push(image2);
    // ... etc. it depends on how your images are presented.
}

private function onAnswerGiven():void {
    const img: DisplayObject = ... // pick the right image here
    showImage(img);
}

private function showImage(img: DisplayObject):void {
    if (currentImage != null) currentImage.visible = false;
    currentImage = img;
    // ... do the positioning here
    currentImage.visible = true;
}
I'm creating a WPF MVVM app using Caliburn Micro. I have a set of buttons in a menu (Ribbon) that live in the view for my shell view model, which is a ScreenConductor. Based on the currently active Screen view model, I would like to have the ribbon buttons be disabled/enabled if they are available for use with the active Screen, and call actions or commands on the active Screen.
This seems like a common scenario. Is there a pattern for creating this behavior?
Why don't you do the reverse? Instead of checking which commands are supported by the currently active screen, let the active screen populate the menu or ribbon tab with all the controls it supports. (I would let it inject its own user control, which might just be a complete menu or a ribbon tab all by itself.) This also enhances the user experience, since the user only sees the controls that work with the current active screen.
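A hedged sketch of that idea (the interface and property names are illustrative, not part of Caliburn.Micro):

// Each screen that contributes ribbon content implements this.
public interface IProvideRibbonContent
{
    object RibbonContent { get; } // a view model (or user control) for the screen's own ribbon tab
}

The shell's ribbon region can then bind a ContentControl to ActiveItem.RibbonContent, so each screen brings exactly the commands it supports.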
EDIT: Just looking at your question again and I'm thinking that this is much simpler than it looks
The only issue I can see you having is that a lack of a handler (and guard) method on a child VM will mean that buttons that don't have an implementation on the currently active VM will still be enabled.
The default strategy for CM is to try and find a matching method name (after parsing the action text) and if one is not found, to leave the button alone. If you were to customise that behaviour so that the default is for buttons to be disabled, you could easily get it working by just implementing the command buttons in your shell, making sure to set the command target to the active item:
In the shell define your buttons, making sure they have a target that points to the active child VM
<Button cal:Message.Attach="Command1" cal:Action.TargetWithoutContext="{Binding ActiveItem}" />
Then just implement the method in your child VM as per usual
public void Command1() { }
and optionally a CanXX guard
public bool CanCommand1
{
    get
    {
        if (someCondition) return false;
        return true;
    }
}
Assuming you don't get much more complex than this, it should work for you
I'm going to have a quick look at the CM source and see if I can come up with something that works for this
EDIT:
Ok you can customise the ActionMessage.ApplyAvailabilityEffect func to get the effect you want - in your bootstrapper.Configure() method (or somewhere at startup) use:
ActionMessage.ApplyAvailabilityEffect = context =>
{
    var source = context.Source;

    if (ConventionManager.HasBinding(source, UIElement.IsEnabledProperty))
    {
        return source.IsEnabled;
    }

    if (context.CanExecute != null)
    {
        source.IsEnabled = context.CanExecute();
    }
    // Added these 3 lines to get the effect you want
    else if (context.Target == null)
    {
        source.IsEnabled = false;
    }
    // EDIT: Bugfix - need this to ensure the button is activated if it has a target but no guard
    else
    {
        source.IsEnabled = true;
    }

    return source.IsEnabled;
};
This seems to work for me - there is no target for methods which couldn't be bound to a command, so in that case I just set IsEnabled to false. This activates buttons only when a method with a matching signature is found on the active child VM - obviously give it a good test before you use it :)
Create methods and accompanying boolean properties for each of your commands on your shell view model. (See code below for an example.) Caliburn.Micro's conventions will wire them up to the buttons for you automatically. Then simply raise property changed events for the boolean properties when you change views to have them be re-evaluated.
For example, let's say you have a Save button. The name of that button in your xaml would be Save, and in your view model, you would have a Save method along with a CanSave boolean property. See below:
public void Save()
{
    var viewModelWithSave = ActiveItem as ISave;
    if (viewModelWithSave != null) viewModelWithSave.Save();
}

public bool CanSave { get { return ActiveItem is ISave; } }
Then, in your conductor, whenever you change your active screen, you would call NotifyOfPropertyChange(() => CanSave);. Doing this will cause your button to be disabled or enabled depending upon if the active screen is capable of dealing with that command. In this example, if the active screen doesn't implement ISave, then the Save button would be disabled.
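A hedged sketch of where that call could live, assuming the shell derives from Caliburn.Micro's Conductor<IScreen>:

protected override void OnActivationProcessed(IScreen item, bool success)
{
    base.OnActivationProcessed(item, success);
    // Re-evaluate every guard property that depends on the active screen.
    NotifyOfPropertyChange(() => CanSave);
}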
I would use the Caliburn.Micro event aggregation in this scenario, as follows:
Create a class named ScreenCapabilities with a bunch of Boolean attributes (e.g. CanSave, CanLoad, etc.)
Create a message named ScreenActivatedMessage with a property of type ScreenCapabilities
Create a view model for your ribbon that subscribes to (handles) the ScreenActivatedMessage
In the ribbon view model's Handle method, set the local CanXXX properties based on the supplied ScreenCapabilities.
It would look something like this (code typed by hand, not tested):
public class ScreenCapabilities
{
    public bool CanSave { get; set; }
    // ...
}

public class ScreenActivatedMessage
{
    public ScreenCapabilities ScreenCapabilities { get; set; }
    // ...
}

public class RibbonViewModel : PropertyChangedBase, IHandle<ScreenActivatedMessage>
{
    private bool _canSave;
    public bool CanSave
    {
        get { return _canSave; }
        set { _canSave = value; NotifyOfPropertyChange(() => CanSave); }
    }
    // ...

    public void Handle(ScreenActivatedMessage message)
    {
        CanSave = message.ScreenCapabilities.CanSave;
        // ...
    }
}
Then, somewhere appropriate, when the screen changes, publish the message. See the Caliburn.Micro wiki for more info.
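A hedged sketch of the publishing side (it assumes an IEventAggregator injected into the conductor as _events, and reuses the ISave idea from the earlier answer; depending on your Caliburn.Micro version the publish method is Publish or PublishOnUIThread):

protected override void OnActivationProcessed(IScreen item, bool success)
{
    base.OnActivationProcessed(item, success);
    _events.PublishOnUIThread(new ScreenActivatedMessage
    {
        ScreenCapabilities = new ScreenCapabilities { CanSave = item is ISave }
    });
}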
Define a property (let's say ActiveScreen) for the active screen in the shell view model.
And let's assume you have properties for each button, such as DeleteButton and AddButton.
Screen here is the view model type for the screens.
private Screen activeScreen;

public Screen ActiveScreen
{
    get
    {
        return activeScreen;
    }
    set
    {
        activeScreen = value;
        if (activeScreen.Name.Equals("Screen1"))
        {
            this.AddButton.IsEnabled = true;
            this.DeleteButton.IsEnabled = false;
        }
        else if (activeScreen.Name.Equals("Screen2"))
        {
            this.AddButton.IsEnabled = true;
            this.DeleteButton.IsEnabled = true;
        }
        NotifyPropertyChanged("ActiveScreen");
    }
}