We have a button that fires a command which goes to the server to do some validation. This is done asynchronously and if the validation is okay (i.e. the user has the correct permission), I want to show the SaveFileDialog.
However, this is not a user initiated action which means calling the SaveFileDialog.ShowDialog() method raises a "Dialog must be user initiated" exception.
Is there any way to make this work the way I want?
The other option is to launch the SaveFileDialog first and make the request after the file has been selected. Not ideal, but it works.
JD.
There is no workaround; after all, it would be a pointless restriction if there were one.
I think your alternative design choice makes sense. You might consider using a busy indicator with the message "Validating..." or some such whilst the async validation occurs, then do whatever you would have done once the async operation completes.
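As a rough sketch of that flow (the ValidatePermissionAsync service and the Toolkit-style BusyIndicator below are placeholders, not from your code):

private void SaveButton_Click(object sender, RoutedEventArgs e)
{
    // Show the dialog first, while we are still inside the user-initiated event.
    var dialog = new SaveFileDialog();
    if (dialog.ShowDialog() != true)
        return; // user cancelled

    busyIndicator.IsBusy = true; // "Validating..."

    // Kick off the async permission check; only write the file if it succeeds.
    validationService.ValidatePermissionAsync(hasPermission =>
        Dispatcher.BeginInvoke(() =>
        {
            busyIndicator.IsBusy = false;
            if (!hasPermission)
            {
                MessageBox.Show("You don't have permission to save.");
                return;
            }
            using (var stream = dialog.OpenFile())
            {
                // write the file contents here
            }
        }));
}

The key point is that ShowDialog() runs inside the original click handler, so it still counts as user initiated.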
As far as I understand, a PUT request is not supposed to return any content.
Consider a client that wants to run this pseudocode:
x = resource.get({id: 1});
x.field1 = "some update";
resource.put(x);
x.field2 = "another update";
resource.put(x);
(Imagine I have an input control and a "Save" button: this lets me change part of object "x" shown in the input control, PUT the change to the server on button click, then continue editing and maybe "save" another change to "x".)
Following different proposals on how to implement optimistic locking in REST APIs, the above code MUST fail, because the version marker (however implemented) for "x" as returned by get() will become stale after put().
Then how do you people usually make it work?
Or do you just re-GET objects after every PUT?
You can use "conditional" actions with HTTP, for example the If-Match header described here:
https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.24
In short: the server delivers an ETag with the GET response, and you supply this ETag back to the server in the If-Match header of your PUT. The server will respond with a failure if the resource you are trying to PUT now has a different ETag. You can also use simple timestamps with the If-Unmodified-Since header.
Of course you will have to make your server code understand conditional requests.
For multiple steps, the PUT can indeed return the new representation, it can therefore include the new ETag or timestamp too. Even if the server does not return the new representation for a PUT, you could still use the timestamp from the response with an If-Unmodified-Since conditional PUT.
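As a rough illustration of that flow, a client-side sketch with C#'s HttpClient (the URL, payload handling and retry policy are placeholders, not part of the question):

using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

static async Task UpdateWithETagAsync(HttpClient client)
{
    // GET the resource and remember the ETag the server sent with it.
    var getResponse = await client.GetAsync("https://example.com/api/things/1");
    var etag = getResponse.Headers.ETag;
    var json = await getResponse.Content.ReadAsStringAsync();

    // ... apply the user's edit to the representation ...

    // PUT it back, but only if nobody changed it in the meantime.
    var put = new HttpRequestMessage(HttpMethod.Put, "https://example.com/api/things/1")
    {
        Content = new StringContent(json, Encoding.UTF8, "application/json")
    };
    put.Headers.IfMatch.Add(etag);

    var putResponse = await client.SendAsync(put);
    if (putResponse.StatusCode == HttpStatusCode.PreconditionFailed)
    {
        // 412: someone else modified the resource; re-GET and merge/retry.
        return;
    }

    // If the server returns a fresh ETag with the PUT response, keep it for the next update.
    etag = putResponse.Headers.ETag ?? etag;
}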
Here is probably what I was looking for: https://www.rfc-editor.org/rfc/rfc7231#section-4.3.4
They implicitly say that we CAN return an ETag from PUT, though only in the case where the server applied the changes exactly as they were given, without any corrections.
However, this raises yet another question. In a real-world app, the PUT caller will run asynchronously in a JS GUI, as in my example in the question. So the Save button might be pressed several times, with or without entering any changes. If we don't use optimistic locking, then the supposed idempotency of PUT makes it safe to send another PUT with each button click, as long as the last one wins (though if there were changes in between, that isn't actually guaranteed, so the question remains).
But with optimistic locking, when the first PUT succeeds, it returns an updated ETag, right? And if there is another PUT request still in flight with the outdated tag, that latter request will get 412 and the user will see a message like "someone else changed the resource" - when actually it was our own earlier change.
What do you usually do to prevent that? Disable the Save button until its request has fully completed? What if it times out? Or do you think it's acceptable to show a concurrent-change error message on a timeout, because stability is already compromised anyway?
I want to log if a customer tries to force close the application. I'm aware of having no chance to catch a process kill. But it should be possible through the main form closing event to get informed about the 'CloseReason.TaskManagerClosing' reason.
But in all the tests I did under Windows 8.1, I always got CloseReason.UserClosing. And in this case (compared to a normal CloseReason.UserClosing) I have only about 0.2 s to run user code before my program is killed!
Is this a new behavior in Windows 8.1?
Yes, I see this. And yes, this is a Windows change: previous versions of Task Manager sent the window the WM_CLOSE notification directly. I now see it issue the exact same command that is issued when you close the window with the Close button (WM_SYSCOMMAND, SC_CLOSE), press Alt+F4 or use the system menu. So Winforms can no longer tell the difference between Task Manager and the user closing the window, and you do get CloseReason.UserClosing.
What happens next is expected, if you don't respond to the close command quickly enough then Task Manager summarily assassinates your program with TerminateProcess().
Do keep in mind that trying to save data when the user aborts your program through Task Manager is bad practice. Your user will normally do this when your program is malfunctioning; you can't really trust the data anymore and you risk writing garbage. This is now compounded by your saving code being aborted, with high odds of a partially written file or database data that isn't usable anymore.
There is no simple workaround for this; the odds that Windows is going to be patched to restore the old behavior are very close to zero. It is very important that you save your data in a transactional way so you don't destroy valuable data if the saving code is aborted. Use File.Replace() for file data, and use a database transaction for database writes.
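A minimal sketch of a transactional file save along those lines (the paths and the plain-text payload are just assumptions for the example):

using System.IO;

static void SaveAtomically(string path, string contents)
{
    string temp = path + ".tmp";
    string backup = path + ".bak";

    // Write the new data to a temporary file first, so the real file is never half-written.
    File.WriteAllText(temp, contents);

    if (File.Exists(path))
        File.Replace(temp, path, backup);   // atomic swap, old file kept as backup
    else
        File.Move(temp, path);
}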
An imperfect way to detect this condition is by using the Form.Deactivate and Activate events. If you saw the Deactivate event and then the FormClosing event fires, there are reasonable odds that another program is terminating yours.
But the normal way you deal with this is the common one, if the user ends the program without saving data then you display a dialog that asks whether to save data. Task Manager ensures that this doesn't go further than that.
Another solution for determining when the Task Manager is closing the program is to check whether the main form, or any of its controls, has focus. When you close via the Task Manager, the application is not focused, whereas if you close via the close button or Alt+F4, the application does have focus. I used a simple if check:
private void MyForm_Closing(object sender, FormClosingEventArgs e)
{
    // Task Manager closes the form without it having focus; a user close does.
    if (this.ContainsFocus)
    {
        // Put user close code here
    }
}
Due to a major time constraint, I need to stick with invoking rasphone.exe from my C program rather than the better approach of using the RAS APIs. In my code, rasphone pops up a dialler window to the user; if the user clicks the Cancel button, I have to stop blocking another set of code.
Ultimately, I need to handle what rasphone returns so I can control my code flow based on success, failure or cancel. How do I do this? Also, is there any other possibility for silent dialling without any popup? I expect not, as has been discussed.
I'm looking for a good way to implement a truly modal dialog in Silverlight 5. Every example that I find that claims to create a modal dialog really isn't modal, in that the calling code doesn't wait until the dialog is closed.
I realize this is a challenge because we can't actually block the UI thread because it must be running in order for the dialog (ChildWindow) to function correctly. But, with the addition of the TPL in SL5 and the higher level of adoption Silverlight has seen over the past few years, I'm hoping someone has found a way around this.
A good representative scenario I am trying to solve is an action (say clicking a button or menu item) that displays a login dialog and must wait for the login to complete before proceeding.
Our specific business case (whether logical or not) is that the application does not require user authentication; however, certain functions require "Manager" access. When the function is accessed (via button click or menu item selected, etc), and the current user is not a manager, we display the login dialog. When the dialog closes, we check the user's authorization again. If they are not authorized, we display a nice message and reject the action. If they are authorized, we continue to perform the action which typically involves changing the current view to something new where the user can do whatever it is they requested.
For the sake of closure...
What I ended up with is a new TPL-enabled DialogService with methods like:
public Task<Boolean?> ShowDialog<T>()
where T : IModalWindow
The method creates an instance of the dialog (ChildWindow), attaches a handler to the Closed event and calls the Show method. Internally, it uses a TaskCompletionSource<Boolean?> to return a Task to the caller. In the Closed event handler, the DialogResult is passed to the TrySetResult() method of the TaskCompletionSource.
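In sketch form it looks roughly like this (the IModalWindow shape and the new() construction are simplifications for the example, not the exact original code):

using System;
using System.Threading.Tasks;

public interface IModalWindow
{
    bool? DialogResult { get; }
    event EventHandler Closed;
    void Show();
}

public class DialogService
{
    public static Task<bool?> ShowDialog<T>() where T : IModalWindow, new()
    {
        var dialog = new T();
        var tcs = new TaskCompletionSource<bool?>();

        // When the ChildWindow closes, complete the task with its DialogResult.
        dialog.Closed += (s, e) => tcs.TrySetResult(dialog.DialogResult);

        dialog.Show();
        return tcs.Task;
    }
}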
This lets me display the dialog in a typical async way:
DialogService.ShowDialog<LoginDialog>().ContinueWith(task =>
{
    var result = task.Result;
    if ((result == true) && UserHasPermission())
    {
        // Continue with operation
    }
    else
    {
        // Display unauthorized message
    }
}, TaskScheduler.FromCurrentSynchronizationContext());
Or, I could block using the Task.Wait() method - although this is problematic because, as I mentioned in the original post, it blocks the UI thread, so even the dialog is frozen.
Still not a true modal dialog, but a little bit closer to the behavior I am looking for. Any improvements or suggestions are still appreciated.
I am designing an application which has the potential to hang while waiting for data from servers (either database or internet). The problem is that I don't know how best to cope with the multitude of different places where things may take time.
I am happy to display a 'loading' dialog to the user while the access is happening, but ideally I don't want this to flick up and disappear for short-running operations.
Microsoft Word appears to handle this quite nicely: if you click a button and the operation takes a long time, after a few seconds you get a 'Working...' dialog. The operation is still synchronous and you can't interrupt it. However, if the same operation happens quickly you obviously don't get the dialog.
I am happy(ish) to devise some generic background-worker thread handler, and 99% of my data processing is already done in static atomic methods, but I would like to follow best practice on this one if I can.
If anyone has patterns, code or suggestions, I welcome them all.
Cheers
I would definitely think asynchronously, using a pattern with two events. The first "event" is that you actually got your data from wherever/whenever you had to wait for it. The second event is a delay timer. If you get your data before this timer pops, all is well and good. If not, THEN you pop up your "I'm busy" dialog and allow the user to cancel the request. Usually cancel just means "ignore" once you finally get the response.
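In TPL terms that could look something like this (async/await shown for brevity; the two-second threshold and the busy-dialog callbacks are placeholders):

using System;
using System.Threading.Tasks;

async Task<string> LoadWithDelayedBusyDialogAsync(Task<string> loadTask, Action showBusy, Action hideBusy)
{
    var delay = Task.Delay(TimeSpan.FromSeconds(2));

    // If the data beats the timer, the busy dialog never appears.
    if (await Task.WhenAny(loadTask, delay) == delay)
    {
        showBusy();                  // "I'm busy...", Cancel usually just means "ignore"
        try { await loadTask; }
        finally { hideBusy(); }
    }

    return await loadTask;           // completed by now (or rethrows its exception)
}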
Microsoft Word appears to handle this quite nicely: if you click a button and the operation takes a long time, after a few seconds you get a 'Working...' dialog. The operation is still synchronous and you can't interrupt it. However, if the same operation happens quickly you obviously don't get the dialog.
If this is the behavior you want...
You could handle this, fairly easily, by wrapping a class around BackgroundWorker. Just time the start of the DoWork event, and the time to the first progress report. If a certain amount of time passes, you could show your dialog - otherwise, block (since it's a short process).
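One possible shape of that wrapper, assuming WinForms (the dialog, threshold and delegate are placeholders; this sketch only goes modal if the work outlives the threshold):

using System;
using System.ComponentModel;
using System.Windows.Forms;

public static void RunWithBusyDialog(Action work, int thresholdMs, Form owner)
{
    // Call this from the UI thread so RunWorkerCompleted marshals back here.
    var worker = new BackgroundWorker();
    var busyDialog = new Form { Text = "Working...", ControlBox = false, Width = 240, Height = 80 };
    var timer = new Timer { Interval = thresholdMs };

    worker.DoWork += (s, e) => work();
    worker.RunWorkerCompleted += (s, e) =>
    {
        timer.Stop();
        if (busyDialog.Visible)
            busyDialog.Close();              // dismiss the dialog if it was shown
    };

    // Only show the dialog if the work is still running once the threshold passes.
    timer.Tick += (s, e) =>
    {
        timer.Stop();
        if (worker.IsBusy)
            busyDialog.ShowDialog(owner);    // modal until RunWorkerCompleted closes it
    };

    timer.Start();
    worker.RunWorkerAsync();
}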
That being said, any time you're doing work that can be processed asynchronously, I'd recommend doing it that way. It's much nicer to never block your UI for a noticeable interval, even if it's short. This becomes much simpler in .NET 4 (or 3.5 with the Rx framework) by using the Task Parallel Library.
Ideally you should be running any IO or non-UI processing either in a background thread or asynchronously to avoid locking up the UI.