From what I can find, it seems like IsolatedStorage is supposed to be permanent unless the user deletes it manually. The following thread says so too:
Is Silverlight isolated storage treated as permanent, or as a cache?
However, it seems that if I shut down my application and restart it (I am debugging in debug mode - not sure if that makes a difference), the data I stored earlier is gone.
For example, just as pseudocode:
onClick =
let storage = IsolatedStorageSettings.ApplicationSettings
let x = storage.Item key
storage.Add(key, "Some Value")
On the first click event, "x" is null (or empty) as expected. On the second click, x has "Some Value" - this all works fine as expected. However, when I stop debugging and restart the application, "x" goes back to null or empty on the first click. I tried the same thing using SiteSettings.
So it seems to me IsolatedStorage is not permanent after all? Does it just go with the lifetime of the application?
1- Use the SiteSettings instead of ApplicationSettings
System.IO.IsolatedStorage.IsolatedStorageSettings.SiteSettings("YourKey") = yourValue
2- You need to save the data after you change it:
System.IO.IsolatedStorage.IsolatedStorageSettings.SiteSettings.Save()
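Putting it together, here is a minimal C# sketch of the pattern (the key name "MyKey" is illustrative); the crucial part is calling Save(), otherwise the values only live in memory for the current session:

var settings = System.IO.IsolatedStorage.IsolatedStorageSettings.SiteSettings;

string value;
if (!settings.TryGetValue("MyKey", out value))
{
    // First run: nothing stored yet, so add the value.
    settings["MyKey"] = "Some Value";
}

// Persist the changes to isolated storage on disk so they survive a restart.
settings.Save();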
I have read over here how to move an application to a specific screen.
In my case I have a variation of this: I want to open an application, for example Todoist, on a specific screen. The code below opens Todoist, but on the wrong screen.
How can I solve this?
local screens = hs.screen.allScreens()
hs.application.open("Todoist")
local win = hs.application:findWindow("Todoist")
win.moveToScreen(screens[1])
findWindow() is an instance method, so it cannot be called directly as hs.application:findWindow(). To properly call this method, you must create an instance of the hs.application class and then call findWindow() on that instance.
The following snippet should work, although you may need to adjust the wait time (and the screens index). It is generally recommended to use hs.application.watcher to watch for when an app has been launched, rather than using a timer.
-- `notes:mainWindow()` will return `nil` if called immediately after opening the app,
-- so we wait for a second to allow the window to be launched.
local notes = hs.application.open("Notes")
hs.timer.doAfter(1, function()
    local notesMainWindow = notes:mainWindow()
    local screens = hs.screen.allScreens()
    notesMainWindow:moveToScreen(screens[1])
end)
I've been trying to use the Log class to capture some strange device-specific failures using local storage. When I went into the Log class and traced the code, I noticed what seems to be a bug.
When I call the p(String) method, it calls getWriter() to get the 'output' instance of the Writer. It notices that output is null, so it calls createWriter() to create it. Since I haven't set a file URL, the following code gets executed:
if(getFileURL() == null) {
return new OutputStreamWriter(Storage.getInstance().createOutputStream("CN1Log__$"));
}
On the Simulator, I notice this file is created and contains log info.
So in my app I want to display the logs after an error is detected (for debugging). I call getLogContent() to retrieve the log as a string, but it does some strange things:
if(instance.isFileWriteEnabled()) {
if(instance.getFileURL() == null) {
instance.setFileURL("file:///" + FileSystemStorage.getInstance().getRoots()[0] + "/codenameOne.log");
}
Reader r = new InputStreamReader(FileSystemStorage.getInstance().openInputStream(instance.getFileURL()));
The main problem I see is that it uses a different file URL than the default writer location, and since the creation of the Writer didn't set the file URL, the getLogContent() method will never see the logged data. (The other issue I have is a style one: a method that gets content shouldn't persistently set the location of that content on the instance, but that's another story.)
As a workaround I think I can just call getLogContent() at the beginning of the application, which should set the file URL correctly in a place that it will be retrieved from later. I'll test that next.
In the meantime, is this a bug, or is it functionality I don't understand from my user perspective?
It's more like "unimplemented functionality". This specific API dates back to LWUIT.
The main problem with that method is that we are writing into a log file, and getting the contents of a file that we might currently be in the middle of writing to can be a problem and might actually cause a failure. So this approach was mostly abandoned in favor of the more robust crash protection approach.
As far as I understand, a PUT request is not supposed to return any content.
Consider the client wants to run this pseudo code:
x = resource.get({id: 1});
x.field1 = "some update";
resource.put(x);
x.field2 = "another update";
resource.put(x);
(Imagine I have an input control and a "Save" button: this lets me change part of object "x" shown in the input control, PUT the changes to the server on button click, then continue editing and maybe "save" another change to "x".)
Following different proposals on how to implement optimistic locking in REST APIs, the above code MUST fail, because the version mark (however it is implemented) for "x" as returned by get() becomes stale after put().
Then how do you people usually make it work?
Or do you just re-GET objects after every PUT?
You can use "conditional" actions with HTTP, for example the If-Match header described here:
https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.24
In short: the server delivers an ETag with the GET response, and the client supplies that ETag back to the server in the If-Match header of the PUT. The server will respond with a failure if the resource you are trying to PUT now has a different ETag. You can also use simple timestamps with the If-Unmodified-Since header.
Of course you will have to make your server code understand conditional requests.
For multiple steps, the PUT can indeed return the new representation, it can therefore include the new ETag or timestamp too. Even if the server does not return the new representation for a PUT, you could still use the timestamp from the response with an If-Unmodified-Since conditional PUT.
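For example, here is a rough C# sketch of a conditional PUT with HttpClient; the base address, resource path and JSON handling are made up for illustration:

using System;
using System.Net;
using System.Net.Http;
using System.Text;

var client = new HttpClient { BaseAddress = new Uri("https://example.com/") };

// GET the resource; the server returns an ETag identifying this version.
var getResponse = await client.GetAsync("items/1");
var etag = getResponse.Headers.ETag;
var json = await getResponse.Content.ReadAsStringAsync();

// ... apply the user's edits to the representation here ...

// Conditional PUT: only apply the update if the resource still has that ETag.
var put = new HttpRequestMessage(HttpMethod.Put, "items/1")
{
    Content = new StringContent(json, Encoding.UTF8, "application/json")
};
if (etag != null)
{
    put.Headers.IfMatch.Add(etag);
}

var putResponse = await client.SendAsync(put);
if (putResponse.StatusCode == HttpStatusCode.PreconditionFailed) // 412
{
    // The resource changed since our GET: re-GET it and retry, or report a conflict.
}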
Here is probably what I was looking for: https://www.rfc-editor.org/rfc/rfc7231#section-4.3.4
It implicitly says that we CAN return an ETag from PUT, though only in the case where the server applied the changes exactly as they were given, without any corrections.
However, this raises yet another question. In a real-world app the PUT caller will run asynchronously in a JS GUI, like in the example in my question. So the Save button might be pressed several times, with or without any changes being entered. If we don't use optimistic locking, then the supposed idempotency of PUT makes it safe to send another PUT request with each button click, as long as the last one wins (but actually, if there were changes in between, that isn't guaranteed, so the question remains).
But with optimistic locking, when the first PUT succeeds it returns an updated ETag, right? And if another PUT request is still in flight with the outdated ETag, that request will get a 412 and the user will see a message like "someone else changed the resource" - when actually it was our own earlier change.
What do you usually do to prevent that? Disable the Save button until its request has fully completed? What if it times out? Or do you think it's acceptable to show a concurrent-change error message in the timeout case, because stability is already compromised anyway?
I was reading about the .settings file on MSDN and noticed they give two examples of how to set the value of an item in the settings. My question is: what is the real difference between the two, and when would you use one instead of the other? To me they seem pretty much the same.
To Write and Persist User Settings at Run Time
Access the user setting and assign it a new value, as shown in the following example:
Properties.Settings.Default.myColor = Color.AliceBlue;
If you want to persist changes to user settings between application sessions, call the Save method, as shown in the following code:
Properties.Settings.Default.Save();
The first statement updates the value of the setting in memory. The second statement updates the persisted value in the user.config file on the disk. That second statement is required to get the value back when you restart the program.
It is very, very important to realize that these two statements must be separate and never be written close together in your code. Keeping them close together is harakiri-code. Settings tend to drive features in your code that make it operate differently, and that different behavior isn't always perfectly tested. What you strongly want to avoid is persisting a setting value that subsequently crashes your program.
That's the harakiri angle: if you saved that value, it is highly likely that the program will immediately crash again when the user restarts it. In other words, your program will never run correctly again.
The Save() call must be made when you have a reasonable guarantee that nothing bad happened while the new setting value was in use. It belongs at the end of your Main() method, reached only when the program terminated normally.
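A minimal sketch of where that leaves the Save() call, assuming a standard WinForms Main() (MainForm is a placeholder for your startup form; the setting values themselves are changed in memory elsewhere while the app runs):

[STAThread]
static void Main()
{
    Application.EnableVisualStyles();
    Application.Run(new MainForm());   // settings are only changed in memory while this runs

    // Reached only after the application has shut down normally, so we have a
    // reasonable guarantee the new values didn't break anything before persisting them.
    Properties.Settings.Default.Save();
}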
I have a WPF application that uses Entity Framework. I am going to implement a repository pattern to make interactions with EF simpler and more testable. Multiple clients can use this application, connect to the same database and do CRUD operations. I am trying to think of a way to synchronize the clients' repositories when one of them makes a change to the database. Could anyone give me some direction on how to solve this type of issue, and some patterns that would be beneficial for this type of problem?
I would be very open to any information/books on how to keep clients synchronized, and even on being alerted to what other clients are doing (the only thing I could think of was having a server process running that passes messages around). Thank you.
The easiest way by far to keep every client UI up to date is simply to refresh the data every so often. If it's really that important, you can set a DispatcherTimer to tick every minute, at which point you fetch the latest version of the data being displayed.
Clearly, I'm not suggesting that you refresh an item that is being edited, but if you fetch the fresh data, you can certainly compare it with the collections currently being displayed. Rather than just replacing the old collection items with the new, you can be more user-friendly and just add the new ones, remove the deleted ones and update the ones that have changed.
You could even detect whether an item being currently edited has been saved by another user since the current user opened it and alert them to the fact. So rather than concentrating on some system to track all data changes, you should put your effort into being able to detect changes between two sets of data and then seamlessly integrating it into the current UI state.
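As a rough sketch of the polling idea, assuming a hypothetical RefreshDisplayedData() helper that re-queries the repository for the currently visible items and merges the results as described above:

// e.g. in the main window's constructor or the view model's initialisation
var refreshTimer = new System.Windows.Threading.DispatcherTimer
{
    Interval = TimeSpan.FromMinutes(1)
};
refreshTimer.Tick += (sender, e) =>
{
    // Fetch fresh data and merge it into the existing collections:
    // add new items, remove deleted ones, update the ones that changed.
    RefreshDisplayedData();
};
refreshTimer.Start();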
UPDATE >>>
There is absolutely no benefit in holding a complete set of the data in your application (or repository). In fact, you may well find that it has detrimental effects, due to the extra RAM requirements. If you are polling for data every few minutes, it will always be up to date anyway.
So rather than asking for all of the data all of the time, just ask for what the user wants to see (dependent on which view they are currently in) and update it every now and then. I do this by simply fetching the same data that the view requires when it is first opened. I wrote some methods that compare every property of every item with its older counterpart in the UI and switch old for new.
Think of the Equals method... You could do something like this:
// Assumes the base class declares a matching virtual bool Equals(Release other)
// (otherwise drop the override modifier or implement IEquatable<Release> instead).
public override bool Equals(Release otherRelease)
{
    return base.Equals(otherRelease) && Title == otherRelease.Title &&
        Artist.Equals(otherRelease.Artist) && Artists.Equals(otherRelease.Artists);
}
(Don't actually use the Equals method though, or you'll run into problems later). And then something like this:
if (!oldRelease.Equals(newRelease)) oldRelease.UpdatePropertyValues(newRelease);
And/Or this:
if (!oldReleases.Contains(newRelease)) oldReleases.Add(newRelease);
I'm guessing that you get the picture now.
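For completeness, here is a rough sketch of what the UpdatePropertyValues method used above might look like, assuming Release exposes Title, Artist and Artists properties and implements INotifyPropertyChanged so the UI picks up the changes:

public void UpdatePropertyValues(Release newRelease)
{
    // Copy each property from the freshly fetched item onto the item that is
    // currently displayed, so the existing bindings and UI state stay intact.
    Title = newRelease.Title;
    Artist = newRelease.Artist;
    Artists = newRelease.Artists;
}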