In Silverlight, how can we persist data between different pages and controls?
In our application, we plan to have a central data object that tracks user changes from different pages and controls.
How can we achieve this?
Like you mention, you could use an application-level (global) data object: implement it as a singleton and it will be available to all pages/controls. You can then add properties to the global object and track state with it. You may encounter issues if multiple threads access the same property at the same time; either work out a synchronization method or avoid situations where two threads could compete to set the same value.
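For illustration, here is a minimal sketch of such a singleton with locked property access; the class and property names are hypothetical:

    // Application-level state object, one instance shared by all pages/controls.
    public sealed class AppState
    {
        private static readonly AppState instance = new AppState();
        private readonly object syncRoot = new object();
        private string currentUser;

        private AppState() { }

        public static AppState Instance
        {
            get { return instance; }
        }

        // Lock around reads and writes so two threads cannot race on the same value.
        public string CurrentUser
        {
            get { lock (syncRoot) { return currentUser; } }
            set { lock (syncRoot) { currentUser = value; } }
        }
    }

Any page or control can then read or write AppState.Instance.CurrentUser without holding its own copy of the state.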
Another possible option is to use IsolatedStorage. This is more of a data store, but it is very useful for keeping data between different runs of your application (i.e. you can save data into it when the user shuts down your app and read it back when they run it the next day).
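As a quick sketch, Silverlight exposes this through IsolatedStorageSettings; the key and value here are made up:

    using System.IO.IsolatedStorage;

    // Save a value so it survives application restarts.
    var settings = IsolatedStorageSettings.ApplicationSettings;
    settings["lastVisitedPage"] = "/Views/Orders.xaml";
    settings.Save();

    // On the next run, read it back, guarding against a missing key.
    string lastPage;
    if (settings.TryGetValue("lastVisitedPage", out lastPage))
    {
        // navigate to lastPage...
    }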
I will be storing many small data strings in both scoped model and shared preferences. My question is: are there any significant speed differences when retrieving this data from either of these sources?
Since I will be doing many "sets" and "gets", I would like to know if anybody has seen any difference in performance using one over the other.
I understand that shared preferences is persistent and scoped model is not; however, after the app is loaded the data is synced, and I would rather access the data from the fastest source.
Firstly, understand that they are not alternatives. You will likely want to back certain parts of your model with shared preferences, and this can be done behind scoped model (or BLoC, etc.). Note that simply updating a shared preference will not trigger a rebuild, which is why you should use one of the shared state patterns and then have it persist the relevant items to shared preferences.
Shared preferences is actually implemented as an in-memory map that triggers a background write to storage on each update. So 'reads' from shared preferences are inexpensive.
I would like to use db4o for persisting my business objects in a Prism application. How should I manage the IObjectContainer's lifetime? As I understand from the documentation, when I load an object with one container I should save it with the same one, so some kind of singleton scope seems right. But doesn't the container keep a reference to every object that goes through it, and because of this, doesn't it cause something like a memory leak?
I read something about Conversation per Business Transaction, but that was for NHibernate, and I guess NHibernate's session and db4o's container are totally different things.
Just to be clear, I am talking about a desktop application with embedded db4o. So, no server/client.
For desktop applications it's usually easier to have a global container. That way you just can store / update objects without any issues. So singleton scope should be the right one.
The db4o container only holds weak references to objects. That means it should never prevent objects from being collected.
In my desktop app with db4o we have a single object container. After each logical operation we just commit to persist all changes.
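If it helps, here is a minimal sketch of that arrangement; the file name is made up, and the two-argument OpenFile overload reflects the db4o 8.x embedded API, so adjust for your version:

    using Db4objects.Db4o;

    // Singleton-scoped container for an embedded db4o database.
    public static class Database
    {
        private static IObjectContainer container;

        public static IObjectContainer Instance
        {
            get
            {
                if (container == null)
                {
                    container = Db4oEmbedded.OpenFile(
                        Db4oEmbedded.NewConfiguration(), "app.db4o");
                }
                return container;
            }
        }

        // Call once on application shutdown.
        public static void Shutdown()
        {
            if (container != null)
            {
                container.Close();
                container = null;
            }
        }
    }

A typical logical operation is then Database.Instance.Store(myObject) followed by Database.Instance.Commit(). The weak references mentioned above mean this long-lived container won't keep your objects alive on its own.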
I have a Silverlight Windows Phone 7 app that pulls data from a public API. I find myself doing much of the same thing over and over again:
In the UI, set a loading message or loading progress bar in place of where the content is
Get the content, which may be already in memory, cached in isolated file storage, or require an HTTP request
If the content cannot be acquired (no network connection, etc.), display an error message
If the content is acquired, display it in the UI
Keep the content in main memory for subsequent queries
The content that is displayed to the user can be taken directly from a data source, such as an ObservableCollection, or it may be a query on a data source.
I would like to factor out this repetitive process into a framework where ideally only the following needs to be specified:
Where to display the content in the UI
The UI elements to show while loading, on failure, and on success
The URI of the HTTP request
How to parse the HTTP response into the data structure that will be kept in memory
The location of the file in isolated storage, if it exists
How to parse the file contents into the data structure that will be kept in memory
It may sound like a lot, but two strings, three FrameworkElements, and two methods are less than the overhead I currently have.
Also, this needs to work for however the data is maintained in memory, and needs to work for direct collections and queries on those collections.
My questions are:
Has something like this already been implemented?
Are my thoughts about the topic above fundamentally wrong in some way?
Here is a design I'm thinking of:
There are two components, a View and a Model.
The View is given the FrameworkElements for loading, failure, and success. It is also given a reference to the corresponding Model. The View is a UserControl that is placed somewhere in the UI.
The Model is a class that is given the URI for the data, a method of how to parse the data, and optionally a filename and how to parse the file. It is responsible for retrieving the data and notifying the View whenever the current status (loading/fail/success) changes. If the data downloaded from the network is different from the cache, the network data takes precedence. When the app closes or is tombstoned, the Model writes the data to the cache.
How does that sound?
I took some time to have a good read of your requirements and noted some thoughts to offer as a sounding board.
Firstly, for repetitive tasks with common behaviour this is definitely the way to approach it. You are not alone in thinking about this problem.
People doing a bunch of this sort of thing may have created similar abstractions; however, to my knowledge none have been publicly released.
How far you go with it may depend if you intend it to be just for your own use and for those with very similar requirements or whether you want to handle more general cases and make a product that is usable by a very wide audience.
I'm going to assume the former, but that does not preclude the possibility of releasing it as an open source project that can be developed further and/or forked.
By not trying to cater for all possibilities you can make certain assumptions about the nature of the consuming implementation, and in particular about UI design choices.
I think, overall, your thinking is in the right direction. While reading some of your high-level thoughts I considered how some things could be simplified (a good thing) while still delivering a compelling UI.
On your initial points.
You could just assume a performant indeterminate progress bar is being passed in.
Do this if it's important to you, but you could be buying yourself into some complexity here handling different caching requirements (variance in duration or dirty handling). It may be sufficient to lean on the platform's built-in caching of URLs (which some people have found gets in their way).
Handle network connectivity, yep this is repetitive and somewhat intricate. A perfect candidate for a general solution.
Update UI... arguably better to just return data and defer decisions regarding presentation and format of data to your individual clients.
Content in main memory - see above on caching.
On your potential inputs.
Where to display content - see above re data and defer presentation choices to client.
I would go with a UI element for the progress indicator, again a performant progress bar. Regarding communication of failure I would consider implementing this in a Completed event which you publish. Then through parameters you can communicate the result and defer handling to the client to place that result in some presentation control/log/whatever. This is consistent with patterns used by the .Net Framework.
URI - yes, this gets passed in.
How to parse - passing in a delegate to convert a stream or string into an object whose type can be decided by the client makes sense (see the sketch after this list).
Loc of cache - you could pass this in if generalising matters, or hard-code its path. It would be more useful to others if passed in (consider whether you handle folders/creation).
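To make the above concrete, here is a rough skeleton of such a loader; the type and member names (DataLoader, LoadCompletedEventArgs) are only illustrative, and it assumes the stock WebClient for the HTTP request:

    using System;
    using System.Net;

    public class LoadCompletedEventArgs<T> : EventArgs
    {
        public T Result { get; set; }
        public Exception Error { get; set; }
        public bool Success { get { return Error == null; } }
    }

    public class DataLoader<T>
    {
        private readonly Uri uri;
        private readonly Func<string, T> parse;

        // Raised on completion; the client decides how to present the outcome.
        public event EventHandler<LoadCompletedEventArgs<T>> Completed;

        public DataLoader(Uri uri, Func<string, T> parse)
        {
            this.uri = uri;
            this.parse = parse;
        }

        public void LoadAsync()
        {
            var client = new WebClient();
            client.DownloadStringCompleted += (s, e) =>
            {
                var args = new LoadCompletedEventArgs<T>();
                if (e.Error != null)
                {
                    args.Error = e.Error;   // no connectivity, HTTP failure, ...
                }
                else
                {
                    try { args.Result = parse(e.Result); }
                    catch (Exception ex) { args.Error = ex; }
                }
                var handler = Completed;
                if (handler != null) handler(this, args);
            };
            client.DownloadStringAsync(uri);
        }
    }

The client wires up Completed, shows the progress bar, calls LoadAsync(), and on the event hides the progress bar and presents either the result or the failure however it sees fit.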
On the implementation.
You could go with a UserControl, if it works for you to be bound by that assumption. It would be more flexible though, and arguably equally simple/elegant, to push presentation back on the client for both the data display and status messages and control hide/display of the progress bar as passed in.
Perhaps you would go so far as to assume the status messages would always be displayed in a textblock (if passed) and shift that housekeeping from each of your clients into your generic class.
I suspect you will still benefit from not coupling the data format and the presentation.
Tombstone handling... I would recommend some testing of the platform's built-in caching of URLs here to see whether its duration/dirty conditions work for your general cases.
Hopefully this gives you some things to think about and some reassurance you're heading down the right path. There are many ways you could go about this. Which is the best path ultimately will be driven by your goals.
I'm developing a WP7 application which is basically a client of an existing REST API. The server returns data in JSON. With the help of the library JSON.NET (http://json.codeplex.com/) I was able to deserialize it directly to my .NET C# classes.
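For example (MyItem here is a hypothetical class shaped like the API's JSON):

    using System.Collections.Generic;
    using Newtonsoft.Json;

    // Hypothetical DTO matching the API's JSON shape.
    public class MyItem
    {
        public string Id { get; set; }
        public string Title { get; set; }
    }

    // json is the raw string returned by the REST API.
    MyItem item = JsonConvert.DeserializeObject<MyItem>(json);
    List<MyItem> items = JsonConvert.DeserializeObject<List<MyItem>>(jsonArray);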
I store the data locally to handle the offline scenario of my application and also to prevent a call to the server each time the user launches the application. I provide two ways to refresh the data: manually and/or after a period of time. To store the data I use Sterling (http://sterling.codeplex.com/); it's a simple and easy-to-use local database for Silverlight/WP7.
The biggest challenge is to handle the asynchronous communication with the server. I provide clear UI feedback (progress bar and/or loading wheel) to let the user know what's going on.
On a side note, I'm using the MVVM Light toolkit and SL Unit Testing to do integration tests: ViewModel => my local client code => server. (http://code.google.com/p/nunit-silverlight/wiki/NunitTestsWp7)
I am in the middle of developing a WPF application that is using Entity Framework (.NET 3.5). It accesses the entities in several places throughout, and I am worried about consistency across the application with regard to the entities. Should I be instantiating separate contexts in my different views, or should I (and is there a good way to do this) instantiate a single context that can be accessed globally?
For instance, my entity model has three sections: Shipments (with child packages and further child contents), Companies/Contacts (with child addresses and telephones), and disk specs. The Shipments and EditShipment views access the DiskSpecs, and the OptionsView manages the DiskSpecs (create, edit, delete). If I edit a DiskSpec, I have to have something in the ShipmentsView to retrieve the latest specs if I have separate contexts, right?
If it is safe to have one overall context from which the rest of the app retrieves its objects, then I imagine that is the way to go. If so, where would that instance be put? I am using VB.NET, but I can translate from C# pretty well. Any help would be appreciated.
I just don't want one of those applications where the user has to hit reload a dozen times in different parts of the app to get the new data.
Update:
OK so I have changed my app as follows:
All contexts are created in Using blocks to dispose of them after they are no longer needed.
When loaded, all entities are detached from the context before it is disposed.
A new property in the MainViewModel (ContextUpdated) raises an event that all of the other ViewModels subscribe to, which runs that ViewModel's RefreshEntities method.
After implementing this, I started getting errors saying that an entity can only be referenced by one ChangeTracker at a time. Since I could not figure out which context was still referencing the entity (there shouldn't be any context, right?), I cast the object to IEntityWithChangeTracker and called SetChangeTracker(Nothing) (null).
This has led to the current problem:
When I null the change tracker on the entity and then attach it to a context, it loses its changed state and does not get updated to the database. However, if I do not null the change tracker, I can't attach. I have my own change-tracking code, so that is not a problem.
My new question is, how are you supposed to do this? A good example of entity query and entity save code would go a long way, because I am beating my head trying to get what I once thought was a simple transaction to work.
A global static context is rarely the right answer. Consider what happens if the database is reset during the execution of this application - your SQL connection is gone and all subsequent requests using the static context will fail.
Recommend you find a way to have a much shorter lifetime for your entity context - open it, do some work, dispose of it, ...
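A minimal sketch of that short-lived pattern, assuming a generated EF 3.5 model with hypothetical MyEntities and Shipments names:

    using System.Linq;

    using (var context = new MyEntities())
    {
        // Open, do some work, dispose.
        var shipment = context.Shipments.Where(s => s.Id == shipmentId).First();
        shipment.Status = "Delivered";
        context.SaveChanges();
    }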
As far as putting your different objects in the same EDMX goes, that's almost certainly the right answer: if they have any relationships between objects, you'll want them in the same EDMX.
As for reloading, the user should never have to do this. Behind the scenes you can open a new context, reload the current version of the object from the database, apply the changes they made in the UI, and then save it back.
You might want to look at detached entities also, and beware of optimistic concurrency exceptions when you try to save changes and someone else has changed the same object in the database.
Good question, Cory. Thumb up from me.
Entity Framework gives you a free choice: you can either instantiate multiple contexts or have just one, static. It will work well in both cases, and yes, both solutions are safe. The only valuable advice I can give you is: experiment with both, measure performance, delays, etc., and choose the best one for you. It's fun, believe me :)
If this is going to be a really massive application with tons of concurrent connections, I would advise using one static context, or one static core context with just a few additional ones to support the main one. But, as I wrote just a few lines above, it's up to your requirements which solution is better for you.
I especially liked this part of your question:
I just don't want one of those applications where the user has to hit reload a dozen times in different parts of the app to get the new data.
WPF is a really, really powerful tool, and basically the times when users had to press buttons to refresh data are gone forever. WPF gives you a really wide range of asynchronous, multithreading tools, such as the Dispatcher class or BackgroundWorker, to gently refresh desired data in the background. This is really great because not only do you not have to worry about pressing various buttons, but background threads also don't block the UI, so data is refreshed transparently from the user's point of view.
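As a small hedged sketch, a BackgroundWorker refresh might look like this (LoadShipments and the Shipments property are hypothetical):

    using System.Collections.ObjectModel;
    using System.ComponentModel;

    var worker = new BackgroundWorker();
    worker.DoWork += (s, e) =>
    {
        // Runs on a thread-pool thread: a safe place for the slow query.
        e.Result = LoadShipments();
    };
    worker.RunWorkerCompleted += (s, e) =>
    {
        // Marshalled back to the UI thread: safe to update bound collections.
        Shipments = (ObservableCollection<Shipment>)e.Result;
    };
    worker.RunWorkerAsync();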
WPF together with Entity Framework are really worth the effort of learning - please feel free to ask if you have any further concerns.
I have an internet application that supports an offline mode where users might create data that will be synchronized with the server when the user comes back online. Because of this, I'm using UUIDs for identity in my database, so disconnected clients can generate new objects without fear of using an ID taken by another client, etc. However, while this works great for objects that are owned by the user, there are objects that are shared by multiple users. For example, tags used by a user might be global, and there's no possible way the remote database could hold all possible tags in the universe.
Suppose an offline user creates an object and adds some tags to it, and those tags don't exist in the user's local database, so the software generates a UUID for them. When those tags are synchronized there would need to be a resolution process to resolve any overlap, some way to match up existing tags in the remote database with the local versions.
One way is to use some process by which global objects are resolved by a natural key (the name, in the case of a tag), and the local database has to replace its existing object with the one from the global database. This can be messy when there are many connections to other objects. Something tells me to avoid this.
Another way to handle this is to use two IDs. One global ID and one local ID. I was hoping using UUIDs would help avoid this, but I keep going back and forth between using a single UUID and using two split IDs. Using this option makes me wonder if I've let the problem get out of hand.
Another approach is to track all changes through the non-shared objects. In this example, the object the user assigned the tags. When the user synchronizes their offline changes the server might replace his local tag with the global one. The next time this client synchronizes with the server it detects a change in the non-shared object. When the client pulls down that object he'll receive the global tag. The software will simply resave the non-shared object pointing it to the server's tag and orphaning his local version. Some issues with this are extra round trips to fully synchronize, and extra data in the local database that is just orphaned. Are there other issues or bugs that could happen when the system is in between synchronization states? (i.e. trying to talk to the server and sending it local UUIDs for objects, etc).
Another alternative is to avoid common objects. In my software that could be an acceptable answer. I'm not doing a lot of sharing of objects across users, but that doesn't mean I'd NOT be doing it in the future. Which means choosing this option could paralyze my software in the future should I need to add these types of features. There are consequences to this choice, and I'm not sure if I've completely explored them.
So I'm looking for any sort of best practice, existing algorithms for handling this type of system, guidance on choices, etc.
Depending on what application semantics you want to offer users, you may pick different solutions. E.g., if you are actually talking about tagging objects created by an offline user with a keyword, and wanting to share the tags across multiple objects created by different users, then using the "text" as the tag's identity is fine, as you suggested. Once everyone's changes are merged, tags with the same text, like, say, "THIS IS AWESOME", will be shared.
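A tiny sketch of that idea, with made-up tag values: when the text itself is the identity, the merge is just a set union.

    using System.Collections.Generic;

    var serverTags = new HashSet<string> { "THIS IS AWESOME", "todo" };
    var offlineTags = new HashSet<string> { "todo", "urgent" };

    // Merging is a union; tags with the same text collapse into one.
    serverTags.UnionWith(offlineTags);  // { "THIS IS AWESOME", "todo", "urgent" }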
There are other ways to handle disconnected updates to shared objects. SVN, CVS, and other version control systems try to resolve conflicts automatically, and when they cannot, just tell the user there is a conflict. You can do the same: tell the user there have been concurrent updates and leave the resolution to them.
Alternatively, you can also log updates as units of change, and try to compose the changes together. For example, if your shared object is a canvas, and your application semantics allows shared drawing on the same canvas, then a disconnected update that draws a line from point A to point B, and another disconnected update drawing a line from point C to point D, can be composed. In this case, if you keep those two updates as just two operations, you can order the two updates and on re-connection, each user uploads all its disconnected operations and applies missing operations from other users. You probably want some kind of ordering rule, perhaps based on version number.
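A rough sketch of that operation-log idea, with illustrative types (DrawLineOp is made up):

    using System.Collections.Generic;
    using System.Linq;

    // One disconnected update, kept as a composable operation.
    public class DrawLineOp
    {
        public long Version { get; set; }  // ordering rule, e.g. assigned at upload
        public double X1, Y1, X2, Y2;
    }

    public static class OperationLog
    {
        // On reconnection: take both logs and apply them in one deterministic order.
        public static IEnumerable<DrawLineOp> Merge(
            IEnumerable<DrawLineOp> local, IEnumerable<DrawLineOp> remote)
        {
            return local.Concat(remote).OrderBy(op => op.Version);
        }
    }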
Another alternative: if updates to shared objects cannot be automatically reconciled, and your application semantics do not support notifying the user and asking them to resolve conflicts due to disconnected updates, then you can use a version tree to handle this. Each update to a shared object creates a new version, with the past version as the parent. When there are disconnected updates to a shared object from two different users, two separate child versions/leaf nodes result from the same parent version. If your application's internal representation of state is this version tree, then its internal state remains consistent despite disconnected updates, and you can handle the two branches of the version tree in some other way (e.g. letting the user know of the branches and creating tools for them to merge branches, as in source control systems).
Just a few options. Hope this helps.
Your problem is quite similar to that of versioning systems like SVN. You could take an example from those.
Each user would have a set of personal objects plus any shared objects that they need. Locally, they will work as if they own all the objects.
During sync, the client would first download any changes in the objects, and automatically synchronize what is obvious. In your example, if there is a new tag coming from the server with the same name, then it would update the UUID correspondingly on the local system.
This would also be a nice place in which to detect and handle cases like data committed from another client, but by the same user.
Once the client has an updated and merged version of the data, you can do an upload.
There will be two round trips, but I see no way of doing this without overcomplicating the data structure and creating potential pitfalls in the way you do the sync.
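For what it's worth, the merge step for tags might be sketched like this, matching by the natural key (the tag name) and remapping local UUIDs to the server's; all names are illustrative:

    using System;
    using System.Collections.Generic;

    // Returns a map from local tag IDs to the server IDs they should become.
    public static Dictionary<Guid, Guid> BuildTagRemap(
        IDictionary<string, Guid> serverTagsByName,
        IDictionary<string, Guid> localTagsByName)
    {
        var remap = new Dictionary<Guid, Guid>();
        foreach (var local in localTagsByName)
        {
            Guid serverId;
            if (serverTagsByName.TryGetValue(local.Key, out serverId)
                && serverId != local.Value)
            {
                // Rewrite references to use the server's ID; the local tag is orphaned.
                remap[local.Value] = serverId;
            }
        }
        return remap;
    }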
As a totally out of left-field suggestion, I'm wondering if using something like CouchDB might work for your situation. Its replication features could handle a lot of your online/offline synchronisation problems for you, including mechanisms to allow the application to handle conflict resolution when it arises.