We are using the local storage module for AngularJS:
https://github.com/grevory/angular-local-storage
Can anybody suggest how we can set up a cache dependency on the client side, so that when data changes on the server the local storage is invalidated (forcing the app to fetch fresh data from the server)?
Right now, in the test environment, we have to ask the testers to clear the browser cache after each release. We can't take this approach into production.
Thanks for helping.
Set up a version number in your app, and store it in local storage when the user visits.
When the app is initialised, compare the local storage version with App.version.
If it's different, clear local storage and re-render, reload, re-initialise or whatever you need to do for your app.
To handle users who used your app before you implemented this behaviour, simply ensure it also clears local storage if it cannot find a version key in local storage to begin with.
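A minimal sketch of that check, using the raw localStorage API for brevity (App.version and the appVersion key name are assumptions):

// Sketch only: App.version is assumed to be a global bumped on each release.
declare const App: { version: string };

const storedVersion = localStorage.getItem('appVersion'); // assumed key name

if (storedVersion !== App.version) {
    // Covers both a stale version and a missing key (pre-upgrade users,
    // where getItem returns null): wipe the cache and store the new version.
    localStorage.clear();
    localStorage.setItem('appVersion', App.version);
}

The same comparison works through the module's equivalent get/set wrappers; the point is simply to run it once during app initialisation.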
Just a couple of further notes relating to the comments.
I've had issues with local storage in the past for a few different reasons. The third scenario below goes beyond the scope of the question but is worth mentioning for completeness.
A new release: the application source code has changed, and the data it stores is structured differently. Perhaps it's now JSON.stringified, for example. Or perhaps we were expecting a string and now get an integer, which might break something under strict type checking.
This is solved using the approach described above: the app has changed, so the app has a new version, and it knows it cannot trust data retrieved from local storage for a different version.
The problem you describe: the data on the server has been updated, and the app's locally stored copy is out of date.
How do we tell the app that things have changed? Periodically request some token from the server, perhaps a timestamp, that we can compare to see whether there was a change since we last accessed the data. This question talks about a number of different ways to do this.
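As a sketch of the polling idea (the /api/last-modified endpoint and the dataToken and cachedData keys are hypothetical, not something your server already exposes):

// Periodically ask the server for a change token and invalidate on change.
const POLL_INTERVAL_MS = 60_000;

async function checkForServerChanges(): Promise<void> {
    const response = await fetch('/api/last-modified'); // hypothetical endpoint
    const serverToken = await response.text();
    if (localStorage.getItem('dataToken') !== serverToken) {
        localStorage.removeItem('cachedData'); // assumed name of the cached payload
        localStorage.setItem('dataToken', serverToken);
        // ...trigger a fresh fetch from the server here
    }
}

setInterval(checkForServerChanges, POLL_INTERVAL_MS);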
The JavaScript itself has changed: we have a minified production build, and now there is a new one, but the user still has the old build cached. This is a problem when the server now expects different requests from the ones the cached old build sends.
The simple solution here is to tack a version number of some kind onto the end of the resource URL, so that the browser requests application.min.js?v=2 instead of application.min.js?v=1.
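If you load scripts dynamically, the same trick can be applied in code (a sketch; APP_VERSION is assumed to come from your build process):

// Appending a version query string makes the browser treat each release
// as a new resource instead of serving the cached old build.
const APP_VERSION = '2'; // assumed build-time constant

const script = document.createElement('script');
script.src = `application.min.js?v=${APP_VERSION}`;
document.head.appendChild(script);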
This question is more theoretical than practical, hence I do not have any code to show for it. I'm using a PWA on mobile and I'm storing data in the local storage. So far so good. But if I force-close the app (remove it from running in the background), I believe the local storage gets cleared, which is not ideal. Is there any way to prevent the OS or browser from automatically clearing the local storage?
Here is a useful link that helps you understand the functionality of local storage: https://developer.chrome.com/apps/storage
Instead, use a service worker, as described here.
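A minimal sketch of the idea (the cache name and file list are assumptions): a service worker keeps an offline copy in Cache Storage, which does not depend on the page staying open.

// sw.ts -- minimal service-worker sketch
declare const self: ServiceWorkerGlobalScope;

self.addEventListener('install', (event) => {
    event.waitUntil(
        caches.open('app-cache-v1').then((cache) => cache.addAll(['/', '/index.html']))
    );
});

self.addEventListener('fetch', (event) => {
    event.respondWith(
        caches.match(event.request).then((cached) => cached ?? fetch(event.request))
    );
});

Register it from the page with navigator.serviceWorker.register('/sw.js').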
I am trying to add offline usage to an app. I simply need all the work done by NSURLRequest / NSURLCache, while being able to choose exactly the disk storage location, so I can put it in "/Library/Application Support/whatever" where it won't ever be deleted (without forgetting the flag so it's not synced to iCloud / iTunes).
I feel like I have to do all the work myself, and I've run into a first issue. NSURLCache keeps in memory a dictionary where the keys are the NSURLRequests and the values are the associated NSCachedURLResponses. I'm doing the same, but then I can't write this dictionary to disk as it isn't made of basic types.
Do you have an idea of how to write such a dictionary to disk?
I am in a similar situation: I need a cache that can be used when the app is offline, or until the app parses new data.
AFAIK everyone would recommend this: https://github.com/steipete/SDURLCache.
But theoretically, in iOS 6, NSURLConnection writes the cache to disk and you can use that cache as an offline cache, but I still have to find out how.
So partial answer, will try to find out more and update the answer. :)
OK, so I did the dumb thing and released production code (C#, VS2010) that targeted our development database (SQL Server 2008 R2). Luckily we are not using the production database yet so I didn't have the pain of trying to recover and synchronize everything...
But, I want to prevent this from happening again when it could be much more painful. My idea is to add a table I can query at startup and determine what database I am connected to by the value returned. Production would return "PROD" and dev and test would return other values, for example.
If it makes any difference, the application talks to a WCF service to access the database so I have endpoints in the config file, not actual connection strings.
Does this make sense? How have others addressed this problem?
Thanks,
Dave
The easiest way to solve this is to not have access to production accounts. Those are stored in the Machine.config file for our .NET applications. In non-.NET applications this is easily duplicated by having a config file in a common location, or (dare I say) a registry entry which holds the account information.
Most of our servers are accessed through aliases too, so no one really needs to change the connection string from environment to environment: just grab the user from the config, and the server alias in the hosts file points you to the correct server. This also removes the headache of having to update all our config files when we switch DB instances (change hardware, etc.).
So even with the ClickOnce deployment and the endpoints, you can publish a new endpoint URI in a machine config on the end user's desktop (I'm assuming this is an internal application) and then reference that in the code.
If you absolutely can't do this because it would be a lot of work (the last place I worked had 2000 call center people, so this push was a lot more difficult, but still possible), you can always set up an automated build server which modifies the app.config file for you as the last step of building the application. You then ALWAYS publish the compiled code from the automated build server. Never make changing the app.config for something like this a manual step in the developer's process; that will always lead to problems at some point.
Now if none of this works, your final option (I've done this one too), which I hated but which worked, is to look up the value from a mapped drive. Essentially, everyone in the company maps a drive to, say, R:. This is where you keep your production configuration files, etc. The prod account people map that drive to one location with the production values, and the devs etc. map it to another with the development values. I hate this option compared to the others, but it works, and it can save you in a pinch when the others become tedious and difficult (due to, say, office politics, or setting up a build server).
I'm assuming your production server has a different name than your development server, so you could simply SELECT @@SERVERNAME AS ServerName.
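As a sketch of the startup guard the asker described (TypeScript for illustration -- the same idea applies in C# -- and runScalarQuery is a hypothetical stand-in for however your data access layer executes a one-value query):

// Hypothetical helper: runs a SQL statement and returns the single value.
declare function runScalarQuery(sql: string): Promise<string>;

const EXPECTED_SERVER = 'DEVSQL01'; // assumed name of the expected server

async function assertExpectedDatabase(): Promise<void> {
    const serverName = await runScalarQuery('SELECT @@SERVERNAME');
    if (serverName !== EXPECTED_SERVER) {
        // Fail fast rather than silently running against the wrong database.
        throw new Error(`Unexpected database server: ${serverName}`);
    }
}

Call it once at startup, before the app touches any data.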
Not sure if this answer helps you in an assumed .NET environment, but within a *nix/PHP environment, this is how I handle the same situation.
There are times when some app behavior is environment-dependent, as you alluded to. To provide the ability to check between development and production environments, I added the following line to a global /etc/profile.d/custom.sh config file (CentOS):

export SERVICE_ENV=dev

And in code I have a wrapper method which grabs an environment variable by name and localizes its value, making it accessible to my application code. Below is a snippet demonstrating how to check the current environment and react accordingly (in PHP):
// Magic method that routes calls such as $log->debug(...), $log->error(...)
public function __call($method, $params)
{
    // Reduce chatter on production envs:
    // only display debug messages if an override flag told us to
    if (($method === 'debug') &&
        (CoreLib_Api_Environment_Package::getValue(CoreLib_Api_Environment::VAR_LABEL_SERVICE) === CoreLib_Api_Environment::PROD) &&
        (!in_array(CoreLib_Api_Log::DEBUG_ON_PROD_OVERRIDE, $params))) {
        return;
    }

    // ...otherwise fall through to the actual log-writing implementation
}
Remember, you don't want to pepper your application logic with environment checks, save for a few extreme use cases like the one demonstrated in the snippet. Rather, you should control access to your production databases using DNS. For example, within your development environment the DB hostname mydatabase-db would resolve to a local server instead of your actual production server. And when you push your code to the production environment, DNS will correctly resolve the hostname, so your code should "just work" without any environment checks.
After hours of wading through textbooks and tutorials on MSBuild and app.config manipulation, I stumbled across something called SlowCheetah - XML Transforms (http://visualstudiogallery.msdn.microsoft.com/69023d00-a4f9-4a34-a6cd-7e854ba318b5) that did what I needed it to do less than an hour after I first stumbled across it. Definitely recommended! From the article:
This package enables you to transform your app.config or any other XML file based on the build configuration. It also adds additional tooling to help you create XML transforms.
This package is created by Sayed Ibrahim Hashimi, Chuck England and Bill Heibert, the same Hashimi who authored THE book on MSBuild. If you're looking for a simple, ubiquitous way to transform your app.config, web.config or any other XML file based on the build configuration, look no further -- this VS package will do the job.
Yeah I know I answered my own question but I already gave points to the answer that eventually pointed me to the real answer. Now I need to go back and edit the question based on my new understanding of the problem...
Dave
I'm assuming your production server has a different IP address. You can simply use
SELECT CONNECTIONPROPERTY('local_net_address') AS local_net_address
I want to take on a project, but I’m not sure how to handle the updating process.
Normally, when asked to update a site, you back-up the database & site files, then make the updates locally or on a development server. Then when the updates are finished, you push them live.
My problem is that the site I'll be working on registers new members every day, makes blog posts every day, and gets new comments on those posts every day. If I were to pull the site on Monday, update it in a testing environment, then push those changes live on Friday, every member who signed up and every blog entry written during the week would be overwritten.
So what’s the best way to go about doing this? How do I update/add features to a site without losing the data gained on the live site during development? Surely it must be possible, since high-traffic sites like TechCrunch and Gizmodo make huge sitewide updates all the time without losing data.
It depends on what changes you're making. Is it file/template changes or database changes?
If it's just file changes, pull the files and database to your local server, make your changes to the files, and then push them (files only) to the live server when done. As long as no database changes have happened, that will work.
If there are DB changes, things get a bit trickier. You would basically follow the same process, but make note of any DB changes you make on the local site. When everything is ready to be pushed to the live server, you have no option but to take the site offline for users while you update.
You would then push all updated files to the live server and mirror any DB changes you made on the local server (install/update plugins, etc.). When all that is done and tested, you can put the site back online. Downtime should be minimal if you have made good notes on the DB changes.
This depends on being able to block access for users while still allowing access for yourself, but that's standard with most CMSs.
Also, if you don't already, you should look at integrating Git into your workflow. If the changes you'll be making take a considerable amount of time, you'll need a system in place where you can branch your code off into new versions while still keeping the original state of the code that's on the live server.
That way, if there is an urgent fix that needs doing to the live site while you're in the middle of developing new features locally, you can switch back to your master/original branch and make changes to the code without including any of the new stuff you have been working on in the other branch.
Well, I've only done this for small-traffic WordPress/Drupal sites, but not having a "live" version hasn't been an issue for me. I have my development copy, test the changes I want, and then roll those changes out to the live site on the fly by FTPing them back up.
Are you going to be editing these registrations? Or are you just tweaking static files?
In the case of WordPress, I test a plugin out and then just install it on the live site.
Typically the changes I'm making involve plugins/modules and some PHP stuff. This is obviously not the most nuanced solution, and I'm interested to see what more knowledgeable people have in mind.
Assumption: live/production web app suppresses errors being shown to end-users.
Suppose your tech support team wants to see live data but through the eyes of the development-side of the application (maybe you want to see what errors are occurring, or want to see when you've got an issue fixed using an end-user's data).
Right now we've got one database serving both the dev and live boxes (not my idea - I know it's gross).
Ideas?
Edit: Best/handy tools for implementing your suggestion?
We replicate the data back to a different database. Yes, there is a delay, but it keeps people's hands out of the production servers. This also allows us to "hide" information that tech support (and other people, for that matter) aren't supposed to see.
In addition to replicating data down, on production we check who's logged into the application, and if it's a member of the company, we send them to the real error page instead of the happy kitten playing with a ball of yarn apologizing.
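A sketch of that branch (an Express app and the session/user shape are assumptions, not part of the original setup):

// Error handler: company members see the real error, end users the friendly page.
import express from 'express';

const app = express();

app.use((err: Error, req: express.Request, res: express.Response,
         next: express.NextFunction) => {
    const user = (req as any).session?.user; // assumed session shape
    if (user?.isEmployee) {
        res.status(500).send(`<pre>${err.stack}</pre>`); // full details for staff
    } else {
        res.status(500).send('Sorry, something went wrong.'); // the kitten page
    }
});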
Back up and restore from live to dev on a regular basis (once or twice a day). It doesn't need to be real-time (you might be entering data from the dev side anyway, which could cause problems).
If you have PCI or HIPAA data, make sure you don't put it in your dev environment -- that might break laws.
I generally like to have a 3-tier system for web development:
Development
Testing
Live
Most of the time, testing is an exact copy of the live system, except that errors are turned on. When a new version is about to be moved live, testing is replaced with the new version BEFORE live is, to detect upgrade issues.
Development is completely separate from live, to allow for major changes to things like the database, or changes to the production environment.
I would firstly make sure errors are either emailed to someone with details of how the user got there, or at minimum logged, so you can watch the error log while you perform similar actions to see if you get the same messages in the log.
And yes, copying the database to the dev server/site is probably your only option. You don't want the development team making any changes to live data, and you'll probably also have changes that won't work with the production database at some point.
I wouldn't recommend doing a nightly copy, as a developer might be in the middle of some new feature where they have added data, and it would then be erased that night. I usually copy the production database(s) to dev each time a major version is released. This also allows me to do speed testing with a lot of live data. On some systems I also change everyone's password to a default so I can log in easily as any user.
If your configuration permits it:
a. Add a logging function (if there isn't one already) to write messages of interest to a log file; a minimal sketch of such a function follows at the end of this answer.
b. Run the Unix command
tail -f logfile.txt
which will stream the growing log file to your console.
http://www.monkey.org/cgi-bin/man2html?tail
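For part (a), a minimal sketch of such a logging function (Node.js is assumed here; the file name matches the tail example above):

// Append timestamped messages to the same logfile.txt that tail -f follows.
import { appendFileSync } from 'fs';

function logMessage(message: string): void {
    appendFileSync('logfile.txt', `${new Date().toISOString()} ${message}\n`);
}

logMessage('user reached checkout'); // example usage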
If you have Windows, you might try this:
http://tailforwin32.sourceforge.net/