under what circumstances (if any) can I continue to run "out of date" GWT clients when I update my GAEJ version? - google-app-engine

following on from this question:
GWT detect GAE version changes and reload
I would like to further clarify some things.
I have an enterprise app (GWT 2.4 & GAEJ 1.6.4, using GWT-RPC) that my users typically run all day in their browsers; indeed, some don't bother refreshing the browser from day to day. I make new releases on a pretty regular basis, so I am trying to streamline the process for minimal impact on my users. Not all releases concern all users, so I'd like to minimize the number of restarts.
I was hoping it might be possible to categorize my releases as follows:
1) releases that will cause an IncompatibleRemoteServiceException to be thrown, and
2) those that don't, i.e. releases that only affect the server, or the client, but not the RPC interface.
Then I could make lots of changes to the client and server without affecting the interface between the two. As long as I don't modify the RPC interface, presumably I can change server code and/or client code and the exception won't be thrown, right? Or will any redeployment to GAE cause an old client to get an IncompatibleRemoteServiceException?
If I were able to do that, I could batch up interface-busting changes into fairly infrequent releases and notify my users that a restart will be required.
Many thanks for any help.

I needed an answer pretty quickly, so I thought I'd just do some good old-fashioned testing to see what's possible. Hopefully this will be useful for others with production systems using GWT-RPC.
The goal is to be able to release updates and fixes without requiring all connected browsers to refresh. It turns out there is quite a lot you can do.
So, after my testing, here's what you can and can't do:
No problem:
add a new call to a RemoteService
update only some code on the server, e.g. a simple bug fix, and redeploy
update only some client (GWT) code and redeploy (of course, anyone wanting the new client functionality will have to refresh their browser, but others are unaffected)
Limited problems:
add a parameter to an existing RemoteService method - this one is interesting: that particular call will throw an IncompatibleRemoteServiceException (of course), but all other calls to the same RemoteService, or to other RemoteServices (Impls), are unaffected.
add a new type (as a parameter) to any method within a RemoteService - this is the most interesting one, and is what led me to do this testing. It will render that whole RemoteService out of date for existing clients, with IncompatibleRemoteServiceException thrown on every call to it. However, you can still use other RemoteServices. I need to do some more testing here to fully understand this, or perhaps someone else knows more?
So if you know what you're doing, you can do quite a lot without having to bother your users with refreshes or release announcements.
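To make the categories concrete, here is a minimal sketch (the service and method names are made up, not from my app) of a backward-compatible change versus a breaking one:

import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

@RemoteServiceRelativePath("account")
public interface AccountService extends RemoteService {
    // Existing method: leaving its signature untouched keeps old clients working.
    String getAccountName(String accountId);

    // Safe: a brand-new method. Old clients never call it, so they are unaffected.
    int getAccountCount();

    // Breaking (per the tests above): adding a parameter to an existing method,
    // e.g. changing getAccountName(String) to getAccountName(String, boolean),
    // makes old clients' calls to it throw IncompatibleRemoteServiceException.
}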

Related

Will Jira complain if I set the Resolution date to a date before the creation date via a direct DB write?

Some colleagues were using an Excel file to keep track of some issues, and they have decided to switch to a better system, asking me to set up a Jira project for them and to import all the tickets. One way or another I have done everything, but the resolution date is now wrong, because it is the date when I ran the script to import the tickets into Jira. They would like to have the original one, so that they can know when an issue was really fixed. Unfortunately there's no way to change it from Jira's interface, so I have to access the DB directly. The command, for the record, is like:
update jiraissue
set RESOLUTIONDATE = '2015-02-16 14:48:40'
where pkey = 'OV001-1';
Now, low-level writes to a database are dangerous in general, and I am wondering whether there can be any risks. Our test server is not available right now, so I'd have to work directly on the production one. One thing I had seen on our test server is that this seemed to work, except that JQL queries such as
resolved < 2015-03-20
are wrong, because they still use the old resolution date. Clearly, I have to reindex; but I'm wondering whether it is safe. Does Jira perform some consistency checks, like verifying that a ticket is resolved after it is created? In my case, since I have modified the resolution date but not the creation date, there is a clear inconsistency. Will Jira complain about this? Is there a risk of corrupting the DB? And if I also modify the creation date, do I have to watch out for other things?
We are using Jira 5.2.11.
I have access to the test server again, and I have tried it. I modified all the RESOLUTIONDATE fields I had to fix, and when I reloaded the page the new date was there. Jira didn't complain about anything. I reindexed the server, so that queries yield correct results, and I saw no issues. Then I even ran the integrity checks (Administration -> System -> Integrity Checker), and no error was found.
Finally I did the same on the production server, and again everything is running fine.
I can therefore conclude that the operation is not dangerous, and it can be done safely.
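For anyone repeating this, a quick sanity query along these lines (column names as in the Jira 5.x jiraissue table) can confirm that no issue ended up resolved before it was created:

-- Find issues whose rewritten resolution date precedes their creation date.
select pkey, CREATED, RESOLUTIONDATE
from jiraissue
where RESOLUTIONDATE < CREATED;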

Has the JCA Ever Worked? It can't, according to its documentation

I am working on a JCA implementation of Jackrabbit with a custom JAAS login module. The idea is to integrate Apache Shiro authentication and authorization via the login module, using a RepositoryLoginContext that simply furnishes the user name and password from a Shiro token to the callback functions.
(I have only a small number of users, who are configured in a shiro.ini file.)
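Roughly, the callback plumbing I have in mind looks like this (a minimal sketch; the class name is mine, and the name/password values would come from the Shiro token):

import java.io.IOException;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.callback.UnsupportedCallbackException;

public class ShiroCredentialsCallbackHandler implements CallbackHandler {
    private final String username;
    private final char[] password; // e.g. from UsernamePasswordToken.getPassword()

    public ShiroCredentialsCallbackHandler(String username, char[] password) {
        this.username = username;
        this.password = password;
    }

    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
        for (Callback callback : callbacks) {
            if (callback instanceof NameCallback) {
                ((NameCallback) callback).setName(username);
            } else if (callback instanceof PasswordCallback) {
                ((PasswordCallback) callback).setPassword(password);
            } else {
                throw new UnsupportedCallbackException(callback);
            }
        }
    }
}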
All the pieces seem to fit until I try connecting to the repository. One attempt is
SimpleCredentials userJCACredentials = new SimpleCredentials(username, shiroUserCreds.getPassword());
but I cannot seem to find a build-path JAR that makes it happy. The Jackrabbit API docs have me flummoxed. If I look for SimpleCredentials, I get a Day Software page (!), whereas if I look for CryptedSimpleCredentials, I get an Apache page. Thinking that perhaps only the crypted version can be used with the resource adapter, I tried changing to that, but ran into the same problem with
connInfo = new JCAConnectionRequestInfo(cryptedCreds, workspaceName);
which only accepts SimpleCredentials as the first argument. I keep finding dead ends in the API, such as JCAConnectionRequestInfo(Credentials creds, String workspace): if you click on the Credentials link, it times out. Another gem is one of JCAManagedConnectionFactory's constructors, which contains text about IBM WebSphere (!!).
I tried writing my own class (based on SimpleCredentials) implementing the Credentials interface, and the error with
new JCAConnectionRequestInfo(cryptedCreds, workspaceName)
turned into an inability to resolve javax.jcr.Credentials. With a Jackrabbit installation, the javax.jcr package does not exist (at least for the resource adapter version).
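As far as I can tell, the javax.jcr.* types live in the standalone JCR API jar rather than in the resource adapter itself, so it has to be added to the build path separately; with Maven that would be something like:

<!-- JCR 2.0 API jar; the version is my assumption for Jackrabbit 2.x. -->
<dependency>
  <groupId>javax.jcr</groupId>
  <artifactId>jcr</artifactId>
  <version>2.0</version>
</dependency>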
Failing to make any sense of the foregoing, I tried a second approach.
repoParameters.put("homeDir",ARCHIVE_REPO_DIR);
repoParameters.put("configFile",ARCHIVE_REPO_CONFIG);
repoMan = JCARepositoryManager.getInstance();
repo = repoMan.createRepository(repoParameters);
The last line, using a Map argument, was prompted by Eclipse auto-completion, in conflict with the documentation, which shows createRepository(String, String). In any case, an error about resolving javax.jcr.Repository appears. Back to the same stuff.
I've explored every bottom-up path in the libraries to get a Session. It seems to be impossible, ultimately failing on a non-existent definition.
Looking at the source code, I've assembled the following:
JCAManagedConnectionFactory mcf = new JCAManagedConnectionFactory();
mcf.setConfigFile(...);
mcf.setHomeDir(...);
boolean success = true;
try {
    mcf.setLogWriter(lpw);
    connectionFactory = mcf.createConnectionFactory();
} catch (ResourceException rex) {
    logger.error("client session failed to create connection factory");
    logger.error(rex);
    success = false; // only proceed if the factory was actually created
}
if (success) {
    repo = (RepositoryImpl) connectionFactory;
    session = repo.login(/* needs Credentials here */);
}
The login call needs Credentials and a workspace name. If the login is going to be handled by JAAS, I'd expect to find a login() with no arguments wherein the JAAS login would take over.
I used Jackrabbit a couple of years ago in a project and had some small success with it, but then moved from Jackrabbit to ModeShape, which is an alternative implementation of the JCR (JSR-283). This is a very active project in which I have had a (very small) involvement. Development continues at a great rate, with new releases regularly coming out.
I started with version 2.8, which wasn't bad, but the 3.0 release was an almost complete rewrite with a focus on performance and was a delight to use. 4.0 is now in the last stages of being released.
The changeover from Jackrabbit (2.2.7) to ModeShape (2.8.1) was relatively painless, with most of the effort in setting up the configuration and runtime.
I'm not saying you won't find problems like the ones you are seeing in Jackrabbit, but if you ask a question in the ModeShape forums you will get a good answer and plenty of ongoing help.
Lessons Learned:
Integrating Shiro was not a great idea, though not at all because of Shiro. The JCA requires a JAAS login, and "integrating" Shiro requires enough understanding of JAAS that I might as well just adopt JAAS and be done with it. Getting further along, I realized that this was fine. There are larger issues ahead.
JAAS is, well, JAAS. I was far from able to breeze through it and pick up what I needed, although I have gotten pretty far. The answer to the last dilemma in the post is simply that the login method called is not found in the JCA; it's found in the JAAS login module that you code and "register" through the Geronimo admin console by creating a new security realm for your application. A large part of the JAAS learning curve is that the model is very abstract. There is no "JAAS in a Nutshell for Geronimo" to enlighten would-be programmers. Another part is that it is so widely used that information specific to everything but what I'm doing seems ubiquitous. That isn't surprising, because web apps are by far its greatest usage. And following that, it is not really surprising to find a JAAS login module with Jackrabbit, as the JCR was envisioned for web content.
It turns out that burrowing into JAAS, though enlightening, was a complete waste of time.
After putting the effort into getting JAAS together, I finally had a Jackrabbit session handle. I immediately put it to work on a UserTransaction-wrapped piece of old code.
Things stopped cold here. None of my old code worked, starting with getRootNode(). I let Eclipse show me the possible methods of a session handle and I tested each one. The list of those that work is short. Some UserTransaction-related stuff works, as do equals, hasCapability, isLive, and little else. Sixty-plus methods of the session handle cannot work. THIS MEANS THAT THERE IS NO WAY TO GET A NODE. WHICH MEANS YOU CANNOT DO ANYTHING USEFUL WITH THE JCA.
As to my original question, I will say that at one point the JCA probably did work. After all, no one would code all that stuff just to have it lined out. And I will also say that whenever the code was modified to use JAAS, it was badly broken and rendered useless.
This really is a soapbox moment. The amount of time I invested in this was significant. One reason I wanted to use Jackrabbit was that it could run in the same JVM as my application, in Geronimo. I can understand documentation that isn't up to par and other realities of open source, but I could only speculate about how this happened, and the alternatives are all pretty negative. I smelled a rat earlier but said nothing.
One thing is clear. The JCA (and for all I know, the rest of the Jackrabbit project) has no business being touted as an Apache project. It is a disservice to their name and an extreme disservice to developers who've been led down a dead-end path.
I am not sure how to go about it, but I want to bring this to the attention of the Apache foundation if for no other reason than preventing anyone else from going through this.

Best practice for updating a Silverlight deployment that is actively being used

I am currently running an SL3 project where we are in a highly iterative development mode with about 25 active test customers. I am making small changes at a clip of about 4 new builds per day. It is important to know that this application is mission-critical line-of-business software for these 25 people; it is the tool they use all day to do their work, so they are using it constantly, and they often launch their browser and the app in the morning and never close it until the end of the day.
The challenge is that when I make an update to the application I have no clean way to notify the users. In most cases this is OK, as it is rare that I introduce a data contract change or something that would be a classic 'breaking' change to the app/service. Users keep plugging along and will get the change the next time they refresh.
Right now we have resorted to emailing everyone and telling them to force refresh or close the browser and log back in.
Surely there is a better way...
Right now my train of thought is to have a method on the server that compares client XAP versions and determines whether the client being used is the most up to date; if not, I will notify the user and make them update.
What have you done to solve this problem?
One way of doing it is to use a push mechanism (I used Kaazing WebSocket Gateway, but any would do). When a new version of the XAP is released, a message (either manually entered into the system by an admin or triggered automatically by a XAP file change event) would be sent to all the clients. In the simplest scenario some notification would be shown to the user (telling him that a new version has been released and the application needs to refresh) and then the app would refresh (by simply reloading the page), saving the user's state if necessary.
If I were doing this I would just keep it simple: a configuration value in web.config and a corresponding service method that simply returns that value (the value itself could be anything, but a counter is probably wise). Then you could have your Silverlight app poll that service method at regular intervals. Whenever the value changes (which you would do manually when you deploy a new version), just pop up a dialog telling the user to refresh the browser or log in/out. This way you don't have to force them to refresh every time. If you go with the idea of comparing XAP file versions, they will always be required to refresh, even for non-breaking changes.
If you want to take it further, you could come up with some sort of mechanism to distinguish between severity levels. For instance, if the new config value contains the string "update_forced", you could force the users to reload the app by logging them out automatically (a little harsh, perhaps). If it contains the string "update_recommended", just show a little icon in the top right corner saying that there is a new version and that they should upgrade in their own time.
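A rough sketch of the polling part, in the page's code-behind (VersionServiceClient and its GetDeployedVersion operation are made-up placeholder names for whatever service proxy you generate):

private int lastKnownVersion = -1;

private void StartVersionPolling()
{
    var timer = new System.Windows.Threading.DispatcherTimer
    {
        Interval = TimeSpan.FromMinutes(5) // poll interval is a judgment call
    };
    timer.Tick += (s, e) =>
    {
        var client = new VersionServiceClient();
        client.GetDeployedVersionCompleted += (sender, args) =>
        {
            if (args.Error != null) return;     // ignore transient failures
            if (lastKnownVersion < 0)
            {
                lastKnownVersion = args.Result; // first poll: just remember it
            }
            else if (args.Result != lastKnownVersion)
            {
                MessageBox.Show("A new version has been deployed. Please refresh your browser.");
            }
        };
        client.GetDeployedVersionAsync();        // Silverlight proxies are async-only
    };
    timer.Start();
}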
Granted, this was targeted at Silverlight 3, but with the PollingDuplex client and such in the newer versions of Silverlight, you could publish an "update now" bit to the clients and build a mechanism in the client to alert the user that there is an update out, that they should update shortly, etc. You may even be able, through serialization and such, to save the state they are in when they close the app, and reload it.
We've done something similar with a LOB app that we built, so that as users change things, the rest of the user base sees those changes immediately. Next up will be putting in the flags to change authorization and do upgrades "on the fly", if you will.

Logging when an application is running as an XBAP?

Has anybody here actually implemented a logging strategy for an application running as an XBAP? Any suggestions (as code) on how to implement a simple strategy, based on your experience?
In desktop mode my app logs to a rolling log file using the integrated log4net implementation, but as an XBAP I can't log, because it stores the file in the cache (an app2.0-something folder). So I check whether the app is browser-hosted and don't log in that case, since I don't even know whether it ever logs (it's the same codebase). It would help if there were a way to push this log to a service, like a web service, or to post errors to some endpoint.
My XBAP runs in full-trust intranet mode.
I would log to isolated storage and provide a way for users to submit the log back to the server using either a simple PUT/POST with HttpWebRequest or, if you're feeling frisky, via a WCF service.
Keep in mind an XBAP only gets 512 KB of isolated storage, so you may actually want to push those event logs back to the server automatically. Also remember that an XBAP can only talk back to its origin server, so the service that accepts the log files must run under the same domain.
Here's some quick sample code that shows how to set up a TextWriterTraceListener on top of an IsolatedStorageFileStream, at which point you can just use the standard Trace.Write[XXX] methods to do your logging.
// Requires the System.Diagnostics, System.IO and System.IO.IsolatedStorage namespaces.
IsolatedStorageFileStream traceFileStream = new IsolatedStorageFileStream("Trace.log", FileMode.OpenOrCreate, FileAccess.Write);
TraceListener traceListener = new TextWriterTraceListener(traceFileStream);
Trace.Listeners.Add(traceListener);
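And a rough sketch of the submit-back-to-server side (the endpoint URL is a made-up placeholder, and the listener's file must be released before it is re-opened for reading):

// Read the trace log back out of isolated storage and POST it to the origin server.
Trace.Close(); // flushes and releases the file opened by the listener above
using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForDomain())
using (var stream = new IsolatedStorageFileStream("Trace.log", FileMode.Open, FileAccess.Read, store))
using (var reader = new StreamReader(stream))
using (var client = new WebClient())
{
    client.UploadString("http://appserver/logs", "POST", reader.ReadToEnd());
}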
UPDATE
Here is a revised answer due to the revision you've made to your question with more details.
Since you mention you're using log4net in your desktop app we can build upon that dependency you are already comfortable working with as it is entirely possible to continue to use log4net in the XBAP version as well. Log4net does not come with an implementation that will solve this problem out of the box, but it is possible to write an implementation of a log4net IAppender which communicates with WCF.
I took a look at the implementation the other answerer linked to by Joachim Kerschbaumer (all credit due) and it looks like a solid implementation. My first concern was that, in a sample, someone might be logging back to the service on every event and perhaps synchronously, but the implementation actually has support for queuing up a certain number of events and sending them back to the server in batch form. Also, when it does send to the service, it does so using an async invocation of an Action delegate which means it will execute on a thread pool thread and not block the UI. Therefore I would say that implementation is quite solid.
Here are the steps I would take from here:
Download Joachim's WCF appender implementation
Add his projects to your solution.
Reference the WCFAppender project from your XBAP
Configure log4net to use the WCF appender (see the configuration sketch after these lists). There are several settings for this logger, so I suggest checking out his sample app's config. The most important ones, however, are QueueSize and FlushLevel. You should set QueueSize high enough that, based on how much you actually log, you won't be chattering with the WCF service too much. If you're just logging warnings/errors then you can probably set this to something low; if you're logging informational messages then you want to set it a little higher. As for FlushLevel, you should probably just set it to ERROR, as this guarantees that, no matter how big the queue is at the time, everything will be flushed the moment an error is logged.
The sample appears to use LINQ2SQL to log to a custom DB inside of the WCF service. You will need to replace this implementation to log to whatever data source best suits your needs.
Now, Joachim's sample is written in a way that's intended to be very easy for someone to download, run and understand quickly. I would definitely change a couple of things about it if I were putting it into a production solution:
Separate the WCF contracts into a separate library that you can share between the client and the server. This would allow you to stop using a Visual Studio service reference in the WCFAppender library and just reference the same contract library for the data types. Likewise, since the contracts would no longer be in the service itself, you would reference the contract library from the service.
I don't know that wsHttpBinding is really necessary here; it comes with a couple more knobs and switches than one probably needs for something as simple as this. I would probably go with the simpler basicHttpBinding, and if you wanted to make sure the log data was encrypted over the wire I would just use HTTPS.
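As a rough illustration of the configuration step above (the appender type name is a placeholder for whatever the downloaded project actually exposes; QueueSize and FlushLevel are the settings discussed):

<log4net>
  <appender name="WcfAppender" type="WcfLogging.WcfAppender, WcfLogging">
    <!-- How many events to batch up before pushing them to the WCF service. -->
    <queueSize value="50" />
    <!-- Flush the whole queue immediately whenever an ERROR is logged. -->
    <flushLevel value="ERROR" />
  </appender>
  <root>
    <level value="WARN" />
    <appender-ref ref="WcfAppender" />
  </root>
</log4net>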
My approach has been to log to a remote service, keyed by a unique user ID or GUID. The overhead isn't very high with the usual async calls.
You can cache messages locally, too, either in RAM or in isolated storage -- perhaps as a backup in case the network isn't accessible.
Be sure to watch for duplicate events within a certain time window. You don't want to log 1,000 copies of the same Exception over a period of a few seconds.
Also, I like to log more than just errors. You can also log performance data, such as how long certain functions take to execute (particularly out-of-process calls), or more detailed data in response to the user explicitly entering into a "debug and report" mode. Checking for calls that take longer than a certain threshold is also useful to help catch regressions and preempt user complaints.
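For example, a tiny timing guard along these lines (the threshold and names are arbitrary; log is a log4net-style logger):

// Time an out-of-process call and log executions that cross a threshold.
var stopwatch = System.Diagnostics.Stopwatch.StartNew();
CallRemoteService(); // placeholder for the real out-of-process call
stopwatch.Stop();
if (stopwatch.ElapsedMilliseconds > 2000) // threshold chosen arbitrarily
{
    log.WarnFormat("CallRemoteService took {0} ms", stopwatch.ElapsedMilliseconds);
}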
If you are running your XBAP under partial trust, you are only allowed to write to isolated storage on the client machine. And it's just 512 KB, which you would probably want to use in a more valuable way than for logging, such as storing the user's preferences.
You are not allowed to do any Remoting under partial trust either, so you can't use log4net's RemotingAppender.
Finally, under a partial-trust XBAP you have WebPermission to talk only to the server of your app's origin. I would recommend using a WCF service, as described in this article. We use a similar configuration in my current project and it works fine.
Then, basically, on the WCF server side you can log to any appropriate place: file, database, etc. You may also want to keep your log4net logging code and try to use one of the WCF log appenders available on the internets (this or this).

Best Practice for seeing live data on the dev server?

Assumption: live/production web app suppresses errors being shown to end-users.
Suppose your tech support team wants to see live data, but through the eyes of the development side of the application (maybe you want to see what errors are occurring, or want to check that you've fixed an issue using an end-user's data).
Right now we've got one database serving both the dev and live boxes (not my idea - I know it's gross).
Ideas?
Edit: Best/handy tools for implementing your suggestion?
We replicate the data back to a different database. Yes, there is a delay, but it keeps people's hands out of the production servers. This also allows us to "hide" information that tech support (and other people, for that matter) aren't supposed to see.
In addition to replicating data down, on production we track who's logged into the application, and if it's a member of the company, we send them to the real error page instead of the happy kitten playing with a ball of yarn apologizing.
Back up live and restore it to dev on a regular basis (once or twice a day). It doesn't need to be real-time (you might be entering data from the dev side anyway, which could cause problems).
If you have PCI or HIPAA data, make sure you don't put it in your dev environment -- that might break laws.
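On SQL Server, for instance, the scheduled job could be as simple as the following (database names, logical file names, and paths are all made up):

-- Dump the live database and restore it over the dev copy.
BACKUP DATABASE LiveApp TO DISK = 'D:\backups\LiveApp.bak' WITH INIT;
RESTORE DATABASE DevApp FROM DISK = 'D:\backups\LiveApp.bak'
    WITH REPLACE,
    MOVE 'LiveApp' TO 'D:\data\DevApp.mdf',
    MOVE 'LiveApp_log' TO 'D:\data\DevApp.ldf';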
I generally like to have a 3-tier system for web development:
Development
Testing
Live
Most of the time testing is an exact copy of the live system, except that errors are turned on. When a new version is about to be moved live, testing is replaced with the new version BEFORE live is, to detect upgrade issues.
Development is completely separate from live, to allow for major changes to things like the database, or changes to the production environment.
I would first make sure errors are either emailed to someone, with details of how the user got there, or at minimum logged, so you can watch the error log while you perform similar actions and see whether you get the same messages.
And yes, copying the database to the dev server/site is probably your only option. You don't want the development team making changes to live data, and you'll probably also have changes that won't work with the production database at some point.
I wouldn't recommend doing a nightly copy, as a developer might be in the middle of a new feature for which they have added data, and it would be erased that night. I usually copy the production database(s) to dev each time a major version is released. This also allows me to do speed testing with a lot of live data. On some systems I also change everyone's password to a default so I can log in easily as any user.
If your configuration permits it:
a. Add a logging function (if there isn't one already) to write messages of interest to a log file.
b. Run the unix command
tail -f logfile.txt
which will stream the growing log file to your console.
http://www.monkey.org/cgi-bin/man2html?tail
If you have Windows, you might try this:
http://tailforwin32.sourceforge.net/
