Our application needs to support 1.5 and 1.6 JVMs. The 1.5 support needs to stay clear of any 1.6 JRE dependencies, whereas the 1.6 support needs to exploit 1.6-only features.
When we change our Eclipse project to use a 1.5 JRE, we get all the dependencies flagged as errors. This is useful to see where our dependencies are, but is not useful for plain development. Committing source with such compile errors also feels wrong.
What are the best practices for this kind of multi-JRE-version support?
In C land, we had #ifdef preprocessor directives to solve such things fairly cleanly. What's the cleanest Java equivalent?
If your software must run on both JRE 1.5 and 1.6, then why don't you just develop for 1.5 only? Is there a reason why you absolutely need to use features that are only available in Java 6? Are there no third-party libraries that run on Java 1.5 that contain equivalents for the 1.6-only features that you want to use?
Maintaining two code bases, keeping them in sync, and so on is a lot of work and is probably not worth the effort compared to what you gain.
Java of course has no preprocessor, so you can't (easily) do conditional compilation the way you can in C with preprocessor directives.
It depends, of course, on how big your project is, but I'd say: don't do it. Use Java 5 features only, and if there are Java 6-specific things you think you need, look for third-party libraries that run on Java 5 and implement those things (or even write them yourself; in the long run, that might be less work than trying to maintain two code bases).
Compile most of your code as 1.5. Have a separate source directory for 1.6-specific code. The 1.6 source should depend upon the 1.5 source, but not vice versa. Interfacing to the 1.6 code should be done by subtyping types from the 1.5 code. The 1.5 code can provide an alternative (fallback) implementation rather than checking for null everywhere.
Use a single piece of reflection, once, to attempt to load an instance of a root 1.6 class. The root class should check that it is running on 1.6 before allowing an instance to be created (I suggest both compiling it with -target 1.6 and using a 1.6-only method in a static initialiser).
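A minimal sketch of that idea, assuming hypothetical names (an ExtendedApi root type in the 1.5 tree and a com.example.java6.Java6Api subtype in the 1.6 tree):

// ExtendedApi.java -- lives in the 1.5 source tree; this is the type the 1.5 code talks to.
public abstract class ExtendedApi {
    public abstract String describeRuntime();

    /** The single piece of reflection: load the 1.6 subtype once; null means "running on 1.5". */
    public static ExtendedApi loadJava6Implementation() {
        try {
            Class<?> cls = Class.forName("com.example.java6.Java6Api");
            return (ExtendedApi) cls.newInstance();
        } catch (Throwable t) {
            // ClassNotFoundException, UnsupportedClassVersionError (because of -target 1.6),
            // or a failed static initialiser all land here on a 1.5 JRE.
            return null;
        }
    }
}

// Java6Api.java -- lives in the 1.6-only source directory, compiled with -target 1.6.
public class Java6Api extends ExtendedApi {
    static {
        // A Java 6-only method: guarantees class initialisation fails on a 1.5 JRE
        // even if the class file somehow gets that far.
        "".isEmpty();
    }

    public String describeRuntime() {
        // Free to use 1.6-only APIs here, e.g. System.console() (added in Java 6).
        return "Java 6 features available, console attached: " + (System.console() != null);
    }
}

The 1.5 code calls ExtendedApi.loadJava6Implementation() once at start-up and falls back to its own implementation when it gets null, which covers both the missing class and the failed version check.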
There are a few approaches you could use:
Compile against 1.6 and use testing to ensure functionality degrades gracefully; this is a process I've worked with on commercial products (1.4 target with 1.3 compatibility)
Use version-specific plugins and use the runtime to determine which to load; this requires some sort of plugin framework
Compile against 1.5 and use reflection to invoke 1.6 functionality; I would avoid this because it adds complexity over the first approach and costs some performance
In all cases, you'll want to isolate the functionality and ensure the generated class files have a version of 49.0 (compile with a 1.5 target). You can use reflection to determine method/feature availability when initializing your façade classes.
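As an illustration, a façade around File.getUsableSpace() (a method added in Java 6) can resolve the method reflectively when the class initialises and degrade gracefully on 1.5. A minimal sketch, with DiskSpaceFacade as a hypothetical name, compiled with a 1.5 target so its class file stays at version 49.0:

import java.io.File;
import java.lang.reflect.Method;

public class DiskSpaceFacade {
    // Resolved once: non-null on a 1.6 JRE, null on 1.5.
    private static final Method GET_USABLE_SPACE = lookup();

    private static Method lookup() {
        try {
            return File.class.getMethod("getUsableSpace", new Class[0]);
        } catch (Exception e) {
            return null; // method not present: we are on a 1.5 JRE
        }
    }

    /** Returns the usable bytes on the partition, or -1 when the JRE cannot tell us. */
    public long usableSpace(File dir) {
        if (GET_USABLE_SPACE == null) {
            return -1L; // graceful degradation on 1.5
        }
        try {
            return ((Long) GET_USABLE_SPACE.invoke(dir, new Object[0])).longValue();
        } catch (Exception e) {
            return -1L;
        }
    }
}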
You could use your source control to help you a little if it does branching easily (git, svn or Perforce would work great). You could have two branches of code, the 1.5 only branch and then a 1.6 branch that branches off of the 1.5 line.
With this you can develop in 1.5 on the 1.5 branch, merge your changes and bug fixes into the 1.6 branch as needed, and then make any code changes required for 1.6-specific features.
When you need to release code you build it from whichever branch is required.
For Eclipse, you can either maintain two workspaces, one for each branch, or just have two sets of projects, one for each branch, though the latter requires different project names, which can be a pain. I would recommend the workspaces approach (though it has its own pains).
You can then specify the required JVM version for each project/workspace as needed.
Hope this helps.
(Added: this would also make for an easy transition when you no longer need the 1.5 support; you just close down that branch and start working only in the 1.6 branch.)
One option would be to break the code into 3 projects.
One project would contain common stuff which would work on either version of java.
One project would contain the java6 implementations and would depend on the common project.
One project would contain the java5 implementations and would depend on the common project.
By breaking things into interfaces in the common project, with version-specific classes implementing those interfaces, you can eliminate any unwanted build dependencies. You would almost certainly need dependency injection of one kind or another to wire your concrete classes together.
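A minimal sketch of that layout, assuming hypothetical names (TextNormalizer and friends); the java6 implementation is free to use java.text.Normalizer, which was added in Java 6:

// In the common project: the abstraction both versions implement.
public interface TextNormalizer {
    String normalize(String input);
}

// In the java5 project: uses only 1.5 APIs.
public class Java5TextNormalizer implements TextNormalizer {
    public String normalize(String input) {
        return input.trim();
    }
}

// In the java6 project: free to use java.text.Normalizer (added in Java 6).
public class Java6TextNormalizer implements TextNormalizer {
    public String normalize(String input) {
        return java.text.Normalizer.normalize(input.trim(), java.text.Normalizer.Form.NFC);
    }
}

// Also in the common project: the application only sees the interface;
// each build's launcher (or a DI framework) injects the matching implementation.
public class Application {
    private final TextNormalizer normalizer;

    public Application(TextNormalizer normalizer) {
        this.normalizer = normalizer;
    }

    public String clean(String s) {
        return normalizer.normalize(s);
    }
}

Each build's launcher (or a DI framework) then injects whichever implementation matches the target JRE, so the common and java5 code never reference a Java 6 class.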
Working in Eclipse, you could set the java6 project to target Java 6 and the other two projects to target Java 5. By selecting the JRE on a project-by-project basis, any dependencies you have missed will show up as errors.
By getting a little clever with your build files, you could build the common part both ways and depend on the correct version for deployment, though I'm not sure this would bring much benefit.
You would end up with two separate versions of your application - one for java6, one for java5.
I've started to take an interest in the monorepo approach, and Nx.js in particular. Almost every article says that a monorepo solves the problem of incompatible library versions, and I don't quite understand how. So I have a few questions:
If I understand correctly, the idea of a monorepo (in terms of shared code) is that all shared code is always at the same version and all changes happen in one atomic commit (as the monorepo advertisements state). So let's imagine a monorepo with 100 projects, all of which depend on libA in the same repo. If I change something in libA, then I have to check the change against every dependent project. Moreover, I have to wait for all the code owners to review my changes. So what are the pros?
Let's imagine I have a monorepo with the following projects: appA, libC, libD, and some third-party library, let's call it third-party-lib. appA depends on libC and libD. At some point appA needs third-party-lib-v3, BUT libC depends on third-party-lib-v1. https://monorepo.tools/#code-generation states: "One version of everything. No need to worry about incompatibilities because of projects depending on conflicting versions of third party libraries." But that isn't the case: in the JavaScript world it results in two different versions of third-party-lib in different node_modules. Again, what are the pros?
My questions may be very naive because I've never run into problems with libraries, and I've only just started learning about monorepos, so I'd be glad if someone could help me make sense of it.
Having worked with shared code in a non-monorepo environment, I can say that managing internal packages without a monorepo tool like NX requires discipline and can be more time-consuming.
In your example of 100 projects using 1 library, all 100 projects should be tested and deployed with the new version of the code. The difference is when.
In separate repos, you would publish the new version of your package, with all the code reviews and unit testing that go along with it. Next you would update the package version in all your 100 apps, probably one by one. You would test them, get code reviews, and then deploy them.
Now, what if you found an issue with your new changes in one of the apps? Would you roll back to the previous version? If it was in the app then you could fix it in that one app, but if it was in the library, would you roll back the version number in all your apps? What if another change was needed in your library?
You could find yourself in a situation where your apps are using different versions of your library, and you can't push out new versions because you can't get some of your apps working with the previous version. Multiply that across many shared libraries and you have an administrative nightmare.
In a mono-repo, the pain is the same, but it requires less administrative work. With NX, you know what apps your change is affecting and can test all those apps before you deploy your changes, and deploy them all at once. You don't block other changes going into your library because the changes aren't committed until they are tested everywhere they are used.
It is the same with third party libraries. When you update the version of a library, you test it in all applications that use it before your change is committed. If it doesn't work in one application, you have a choice.
Fix the issue preventing that application from working OR
Don't update the package to the new version
It means that you don't have applications that are 'left behind' and are forced to keep everything up to date. It does mean that sometimes updates can take so much time that they are difficult to prioritise, but that is the same for multi-repo development.
Finally, I would add that when starting to work with NX you may find yourself creating large, frequently changing libraries that are used by all apps, or perhaps putting large amounts of code in the apps themselves. This leads to pain where changes frequently result in deployments of the whole monorepo. I have found that it is better to create app-specific folders that contain libraries used only by that app, and to create shared libraries only when it makes business sense to do so. Examples are:
Services that call APIs and return business domain objects that should not really be changed (changes to these APIs and responses generally result in a V2 of the API and a new NX library could be created to serve that V2 API, leaving the V1 unchanged).
Core, stable atomic UI libraries for each component (again, try not to change the component itself, but create a V2 if it needs to change)
More information on this can be found here: NX applications and libraries.
Sometimes I don't notice that NetBeans imports the wrong packages inside a Codename One project. It causes me to waste time until I notice such a sneaky mistake. This happens to me a lot, especially when I'm a bit tired of coding...
Is there any way to force NetBeans not to propose or perform any automatic import from packages other than the ones provided by Codename One or created by me inside my project?
Of course, if it's possible, it should apply only to Codename One projects. I also have a Spring Boot project that, of course, needs different imports.
Currently I'm using NetBeans 10 with Java 8. Thanks for any hint.
The short answer is that it's theoretically possible but really hard, and it would open the door to far worse problems.
The last time we checked, it was only possible in two ways:
If we copied the entire Java module and built on top of that
If we built Codename One as a JDK
Both options are a bit problematic. The former would mean we would need to maintain the full Java package code and update it with changes to the IDE. We don't want to do that.
The latter would also be problematic, since Codename One doesn't implement an officially supported subset of a JDK. It would also break the existing project structure and make things like running the project much harder.
I am setting up my project structure from scratch. I am using require.js, Backbone, Underscore, Bootstrap, etc. I was thinking of using the shim config to load the non-AMD-compatible versions of Backbone, Underscore, etc. But now I think it's better to use AMD (Asynchronous Module Definition)-compatible versions of them, since that allows the resources to load in parallel. But where can I find a reliable source for AMD-compatible Underscore, Backbone, and Bootstrap? And can I be assured that I will always get the latest AMD-compatible versions of Backbone, Bootstrap, and Underscore? Will they break later?
In short, should I use AMD-compatible versions of these libraries, or trade off loading time and use the shim config to load the non-AMD versions? I am planning to use the require-jquery AMD build.
I can only provide one point of view, but from my experience, at this stage, it's better just to shim the dependencies. I don't feel that AMD is widely adopted enough yet to get the kind of support you'll need to make everything work nicely together using the AMD versions.
In particular, I had a problem with testing (Jasmine), where my Jasmine tests would be referring to one "jQuery" and my application code would be referring to another one, because neither were globals. I just gave up and switched back to using shims, and managed to get the tests to work (although not without some work).
Not sure if it will help, but here are my personal notes on integrating RequireJS into a BackboneJS/Rails stack. The section on stubbing dependencies might be of interest if you'll be testing your client-side code. I hit quite a few snags along the way...
Yes, it is better. I can say that after developing large-scale apps with Require and Backbone: they work great together. Use a build process that uses r.js to boil your app's JS down to a single file, so there isn't a loader dependency in production, obviously. We have had no problems integrating this with Jasmine as a unit tester, in response to the answer above (not that I would personally bother with unit testing; stick with behavioural testing instead).
This is a good starting point for getting an idea of how it fits together: http://net.tutsplus.com/tutorials/javascript-ajax/a-requirejs-backbone-and-bower-starter-template/
Though consider Jam as a package manager (or none at all), and Grunt to create build tasks, etc. The template is still useful, just don't treat it as gospel; try things yourself!
Personally, I don't think using the AMD versions of libraries is better, because:
1. You rely on the community to maintain the AMD version.
2. Using shim and exporting the global is better.
3. You cannot expect all libraries to have an AMD version.
I have spent hours debugging why the code optimized via r.js said that Backbone was not found, and I had to remove some code in the Backbone source to make it work.
In short, use shim.
I'm starting work on a new version of a mobile site. I am looking into using an AMD script loader and have pretty much narrowed it down to require and lsjs. I know there are many pros and cons to both, but I am trying to figure all of those out for the mobile version of my site. Does anyone have experience with these libs at the mobile level? I'm just trying to get a discussion going here about what people think the best way to go is. (Anyone with 1500 rep want to create an lsjs tag? :) ) Maybe either of the creators of these libraries (Todd Burke or Richard Backhouse) has an opinion on this.
thanks
EDIT:
Thanks to Simon Smith for the great info below. Has anyone used lsjs? It looks very promising, particularly in terms of speed, but it doesn't have the user base, documentation, or (I think) features of require/curl.
I would say use RequireJS until you're ready to go to production. Then compile your scripts and replace RequireJS with Almond. It's a bare-bones library made by James Burke (author of RequireJS) so you can rely on it to work seamlessly:
Some developers like to use the AMD API to code modular JavaScript, but after doing an optimized build, they do not want to include a full AMD loader like RequireJS, since they do not need all that functionality. Some use cases, like mobile, are very sensitive to file sizes.
By including almond in the built file, there is no need for RequireJS. almond is around 1 kilobyte when minified with Closure Compiler and gzipped.
https://github.com/jrburke/almond
EDIT:
Curl.js is also an option. I haven't used it, but I know that it is a lot smaller than RequireJS. I did a bit of research as to why:
RequireJS does the following over Curl (via James Burke):
Supports multiversion/contexts, useful for mock testing, but you can get by without it
Supports loading plain JS files via require, does not have to be an AMD module
Supports special detection and work with older versions of jQuery (should not be an issue if you use jQuery 1.7.1 or later)
(At the moment) better support for simplified wrapped commonjs style: define(function(require) {});
In short, if you are only going to deal with AMD modules in your app, do not need the multiversion/context support, and are not using the simplified commonjs wrapping style, or using an older jQuery, then curl can be a good choice.
https://groups.google.com/forum/?fromgroups=#!topic/requirejs/niUyLZrivgs
And the author of Curl:
RequireJS runs in more places than curl.js, including WebWorkers and node.js. It's also got more "battle testing" than curl.js, which may mean it has less bugs around edge cases. curl.js is also missing a few important features, such as preloading of implicit dependencies and support for AMD-wrapped commonjs modules. These are both coming in version 0.6 (late next week).
On the plus side, curl.js...
is as small as 1/4 the size of RequireJS -- even when bundled with the js! and domReady! plugins it is still less than half the size.
is faster at loading modules than RequireJS, but only meaningfully so in IE6-8 or in development (non-build) environments.
supports pluggable module loaders for formats other than AMD (we're working on unwrapped CJSM/1.1 and CJSM/2.0, for instance).
supports configuration-based dependency management via IOC containers like wire.js (via cram.js).
supports inlining of css (via cram.js) and concatenation of css (via cram.js 0.3 by end of year).
https://github.com/cujojs/curl/issues/35#issuecomment-2954344
Back in 2014 I faced the same problem. I had some extra requirements though in order to make the site fast on mobile:
Small enough to be inlined (to avoid paying an extra request-tax to get the loader onboard).
Inlined config file (get rid of a request).
Config file in pure JavaScript (no parsing overhead).
Let the browser do the actual loading of files (browsers are smart these days).
Connect all asynchronously loaded modules together.
Support for single-page apps that include legacy code with sprinkled $(function(){...}) constructs, yet I insist on loading jQuery late and asynchronously to speed things up.
After evaluating RequireJS, curl, lsjs and a bunch of others, I concluded that none of them came close enough to what I needed for my projects. Eventually I decided to create my own lockandload AMD-loader. I didn't open-source it at the time, because that meant writing documentation. But I recently open-sourced it with fresh docs in case it benefits others.
I'm working on a very simple web app, written in Go language.
I have a standalone version and am now porting it to GAE. It seems there are only small changes, mainly concerning the datastore API (in the standalone version I just need files).
I also need to include appengine packages and use init() instead of main().
Is there any simple way to merge both versions? As there is no preprocessor in Go, it seems like I must write a GAE-compatible API for the standalone version, use that mock module for the standalone build, and use the real API for the GAE version. But that sounds like overkill to me.
Another problem is that GAE might be using an older Go version (e.g. the recent Go release uses the new template package, but GAE uses the older one, and they are incompatible). So, is there any chance to handle such differences at build time or at runtime?
Thanks,
Serge
UPD: GAE now uses the same Go version (r60) as the stable standalone compiler, so the abstraction layer can be really simple now.
In broad terms, use abstraction. Provide interfaces for persistence, and write two implementations of them: one based on the datastore and one based on local files. Then write a separate main/init module for each platform, which instantiates the appropriate persistence implementation and passes it to your main application to use.
My immediate answer would be (if you want to maintain both GAE and non-GAE versions) that you use a reliable VCS which is good at merging (probably git or hg), and maintain separate branches for each version. The GAE API fits in reasonably well with Go, so there shouldn't be too many changes.
As for the issue of different versions, you should probably maintain code in the GAE version and use gofix (which is unfortunately one-way) to make a release-compatible version. The only place where this is likely to cause trouble is if you use the template package, which is in the process of being deprecated; if necessary you could include the new template package in your GAE bundle.
If you end up with GAE code which you don't want to run on Google's servers, you can also look into AppScale.