Prevent wrong imports in Netbeans - codenameone

Sometimes I don't notice that NetBeans has imported the wrong packages inside a Codename One project, and I waste time until I spot such a sneaky mistake. This happens to me a lot, especially when I'm a bit tired of coding...
Is there any way to force NetBeans not to propose or perform any automatic import from packages other than the ones provided by Codename One and the ones I created inside my project?
Of course, if it's possible, it should apply only to Codename One projects. I also have a Spring Boot project that, of course, needs different imports.
Currently I'm using NetBeans 10 with Java 8. Thanks for any hint.

The short answer is that it's theoretically possible but really hard, and it would open the door to far worse problems.
The last time we checked, it was only possible in one of two ways:
If we copied the entire Java module and built on top of that
If we built Codename One as a JDK
Both options are a bit problematic. The former would mean we would need to maintain the full Java package code and update it with changes to the IDE. We don't want to do that.
The latter would also be problematic, since Codename One doesn't implement any officially defined subset of a JDK. It would also break the existing project structure and make things like running the project much harder.

Related

What is the problem of library version incompatibility, and how does the monorepo style solve it?

I've started taking an interest in the monorepo approach, and Nx.js in particular. Almost all the articles say that a monorepo solves the problem of library version incompatibility, and I don't quite understand how. I have a few questions:
If I understood correctly, the idea of a monorepo (in terms of shared code) is that all shared code is always at the same version and all changes happen in one atomic commit (as the monorepo advertising states). So let's imagine a monorepo with 100 projects, all of which depend on libA in the same repo. If I change something in libA, then I have to check the change in every dependent project. Moreover, I have to wait for all the code owners to review my changes. So what are the pros?
Let's imagine I have a monorepo with the following projects: appA, libC and libD, plus some third-party library, let's call it third-party-lib. appA depends on libC and libD. At some point appA needs third-party-lib-v3, BUT libC depends on third-party-lib-v1. https://monorepo.tools/#code-generation states: "One version of everything. No need to worry about incompatibilities because of projects depending on conflicting versions of third party libraries." But that is not the case. In the JavaScript world it results in two different versions of third-party-lib in different node_modules. Again, what are the pros?
My questions may be very naive, because I have never run into library problems myself and I've only just started learning about monorepos, so I'd be glad if someone could help me make sense of it.
Having worked with shared code in a non-monorepo environment, I can say that managing internal packages without a monorepo tool like NX requires discipline and can be more time-consuming.
In your example of 100 projects using 1 library, all 100 projects should be tested and deployed with the new version of the code. The difference is when.
In separate repos, you would publish the new version of your package, with all the code reviews and unit testing that go along with it. Next you would update the package version in all your 100 apps, probably one by one. You would test them, get code reviews, and then deploy them.
Now, what if you found an issue with your new changes in one of the apps? Would you roll back to the previous version? If it was in the app then you could fix it in that one app, but if it was in the library, would you roll back the version number in all your apps? What if another change was needed in your library?
You could find yourself in a situation where your apps are using different versions of your library, and you can't push out new versions because you can't get some of your apps working with the previous version. Multiply that across many shared libraries and you have an administrative nightmare.
In a mono-repo, the pain is the same, but it requires less administrative work. With NX, you know what apps your change is affecting and can test all those apps before you deploy your changes, and deploy them all at once. You don't block other changes going into your library because the changes aren't committed until they are tested everywhere they are used.
It is the same with third party libraries. When you update the version of a library, you test it in all applications that use it before your change is committed. If it doesn't work in one application, you have a choice.
Fix the issue preventing that application from working OR
Don't update the package to the new version
It means that you don't have applications that are 'left behind' and are forced to keep everything up to date. It does mean that sometimes updates can take so much time that they are difficult to prioritise, but that is the same for multi-repo development.
Finally, I would like to add that when starting to work with NX you may find yourself creating large, frequently changing libraries that are used by all apps, or perhaps putting large amounts of code in the apps themselves. This leads to pain where changes frequently result in deployments of the whole monorepo. I have found that it is better to create app-specific folders that contain libraries only used by that app, and to create shared libraries only when it makes business sense to do so. Examples are:
Services that call APIs and return business domain objects that should not really be changed (changes to these APIs and responses generally result in a V2 of the API and a new NX library could be created to serve that V2 API, leaving the V1 unchanged).
Core, stable atomic UI libraries for each component (again, try not to change the component itself, but create a V2 if it needs to change)
More information on this can be found here: NX applications and libraries.

Camel app on Liberty - JAXB Marshalling

I'm running a Camel application on a Liberty Profile server. I'm taking a message from a queue, unmarshalling, mapping, then marshalling. This was working fine, but now I'm getting an error that the JAXBDataBinding method getContextualNamespaceMap is not found.
I think this is because there is an older version of the jar in the server libs but I don't know why it started using it.
IBM Jar: com.ibm.ws.org.apache.cxf-rt-databinding-jaxb.2.6.2_1.0.12
The issue is resolved if I switch to parent-last class loading, but it's a very hacky way to fix it and not a great option. Any other ideas? I'm thinking some feature or other dependency in my build may have pulled this jar in.
So it does look like getContextualNamespaceMap is only available in newer versions of the org.apache.cxf-rt-databinding-jaxb JAR than what is available in Liberty.
It might be that parentLast is the best option, then. (You already know how to do this, but it's documented here.) If it leads to some other issues, then do follow up with another question.
I suppose it's conceivable that you might be able to look at whatever is packaged within your application and try removing a set of things, picking them up from the Liberty runtime instead, to avoid running in parentLast mode. E.g. if you are only referencing getContextualNamespaceMap because of other code in your app, but there is some alternative path you could have gone down entirely within the Liberty-provided modules, then in theory you could be OK.
I'm not familiar enough with the code paths in the modules in the CXF or Camel "stack" to guess whether that's a real-world likelihood though.
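If you want to confirm at runtime which JAR the databinding class is actually coming from, a small probe along these lines can help (a diagnostic sketch, assuming CXF's usual class name org.apache.cxf.jaxb.JAXBDataBinding and the zero-argument signature implied by the error message):

    // Prints where CXF's JAXBDataBinding was loaded from and whether the
    // method named in the error exists in that version.
    public class CxfVersionProbe {
        public static void main(String[] args) throws Exception {
            Class<?> c = Class.forName("org.apache.cxf.jaxb.JAXBDataBinding");
            java.security.CodeSource src =
                c.getProtectionDomain().getCodeSource();
            System.out.println("Loaded from: "
                + (src == null ? "<bootstrap/runtime>" : src.getLocation()));
            try {
                c.getMethod("getContextualNamespaceMap");
                System.out.println("getContextualNamespaceMap is present");
            } catch (NoSuchMethodException e) {
                System.out.println("getContextualNamespaceMap is missing"
                    + " (an older databinding JAR is winning)");
            }
        }
    }

Run it (or an equivalent snippet) from inside the application's own class-loading context, since that is where the wrong JAR is being picked up.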
The javaee7 feature contained a JAX-WS version that clashed with the server version. Removing the javaee7 feature has resolved this issue. It remains to be seen whether or not I will need to add it back in.

Is the MEAN stack suitable for production?

I have been looking at the various MEAN stack frameworks out on the net, and whilst I'm impressed with what they achieve, I have one serious concern: the number of files used in a typical stack. meanstack.js uses over 15,000 files, whilst the bmean example has a modest 1,900 in comparison.
The question I am asking myself is whether I would be happy to put my trust in such a system from a production viewpoint. What happens when something goes wrong, and how easy is it going to be to find the answer? You can almost bet that when your most important customer logs on, it is going to go haywire. Also, what happens when Angular version 2 comes along? It could require a complete rewrite, but by then the stack you're using has been customised and is difficult to change.
Am I getting overly concerned about the technology? My intended approach is to strip the client-side code out of the bmean example and rewrite it with my own; at least that way I know (and control) what goes on in the client. Do you think this is the correct way to proceed?
With most systems there is a bit of preparation required before going to production. The same is true with mean.io (using multiple CPUs, improved aggregation, caching, etc.).
The large number of files is essentially a product of the way npm handles dependencies. Each module is able to define independent versions of the same dependencies, thus creating a bit of bloat, but at the same time allowing a lot of flexibility in Node.js code.
We currently have a number of mean.io projects in production phase and have been very happy with performance and the overall experience.
New releases of the project are scheduled every couple of months, upgrading should not be too much of a problem if you use the package system correctly.
Issues with the project are handled and managed through GitHub issues; additional support can be found on our IRC channel (freenode #mean_io) as well as on Facebook.
For commercial support, have a look at the support page.

VS2010 Different publishing locations based on configuration

I'm trying to divide my solution into three configurations:
Development
Testing
Release
Each of the above will have a different publishing location, so users can work with the release version, do their tests in testing, and see what is new in the development release. All three versions will be built with different name postfixes and icons and installed on each user's workstation.
For now I get:
Unable to install this application because an application with the
same identity is already installed. To install this application,
either modify the manifest version for this application or uninstall
the preexisting application."
I can't even install this more than once at one workstation.
So what can I do to achieve this?
You cannot install the same application multiple times unless you change the deployment identity. The easiest way to do this is by changing the assembly name. This article explains how.
As time has passed, I can now see that the solution was quite close; it just required me to specify my requirements first.
So now I can say that it mostly depends on the number of such configurations:
If the number is limited and low, i.e. live/test/dev, you can have each as a separate project in the solution, like AppLive, AppTest, AppDev. This requires refactoring to move everything that is common into separate projects, but it makes code and releases clearer and easier to manage.
If those configurations are unlimited, or the number is high, then the way to go is to load configurations from a file and pick one from the pool based on custom logic.
Currently I'm using a mix of both, as I want to be able to release test versions earlier than live ones. My application is also used by multiple branches, each of which has some unique styling, logos and such, so this is applied from an embedded XML file, and the proper set is identified based on Active Directory entries.

What are best development practices for multi JRE version support?

Our application needs to support 1.5 and 1.6 JVMs. The 1.5 support needs to stay clear of any 1.6 JRE dependencies, whereas the 1.6 support needs to exploit 1.6-only features.
When we change our Eclipse project to use a 1.5 JRE, we get all the dependencies flagged as errors. This is useful to see where our dependencies are, but is not useful for plain development. Committing source with such compile errors also feels wrong.
What are the best practices for this kind of multi JRE version support?
In C land, we had #ifdef compiler directives to solve such things fairly cleanly. What's the cleanest Java equivalent?
If your software must run on both JRE 1.5 and 1.6, then why don't you just develop for 1.5 only? Is there a reason why you absolutely need to use features that are only available in Java 6? Are there no third-party libraries that run on Java 1.5 that contain equivalents for the 1.6-only features that you want to use?
Maintaining two code bases, keeping them in sync etc. is a lot of work and is probably not worth the effort compared to what you gain.
Java of course has no preprocessor, so you can't (easily) do conditional compilation like you can in C with preprocessor directives.
It depends, of course, on how big your project is, but I'd say: don't do it, use Java 5 features only, and if there are Java 6-specific things you think you need, then look for third-party libraries that run on Java 5 and implement those things (or even write it yourself; in the long run, that might be less work than trying to maintain two code bases).
Compile most of your code as 1.5. Have a separate source directory for 1.6-specific code. The 1.6 source should depend upon the 1.5, but not vice-versa. Interfacing to the 1.6 code should be done by subtyping types from the 1.5 code. The 1.5 code may have an alternative implementation rather than checking for null everywhere.
Use a single piece of reflection once to attempt to load an instance of a root 1.6 class. The root class should check that it is running on 1.6 before allowing an instance to be created (I suggest both compiling with -target 1.6 and using a 1.6-only method in a static initialiser).
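A minimal sketch of that pattern, with hypothetical class names (Worker, Java5Worker, Java6Worker); only the factory touches reflection, and the 1.6 class refuses to initialise on a 1.5 JVM:

    // In the 1.5 source tree: the interface, the 1.5 implementation, and a
    // factory that tries the 1.6 class once and falls back.
    public interface Worker {
        void doWork();
    }

    class Java5Worker implements Worker {
        public void doWork() { System.out.println("1.5 code path"); }
    }

    final class Workers {
        static Worker create() {
            try {
                // the single reflective load of the root 1.6 class
                return (Worker) Class.forName("Java6Worker").newInstance();
            } catch (Throwable t) {
                return new Java5Worker();
            }
        }
    }

    // In the 1.6 source tree, compiled with -target 1.6:
    class Java6Worker implements Worker {
        static {
            // String.isEmpty() exists only since 1.6; together with the 1.6
            // class-file version, this guarantees initialisation fails on 1.5
            "".isEmpty();
        }
        public void doWork() { System.out.println("1.6 code path"); }
    }

Everything downstream of Workers.create() programs against the Worker interface, so no other 1.6 reference leaks into the 1.5 code.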
There are a few approaches you could use:
Compile against 1.6 and use testing to ensure functionality degrades gracefully; this is a process I've worked with on commercial products (1.4 target with 1.3 compatibility)
Use version-specific plugins and use the runtime to determine which to load; this requires some sort of plugin framework
Compile against 1.5 and use reflection to invoke 1.6 functionality; I would avoid this because of the added complexity over the first approach and the reduced performance
In all cases, you'll want to isolate the functionality and ensure the generated class files have a version of 49.0 (compile with a 1.5 target). You can use reflection to determine method/feature availability when initializing your façade classes.
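For instance, a façade can probe once for a 1.6-only method when the class initialises and keep the Method handle around. A hedged sketch: Arrays.copyOf is just one example of a 1.6-only API, and the class name is hypothetical:

    import java.lang.reflect.Method;
    import java.util.Arrays;

    // Compiled with a 1.5 target (class file version 49.0); the 1.6 API is
    // only ever touched through reflection.
    public final class CopyFacade {
        private static final Method COPY_OF = lookup();

        private static Method lookup() {
            try {
                // Arrays.copyOf(int[], int) was added in Java 6
                return Arrays.class.getMethod("copyOf", int[].class, int.class);
            } catch (NoSuchMethodException e) {
                return null; // running on 1.5
            }
        }

        public static int[] copy(int[] src, int newLength) {
            if (COPY_OF != null) {
                try {
                    return (int[]) COPY_OF.invoke(null, src, newLength);
                } catch (Exception e) {
                    // fall through to the 1.5 path
                }
            }
            int[] dst = new int[newLength];
            System.arraycopy(src, 0, dst, 0, Math.min(src.length, newLength));
            return dst;
        }
    }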
You could use your source control to help you a little if it does branching easily (git, svn or Perforce would work great). You could have two branches of code, the 1.5 only branch and then a 1.6 branch that branches off of the 1.5 line.
With this you can develop in 1.5 on the 1.5 branch and then merge your changes/bugfixes into the 1.6 branch as needed and then do any code upgrades for specific 1.6 needs.
When you need to release code you build it from whichever branch is required.
For your Eclipse, you can either maintain two workspaces, one for each branch or you can just have two sets of projects, one for each branch, though you will need to have different project names which can be a pain. I would recommend the workspaces approach (though it has its own pains).
You can then specify the required JVM version for each project/workspace as needed.
Hope this helps.
(Added: this would also make for an easy transition at such time when you no longer need the 1.5 support, you just close down that branch and start working only in the 1.6 branch)
One option would be to break the code into 3 projects.
One project would contain common stuff which would work on either version of java.
One project would contain the java6 implementations and would depend on the common project.
One project would contain the java5 implementations and would depend on the common project.
By breaking things into interfaces, with the version-specific projects supplying the implementations, you could eliminate any build dependencies between the java5 and java6 code. You would almost certainly need dependency injection of one kind or another to wire your concrete classes together, as sketched below.
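A minimal sketch of that layout, with hypothetical names; each implementation project sees only the common interface, and simple constructor injection does the wiring:

    // common project: no dependency on either implementation
    public interface Compressor {
        byte[] compress(byte[] data);
    }

    // java5 project: depends on common only
    class Java5Compressor implements Compressor {
        public byte[] compress(byte[] data) {
            return data; // 1.5-compatible implementation goes here
        }
    }

    // java6 project: depends on common only, free to use 1.6 APIs
    class Java6Compressor implements Compressor {
        public byte[] compress(byte[] data) {
            return data; // 1.6-specific implementation goes here
        }
    }

    // common project: code that needs compression is handed a Compressor
    // and never knows which implementation it received.
    class Archiver {
        private final Compressor compressor;
        Archiver(Compressor compressor) { this.compressor = compressor; }
        byte[] archive(byte[] data) { return compressor.compress(data); }
    }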
Working in Eclipse, you could set the java6 project to target java6, and the other 2 projects to target java5. By selecting the JRE on a project by project basis, you'd show up any dependencies you missed.
By getting a little clever with your build files, you could build the common bit both ways, and depend on the correct version, for deployment - though I'm not sure this would bring much benefit.
You would end up with two separate versions of your application - one for java6, one for java5.
