Spring AOP pointcut and advice in separate module

In my web application I have three modules: A, B, and C. B depends on A, and C depends on B. Now I want an aspect to define the behavior of methods in module B.
If I do not want to define the aspect in module B itself, should I define it in module A or C? Or does it not matter at all?
I am wondering whether aspect weaving in Spring is affected by build dependencies.

It depends on which kind of aspect weaving you are asking about. The kind of weaving that Spring does happens while the container is built, so you can perfectly well weave new aspects into already compiled and packaged classes at a later time. With the setup you describe, I usually declare aspects in the C module, which is my application, and leave the libraries A and B alone.
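To illustrate why this works, Spring's proxy-based weaving can be approximated with a plain JDK dynamic proxy: the target class is already compiled, and the "advice" is attached only when the object graph (the container) is assembled. This is a minimal sketch, not Spring itself; the Greeter interface and the logging advice are hypothetical stand-ins for a module-B service and an aspect declared in module C.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class WeaveSketch {
    // Hypothetical stand-in for a service interface from module B.
    public interface Greeter { String greet(String name); }

    public static class GreeterImpl implements Greeter {
        public String greet(String name) { return "Hello, " + name; }
    }

    // Container-build-time "weaving": wrap the already-compiled bean in a
    // proxy that runs advice around every interface method. Module B's
    // classes are untouched; only the wiring (done in module C) changes.
    public static Greeter withLogging(final Greeter target, final StringBuilder log) {
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
                        log.append("before ").append(m.getName()).append('\n');
                        Object result = m.invoke(target, args);
                        log.append("after ").append(m.getName()).append('\n');
                        return result;
                    }
                });
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        Greeter greeter = withLogging(new GreeterImpl(), log);
        System.out.println(greeter.greet("B")); // Hello, B
        System.out.print(log);                  // before greet / after greet
    }
}
```

Because the proxy is created at wiring time, nothing in module B has to know the aspect exists, which is exactly why declaring aspects in the top-level application module C is safe.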

Related

Google App Engine upgrading part by part

I have a complex App Engine service that was written in PHP, and now I want to migrate it to Python part by part.
Let's say my service has 2 parts: /signIn/... and /data/.... I want to migrate the /signIn/ part first, then /data/ later.
However, since my service is big, I want to build the new /signIn/ part in Python and then use Traffic Splitting to run some A/B testing on it.
My problem is that Traffic Splitting can be applied to versions only, so my old and new versions have to be in the same module, and the same module means they have to be written in the same language (I was wrong here, see the updated part). But I am migrating from PHP to Python.
What is the best solution for me?
Thanks,
Solution
With Dan Cornilescu's help, this is what I did:
Split the app into 2 modules: default and old-version.
Dispatch /signIn/ to the default module and the rest to the old-version module.
Make another version of /signIn/ (default module) in Python.
Configure Traffic Splitting to slowly increase the percentage of requests routed to the Python part. This allows us to test and make sure no serious bugs happen.
Note:
The /signIn/ part must be the default module, since GAE's traffic splitting works on the default module only.
I confirmed that we can have 2 versions of a module written in different languages.
One possible approach is to split your PHP app into modules as a first step. It's not a completely wasted effort; most of that work will be needed anyway just to allow your app to work in multiple modules, independent of the language change. I suspect this is actually why you can't use A/B testing: a mismatch between the modules. Unavoidable.
Once the split into modules is done, you can go on with your second step: switching the language for the selected module(s), with A/B testing as you intended.
A braver approach is to mix the two and write the /signin/ module directly in Python. On the PHP side you'd just remove the /signin/ portion (part of the first step mentioned earlier). This should work pretty well as long as you're careful to use only language-independent means for inter-module communication/operation: request paths, cookies, datastore/memcache keys, etc. A good module split would almost certainly ensure that.
You have testing options other than A/B, like this one: https://stackoverflow.com/a/33760403/4495081.
You can also have the new code/module able to serve the same requests as the old one, side-by-side/simultaneously and using a dispatch.yaml file to finely control which module actually serves which requests. This may allow a very focused migration, potentially offering higher testing confidence.
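As a sketch, a dispatch.yaml along these lines could route the traffic as described in the accepted solution (the module names default and old-version come from that solution; the exact URL patterns here are assumptions):

```yaml
# Hypothetical dispatch.yaml: send /signIn/ requests to the new Python
# module and everything else to the old PHP module.
dispatch:
  - url: "*/signIn/*"
    module: default
  - url: "*/*"
    module: old-version
```

Rules are matched top to bottom, so the more specific /signIn/ pattern must come first.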
I'm also not entirely sure that you can't have 2 versions of the same module in different languages - the versions should be pretty standalone instances, each serving its own requests in its own way using, at the lowest layer, the language-independent GAE infra services. AFAIK nothing stops a complete app re-write and deployment, with the same version or not - I've done that when learning GAE, though I didn't switch languages. I'd give it a try, but I don't have time to learn a new language right now :)

Can I have AspectJ for a Camel component (marshal and unmarshal)?

I tried to apply AspectJ to a Camel processor, but it is not working. My pointcut is below:
@Around("execution(* org.apache.camel.processor.UnmarshalProcessor.*(..))")
Is it possible to apply an aspect to a Camel processor? If yes, how?
Yes, you can, if you put the library on the inpath in a compile-time weaving scenario, creating modified versions of the third-party class files and using them during runtime.
In a load-time weaving scenario you can also do it dynamically, provided the weaving agent is loaded before the Camel classes, which should usually be the case.
As a workaround you can change the pointcut type from execution() to call(), intercepting the callers in your own code rather than the callee in the third-party library.
So you have at least three options, all of which work with AspectJ (though not in an "AOP lite" variant like Spring AOP).

Understanding CakePHP: What is the recommended way to modularize an application?

Usually I work with Zend Framework and now I'm developing an application on CakePHP and trying to understand the framework, in particular how to modularize an application.
In ZF there are modules. Every logical subdivision of an application can (and should) be packed to a separate module. It allows to keep the application structure clear.
There are no modules in CakePHP -- instead the framework provides plugins, and at first I thought plugins were the "modules" of CakePHP. But a plugin in CakePHP seems to be something more than a ZF module -- "behaving much like it would if it were an application on its own". Plugins should apparently be used for bigger things like a blog or a forum that have the characteristics of an independent application. So logical units like User, Order, Payment, or CustomerFeedback, which only make sense within the application, are probably not suitable as plugins.
Is there a recommended way / What is the recommended way in CakePHP to separate an application into small and well manageable logical parts and so to build a modularized application?
CakePHP plugins and ZF modules have a fair bit in common: both can contain MVC logic, library code, configuration, assets, tests, etc., and can pretty much behave like applications of their own.
However there's no need for such "whole application on its own" level of complexity in order for plugins to be the recommended way of managing logic. Of course they can also be used for less complex stuff like splitting your application into logical (and reusable) units.
A payment plugin, a user management plugin, an order processing plugin, a feedback plugin, that all makes sense, and it's neither wrong nor discouraged to use plugins in such a manner.
If you need some inspiration, check out Croogo for example, it's a CakePHP based CMS that manages its different parts using plugins.

How to wrap a C library so that it can be called from a web service

We have a library with very complex logic implemented in C. It has a command line interface with not too complex string-based arguments. In order to access this, we would like to wrap the library so that it can be accessed with simple XML RPC or even straightforward HTTP POST calls.
Having some experience with Java, my first idea would be
Wrap the library in JNI/JNA
Use a thin WS stack and a servlet engine
Proxy requests through Apache to the servlet engine
I believe there should already be something simple that could be used, so I am posting this question here. A solution has the following requirements:
It should be deployable to a current Linux distribution, preferably one already available via package management
It should integrate with a standard web server (as in my example Apache)
Small changes to the library's interface should be manageable
End-to-end (HTTP-WS-library-WS-HTTP) the solution should not incur too much overhead, but reliability is very important
Alternatively to the JNI/JNA proposal, I think in the C# world it should not be too difficult to write a web service and call this unmanaged code module, but I hope someone can give me some pointers that are feasible in regards to the requirements.
If you're going with web services, perhaps Soaplab would be useful. It's basically a tool to wrap existing command line applications in SOAP web services. The web services it generates look a bit weird but it is quite a popular way to make something like this work.
Creating an Apache module is quite easy, and since you're familiar with XML-RPC you should check out mod-xmlrpc2. You can easily add your C code to this Apache module and have a running XML-RPC server in minutes.
I think you could also publish it as a SOAP-based web service. gSOAP can be used to provide the service interface on top of the library. Have you explored gSOAP? See http://www.cs.fsu.edu/~engelen/soap.html
Depends what technology you're comfortable with, what you already have installed and working on your servers, and what your load requirements are.
How about raw CGI? Assuming the C code is stateless between requests, you can do this without modifying the library at all. Write a simple script which pulls the request parameters out of the CGI environment, perhaps sanitises the input, calls the library via the command-line interface, and packages the result into whatever HTTP response you want. Then configure Apache to dispatch the relevant URL(s) to this script. Python, for example, has library support for XML-RPC, and so does every other scripting language used on the web.
Servlets sound like overkill, but for instance if you want multiple requests per CGI process instantiation, and don't feel like getting involved in Apache configuration, then it might be easiest to stick with what you know.
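Since the question leans toward Java, here is a rough sketch of the same shell-out-and-package idea using only the JDK's built-in HTTP server instead of CGI. The endpoint path and the use of echo as a stand-in for the real C binary are assumptions for illustration; in practice you would call your library's command-line interface and sanitise the input first.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class CliGateway {
    // Shell out to the wrapped command-line tool and capture its stdout.
    // "echo" stands in here for the real C binary (hypothetical).
    public static String runTool(String... cmd) throws IOException {
        Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
        try (InputStream in = p.getInputStream()) {
            byte[] out = in.readAllBytes();
            p.waitFor();
            return new String(out, StandardCharsets.UTF_8).trim();
        } catch (InterruptedException e) {
            throw new IOException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        // Expose the tool over plain HTTP using the JDK's built-in server.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/run", exchange -> {
            String q = exchange.getRequestURI().getQuery();              // e.g. ?arg=hello
            String arg = (q != null && q.startsWith("arg=")) ? q.substring(4) : "";
            byte[] body = runTool("echo", arg).getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        int port = server.getAddress().getPort();
        try (InputStream in = new URL("http://localhost:" + port + "/run?arg=hello").openStream()) {
            System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
        } finally {
            server.stop(0);
        }
    }
}
```

For production you would still proxy this through Apache as suggested, but the core wrapping logic stays this small as long as the library's CLI is stateless between requests.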
I'm doing a similar thing with C++ at the moment. In my case, I'm writing a PHP module to allow PHP scripts to access the logic in my C++ library.
I can then use whatever format I want to allow the rest of the world to see it - initially it will just be through a PHP web application but I'll also be developing an XML-RPC interface.
If you're going down the JNI route, check out SWIG:
http://www.swig.org/Doc1.3/Java.html
Assuming you have headers to generate bindings from, SWIG is pretty easy to work with.

What are best development practices for multi JRE version support?

Our application needs to support 1.5 and 1.6 JVMs. The 1.5 support needs to stay clear of any 1.6 JRE dependencies, whereas the 1.6 support needs to exploit 1.6-only features.
When we change our Eclipse project to use a 1.5 JRE, we get all the dependencies flagged as errors. This is useful to see where our dependencies are, but is not useful for plain development. Committing source with such compile errors also feels wrong.
What are the best practices for this kind of multi JRE version support?
In C land, we had #ifdef compiler directives to solve such things fairly cleanly. What's the cleanest Java equivalent?
If your software must run on both JRE 1.5 and 1.6, then why don't you just develop for 1.5 only? Is there a reason why you absolutely need to use features that are only available in Java 6? Are there no third-party libraries that run on Java 1.5 that contain equivalents for the 1.6-only features that you want to use?
Maintaining two code bases, keeping them in sync etc. is a lot of work and is probably not worth the effort compared to what you gain.
Java of course has no preprocessor, so you can't (easily) do conditional compilation the way you can in C with preprocessor directives.
It depends of course on how big your project is, but I'd say: don't do it. Use Java 5 features only, and if there are Java 6-specific things you think you need, look for third-party libraries that run on Java 5 and implement those things (or even write them yourself - in the long run, that might be less work than trying to maintain two code bases).
Compile most of your code as 1.5. Have a separate source directory for 1.6-specific code. The 1.6 source should depend upon the 1.5, but not vice-versa. Interfacing to the 1.6 code should be done by subtyping types from the 1.5 code. The 1.5 code may have an alternative implementation rather than checking for null everywhere.
Use a single piece of reflection once to attempt to load an instance of a root 1.6 class. The root class should check that it is running on 1.6 before allowing an instance to be created (I suggest both compiling with -target 1.6 and using a 1.6-only method in a static initialiser).
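A minimal sketch of that reflective bootstrap follows. The class names Widget16 and Widget15 are hypothetical; because Widget16 is deliberately absent from the classpath in this sketch, the forName call fails and the 1.5 fallback path is exercised, which is exactly what would happen on a 1.5 JRE.

```java
public class FeatureLoader {
    // Fallback implementation that uses only 1.5-era APIs.
    public static class Widget15 { }

    // Try once, at startup, to load the 1.6-only root class by name.
    // On a real 1.6 deployment the class would exist and its static
    // initialiser would verify the JRE before instances are created.
    public static Object loadBest() {
        try {
            return Class.forName("Widget16").getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException | LinkageError e) {
            return new Widget15();
        }
    }

    public static void main(String[] args) {
        System.out.println(loadBest().getClass().getSimpleName()); // Widget15 here
    }
}
```

Catching LinkageError as well as the reflective exceptions matters: on a 1.5 JRE a class compiled with -target 1.6 fails with an UnsupportedClassVersionError (a LinkageError), not a ClassNotFoundException.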
There are a few approaches you could use:
Compile against 1.6 and use testing to ensure functionality degrades gracefully; this is a process I've worked with on commercial products (1.4 target with 1.3 compatibility)
Use version-specific plugins and use the runtime to determine which to load; this requires some sort of plugin framework
Compile against 1.5 and use reflection to invoke 1.6 functionality; I would avoid this because of the added complexity over the first approach and the reduced performance
In all cases, you'll want to isolate the functionality and ensure the generated class files have a version of 49.0 (compile with a 1.5 target). You can use reflection to determine method/feature availability when initializing your façade classes.
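The availability probe in such a façade can be as small as a single reflective lookup. As a sketch, java.util.Deque is a convenient marker class because it was added in Java 6:

```java
public class FeatureCheck {
    // java.util.Deque first appeared in Java 6, so loading it by name
    // distinguishes a 1.6+ runtime from 1.5 without linking against it.
    public static boolean runningOnJava6OrLater() {
        try {
            Class.forName("java.util.Deque");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(runningOnJava6OrLater()); // true on any modern JVM
    }
}
```

The façade runs this check once during initialization and caches the result, so the reflective cost is paid only at startup rather than on every call.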
You could use your source control to help you a little if it does branching easily (git, svn or Perforce would work great). You could have two branches of code, the 1.5 only branch and then a 1.6 branch that branches off of the 1.5 line.
With this you can develop in 1.5 on the 1.5 branch and then merge your changes/bugfixes into the 1.6 branch as needed and then do any code upgrades for specific 1.6 needs.
When you need to release code you build it from whichever branch is required.
In Eclipse, you can either maintain two workspaces, one per branch, or just have two sets of projects, one per branch, though you will then need different project names, which can be a pain. I would recommend the workspace approach (though it has its own pains).
You can then specify the required JVM version for each project/workspace as needed.
Hope this helps.
(Added: this would also make for an easy transition when you no longer need the 1.5 support - you just close down that branch and start working only in the 1.6 branch.)
One option would be to break the code into 3 projects.
One project would contain common stuff which would work on either version of java.
One project would contain the java6 implementations and would depend on the common project.
One project would contain the java5 implementations and would depend on the common project.
Breaking things into interfaces with implementations that implement those interfaces, you could eliminate any build dependencies. You would almost certainly need dependency injection of one kind or another to help wire your concrete classes together.
Working in Eclipse, you could set the java6 project to target java6, and the other 2 projects to target java5. By selecting the JRE on a project by project basis, you'd show up any dependencies you missed.
By getting a little clever with your build files, you could build the common bit both ways, and depend on the correct version, for deployment - though I'm not sure this would bring much benefit.
You would end up with two separate versions of your application - one for java6, one for java5.
