How to use Spring MVC annotations to create an XML configuration file? - google-app-engine

Because Google App Engine starts and stops instances regularly, incurring the cold-start cost each time, I'd like to configure my Spring MVC 3 app using XML to avoid the 3-5 second delay caused by scanning the class files for annotations when a new instance is spun up.
However, writing the XML is a bit of a chore, and it is much easier to define my configuration with annotations. So I'd like the best of both worlds: use the annotations to generate the config file, then turn off the scanning at runtime. From this question it seems there aren't any existing tools that will do this.
So what is the best way to approach this? Presumably there is a class in Spring that does the scanning at runtime and could be repurposed to scan at design time and then spit out the XML?
Are there any limitations on things which can be done from the annotation configuration which will not be possible in the xml configuration?

I would do this by using Spring to scan the package that contains the annotated classes, then using reflection to read the annotations on each class and its methods and writing the XML accordingly.
The class that does the scanning in Spring is ClassPathScanningCandidateComponentProvider. Here is a code snippet showing how it can be used:
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.context.annotation.ClassPathScanningCandidateComponentProvider;
import org.springframework.core.type.filter.AnnotationTypeFilter;
import org.springframework.stereotype.Component;

// Passing false disables the default filters, so only the include filter below matches
ClassPathScanningCandidateComponentProvider scanner =
        new ClassPathScanningCandidateComponentProvider(false);
scanner.addIncludeFilter(new AnnotationTypeFilter(Component.class));
for (String packageToScan : packagesToScan) {
    for (BeanDefinition bd : scanner.findCandidateComponents(packageToScan)) {
        Class<?> clazz = Class.forName(bd.getBeanClassName());
        // Use reflection on clazz to write the XML file
    }
}
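To flesh out that last comment, here is a minimal sketch of the XML-writing step. The helper and the bean-naming rule are assumptions of this sketch, not part of the Spring API; the ids simply mirror Spring's default of uncapitalizing the simple class name:

import java.io.PrintWriter;

// Hypothetical helper: emits one <bean> element for a scanned class
void writeBeanElement(PrintWriter out, Class<?> clazz) {
    String simpleName = clazz.getSimpleName();
    String id = Character.toLowerCase(simpleName.charAt(0)) + simpleName.substring(1);
    out.printf("<bean id=\"%s\" class=\"%s\"/>%n", id, clazz.getName());
}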
I hope this helps!

Related

JSF application: should I use microservices, and how?

I have a web application developed with JSF 2 and PrimeFaces. The project has been frozen for months, but it's quite advanced; the whole application runs inside the same container under GlassFish, so it's a monolith.
My application has a user interface, and its purpose is to let users organize URLs to tutorials (of any kind) as cards, with tags for classification, in folders. So every user has their own tree; they can search inside other users' trees, create a link to a file in their own tree, copy an entire folder, reorganize it, etc.
Nowadays we hear a lot about microservices, Spring Boot, AngularJS, React, etc. I like developing with JSF; it's a great framework. But I'm asking myself whether I should refactor my application, at least the necessary parts, into microservices, and whether JSF is appropriate for that or I should use other tools.
What I like with JSF, for example, is how easy it is to create views, its component approach, and how it handles the full lifecycle of a request.
For example, with a simple folder-creation form:
I have to choose the parent folder, so I can bind a search component to a backing bean that searches my DB indirectly through a DAO (in my app, an EJB using JPA). That happens at the "invoke application" phase and refreshes my form's list with Ajax at the end. When I submit the form, I can also bind a converter to the search component to retrieve a Folder object directly; the converter also uses a DAO to fetch the object I need at the "invoke application" phase to finish the job.
I also use validators to check different attributes of a new folder; usually I declare them inside my entity classes (Folder, User, ...) with annotations like @NotNull. Before I save the folder to my DB, I also check the user's rights to see whether they can write inside the parent folder, and so on. I do that inside the backing bean, so at the "invoke application" phase, and return a faces message if anything goes wrong.
When I read about microservices, I see that you can call them directly from a form using JSON for communication, so it seems quite different. For example, if I have a microservice for the CRUD operations on my folders, are the validators and the converters part of the service, or are they standalone services? And what about the security checks? That kind of architecture is quite mysterious to me.
PS: English is not my mother tongue, so please be indulgent :)
AngularJS is pretty ancient, man :)
You have to look at the pain points to identify ways to tear down your monolith. Monolith pains are usually a slow and painful dev cycle and difficult manual test phases. If you did the entire Arquillian thing and have full continuous integration with single-button deployments, you've slain the beast the hard way; not many have braved that route. But if you're looking at mounting feature creep with code freezes and manual test cycles, then yeah, you kind of want to pull some of those features out into a service you can redeploy very quickly.
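To make the validator question above concrete: in a typical split, bean validation moves with the service that owns the entity, and the security check lives in the same endpoint. A minimal sketch, assuming JAX-RS with bean validation (the resource path and the Folder fields are hypothetical):

import javax.validation.Valid;
import javax.validation.constraints.NotNull;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("/folders")
public class FolderResource {

    // The entity keeps its own constraints, just as in the JSF version
    public static class Folder {
        @NotNull
        public String name;
        @NotNull
        public Long parentId;
    }

    @POST
    public Response create(@Valid Folder folder) {
        // @Valid triggers the same @NotNull checks the JSF validators ran;
        // the rights check (may this user write under parentId?) also belongs here
        return Response.status(Response.Status.CREATED).build();
    }
}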

How do I use the CakePHP Configure class?

I am trying to use the Configure class in CakePHP, but I'm not sure if I am using it correctly. I have read through the cook book and the API, but I can't seem to do what I want.
I have created a configuration file: app/config/config.php. I can directly edit this file and set variables in there and access them using Configure::read().
Is it possible to update the values of the configuration file from the application itself, i.e., from a controller? I have tried using Configure::write(), but this does not seem to change the value.
app/config/config.php isn't a file that's automatically loaded by Cake. Either move these variables into app/config/bootstrap.php or tell your bootstrap.php file to load your custom file. You could also put your variables in app/config/core.php, but I'd recommend against that. I tend to like leaving that file alone and adding/overwriting values in bootstrap.php.
According to the API, Configure is supposed to be used "for managing runtime configuration information".
You can use its methods to create, read, update and delete (CRUD) configuration variables at runtime. The Configure class is available everywhere in your CakePHP application, so these CRUD operations can be performed on its data anywhere, including in a controller.
If you are looking for persistent storage, consider a database (SQL or NoSQL). I would not recommend using a text file, as it raises a lot of security concerns. Even if security is not an issue, a database is probably a more fitting solution.
More on the Configure class in the Cookbook.

How to specify which StAX parser to use

I have both Woodstox and the Java SE 1.6 StAX parser on the classpath, but Woodstox seems to get selected by default.
However, in certain cases I'd like to use the default Java StAX parser. Is there any way to specify which implementation to use?
The easiest way is to directly instantiate the one you want; there is no need to use XMLInputFactory.newInstance(). For Woodstox you would instantiate com.ctc.wstx.stax.WstxInputFactory. For the Sun implementation it is something else (com.sun.sjsxp or such); you can see the class name if you instantiate it via the StAX API when the Woodstox jar is not on the classpath.
But if you absolutely want to use indirection, the value of the system property "javax.xml.stream.XMLInputFactory" is used, as per the javadocs: the value is the name of the class to instantiate.
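A minimal sketch of both approaches, using the Woodstox factory class named above:

import javax.xml.stream.XMLInputFactory;

// Option 1: bypass the lookup and instantiate the Woodstox factory directly
XMLInputFactory direct = new com.ctc.wstx.stax.WstxInputFactory();

// Option 2: pin the implementation via the system property, then go through
// the standard StAX lookup; newInstance() will instantiate the named class
System.setProperty("javax.xml.stream.XMLInputFactory",
        "com.ctc.wstx.stax.WstxInputFactory");
XMLInputFactory viaLookup = XMLInputFactory.newInstance();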
I had a similar problem: my local JBoss has Woodstox on the path but the remote server doesn't (or something is not properly configured). So I chose to instantiate the reference implementation:
// Use the BEA streaming parser (the StAX reference implementation,
// com.bea.xml.stream.XMLOutputFactoryBase) to avoid runtime exceptions
XMLOutputFactory xmlof = new XMLOutputFactoryBase();

The RIA Services generated file contains all the proxy code; can this be split?

RIA Services generates all the client code in a single file (namespace.g.cs) for every domain service and DTO class. I was wondering whether it is possible to configure it to generate a separate file for each class?
JD.
PS: The reason I am asking is that I was hoping it would make classes easier to navigate with ReSharper, as it is a bit tricky when all the classes are in one file.
There is no easy way to get it to generate separate files. MS chose the easier route when building that generator.
You could always write a little app to split it up into separate class files. The logic is not difficult. If you have to do the splitting often, or for a large team, it would be justifiable to spend half a day building it.
(I might even make one myself, as I have the same problem. If I do, I will post it on my website for all to use.)

Parse a log4j log file

We have several applications that use log4j for logging. I need to get a log4j parser working so we can combine multiple log files and run automated analysis on them. I'm not looking to reinvent the wheel, so can someone point me to a decent pre-existing parser? I do have the log4j conversion pattern if that helps.
If not, we'll have to roll our own.
I didn't realize that Log4J ships with an XML appender.
The solution was: specify an XML appender in the logging configuration file, include that output XML file as an entity in a well-formed XML file, then parse the XML using your favorite technique.
The other methods had the following limitations:
Apache Chainsaw - not automated enough
JDBC - poor performance in a high-performance distributed app
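For reference, the XML appender can also be set up programmatically with log4j 1.x's XMLLayout; a minimal sketch (the output file name is hypothetical):

import org.apache.log4j.FileAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.xml.XMLLayout;

// Programmatic equivalent of declaring an XML appender in the config file;
// FileAppender's constructor throws IOException, so call this where it is handled
FileAppender xmlAppender = new FileAppender(new XMLLayout(), "app-log.xml");
Logger.getRootLogger().addAppender(xmlAppender);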
You can use OtrosLogViewer with batch processing. You have to:
Define your log format; you can use the Log4j pattern layout parser or the Log4j XmlLayout
Create a Java class that implements LogDataParsedListener. The method public void logDataParsed(LogData data, BatchProcessingContext context) will be called for every parsed log event (see the sketch after this list).
Create a jar
Run OtrosLogViewer, specifying your log-processing jar, your LogDataParsedListener implementation, and the log files.
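A minimal listener sketch is below; the import paths are assumptions about the OtrosLogViewer API, so verify them against the jar you ship with:

// NOTE: the package names in these imports are assumptions -- check the OtrosLogViewer jar
import pl.otros.logview.LogData;
import pl.otros.logview.batch.BatchProcessingContext;
import pl.otros.logview.batch.LogDataParsedListener;

public class CountingListener implements LogDataParsedListener {

    private int count = 0;

    // Called once for every parsed log event; plug your analysis in here
    @Override
    public void logDataParsed(LogData data, BatchProcessingContext context) {
        count++;
    }
}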
What you are looking for is called Sawmill, or something like it.
Log4j log files aren't really suitable for parsing; they're too complex and unstructured. There are third-party tools that can do it, I believe (e.g. Sawmill).
If you need to perform automated, custom analysis of the logs, you should consider logging to a database and analysing that. Log4j ships with the JDBCAppender, which appends all messages to a database of your choice, but it has performance implications and it's a bit flaky. There are other, similar alternatives on the interweb, though (like this one).
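A minimal sketch of wiring that appender up programmatically; the driver class, connection URL, credentials, and LOGS table are all hypothetical:

import org.apache.log4j.Logger;
import org.apache.log4j.jdbc.JDBCAppender;

JDBCAppender jdbc = new JDBCAppender();
jdbc.setDriver("org.h2.Driver");   // hypothetical JDBC driver
jdbc.setURL("jdbc:h2:./logs");     // hypothetical database
jdbc.setUser("sa");
jdbc.setPassword("");
// The SQL is expanded like a pattern layout, producing one INSERT per log event
jdbc.setSql("INSERT INTO LOGS (log_level, message) VALUES ('%p', '%m')");
Logger.getRootLogger().addAppender(jdbc);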
You -can- use Log4j's Chainsaw V2 to process the various log files and collect them into one table, and either output those events as XML or use Chainsaw's built-in expression-based filtering, searching and colorizing support to slice and dice the logs.
Steps:
- Start Chainsaw V2
- Create a chainsaw configuration file by copying the example configuration file available from the Welcome tab - define one LogFilePatternReceiver 'plugin' entry for each log file that you want to process
- Start Chainsaw with that configuration
- Each log file will end up as a separate tab in the UI
- Pause the chainsaw-log tab and clear the events from that tab
- Create a new tab which aggregates the events from the various tabs by going to the 'View, create custom expression LogPanel' menu item and entering 'level >= DEBUG' in the box. This creates a new tab containing events from all of the tabs with level >= DEBUG (which is why you cleared the chainsaw-log tab).
You can get an overview of the expression syntax used to filter, colorize and search from the tutorial (available from the Help menu).
If you don't want to use Chainsaw, you can do something similar: start a simple app that doesn't log but loads a log4j.xml config file with the 'plugin' entries you defined for the Chainsaw configuration, and also defines a FileAppender with an XMLLayout; all of the events received by the 'receivers' will be sent to that single appender.
