OpenJPA exception in ServiceMix environment - apache-camel

ServiceMix, Camel, CXF
I wrote some database-manipulation procedures that use JPA. They are not very complex, they work correctly, and I wrote a process that uses some of them.
The process "starter" is a file-based Camel route, and everything works perfectly.
OK, now let's make the starter a web service. No problem: we use ServiceMix, so let's do it with CXF. It is simple and we have an integrated environment, so what could be the problem?
The WS is ready, I call the procedure and... I get an exception:
<openjpa-2.3.0-r422266:1540826 nonfatal user error>
org.apache.openjpa.persistence.ArgumentException: An error occurred while parsing the query filter
"select i from IntegratedSystem i where i.code = :value".
Error message: The name "IntegratedSystem" is not a recognized entity or identifier.
Perhaps you meant IntegratedSystem, which is a close match.
Known entity names: [Category, EsbLog, Message, MsgDispatcherCfg,
ConsumerRequest, ProviderResponse, ServiceRegistry, ConsumerResponse,
IntegratedSystem, ProviderRequest, CategoryItem]
It is very interesting, because the expected entity and the suggested close match are the same.
The question:
If I call a procedure from the Camel route, JPA works correctly; if I call it from the WS implementation, JPA doesn't recognize the entity. Do you have any idea?
(The WS and the Camel route are in the same project (and the same package), and if I replace the JPA select with a native select it works correctly. That is not a good solution, because I use more than one query and I want to use the full potential of JPA.)
Thank you!
Feri

I had a similar issue; the cause was that the OpenJPA enhancer plugin in Maven could not enhance the entity classes because it could not find persistence.xml.
The required changes were:
<enforcePropertyRestrictions>true</enforcePropertyRestrictions>
<persistenceXmlFile>${project.basedir}/src/main/resources/META-INF/persistence.xml</persistenceXmlFile>
<classes>${project.build.outputDirectory}</classes>
<workDir>${project.build.directory}\openjpa-work</workDir>
The full plugin configuration:
<plugin>
<groupId>org.apache.openjpa</groupId>
<artifactId>openjpa-maven-plugin</artifactId>
<version>2.3.0</version>
<configuration>
<includes>**/model/**/*.class</includes>
<addDefaultConstructor>true</addDefaultConstructor>
<enforcePropertyRestrictions>true</enforcePropertyRestrictions>
<persistenceXmlFile>${project.basedir}/src/main/resources/META-INF/persistence.xml</persistenceXmlFile>
<classes>${project.build.outputDirectory}</classes>
<workDir>${project.build.directory}\openjpa-work</workDir>
</configuration>
<executions>
<execution>
<id>enhancer</id>
<phase>process-classes</phase>
<goals>
<goal>enhance</goal>
</goals>
</execution>
</executions>
<dependencies>
<dependency>
<groupId>org.apache.openjpa</groupId>
<artifactId>openjpa</artifactId>
<version>${openjpa.version}</version>
</dependency>
</dependencies>
</plugin>
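For completeness, the persistence.xml the enhancer needs to find is a standard JPA descriptor that lists the entity classes. A minimal sketch (the unit name, provider and package names here are illustrative, not taken from the original project) might look like:
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="esb-unit" transaction-type="RESOURCE_LOCAL">
    <provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
    <!-- every entity must be listed (or otherwise discoverable) so both the enhancer and the runtime see it -->
    <class>com.example.model.IntegratedSystem</class>
    <class>com.example.model.ServiceRegistry</class>
  </persistence-unit>
</persistence>
If the file is not found at build time, the classes are left unenhanced and the runtime may fail to recognize them, which matches the symptom above.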
Hope it helps someone!

Related

Apache Karaf feature offline issue

In Karaf's org.apache.karaf.features.cfg file I have added:
featuresRepositories=mvn:org.apache.cxf.karaf/apache-cxf/3.0.8/xml/features
featuresBoot = cxf-jaxws
The cxf feature can be fetched and installed when Karaf starts with a network connection.
But it fails without a connection. How can I pre-install the cxf feature?
This is likely far from the most optimal solution (I would love to hear about better ones), but you could create an offline-repository project using karaf-feature-archetype and configure karaf-maven-plugin with something like the following configuration:
<plugin>
<groupId>org.apache.karaf.tooling</groupId>
<artifactId>karaf-maven-plugin</artifactId>
<configuration>
<startLevel>50</startLevel>
<aggregateFeatures>true</aggregateFeatures>
<checkDependencyChange>true</checkDependencyChange>
<failOnDependencyChange>false</failOnDependencyChange>
<logDependencyChanges>true</logDependencyChanges>
<overwriteChangedDependencies>true</overwriteChangedDependencies>
</configuration>
<executions>
<execution>
<id>features-add-to-repo</id>
<phase>generate-resources</phase>
<goals>
<goal>features-add-to-repository</goal>
</goals>
<configuration>
<descriptors>
<!-- Feature repository paths -->
<descriptor>mvn:groupId/artifactId/version/xml/features</descriptor>
</descriptors>
<features>
<!-- features and their artifacts + dependencies to add to offline repository-->
<feature>featureName</feature>
<feature>featureName/version</feature>
</features>
<repository>target/offline-repository</repository>
</configuration>
</execution>
</executions>
</plugin>
When packaging the project, e.g. with mvn clean install (in an environment with online access), it generates the offline repository under the target folder. You can copy it to your offline environment and tell Karaf to use it by adding it to org.ops4j.pax.url.mvn.defaultRepositories in the file org.ops4j.pax.url.mvn.cfg, e.g. file:${user.home}/offline-repository#snapshots#id=local if it is located in the home directory.
The features.xml itself can be empty; it exists only so that karaf-maven-plugin can be used, not to create an actual feature repository.
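An "empty" features.xml in this sense is just a feature repository with nothing in it, for example (the schema version and repository name are assumptions; adjust to your Karaf version):
<features xmlns="http://karaf.apache.org/xmlns/features/v1.3.0" name="offline-repository">
  <!-- intentionally left empty: the project exists only so karaf-maven-plugin can run -->
</features>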
Just be careful if you need to create a new version of the offline repository to replace an old one: if the new version is missing any of the artifacts that are currently installed in Karaf, it can cause issues when trying to remove/uninstall them.

Maven cxf plugin logging

I'm using the Apache CXF Maven plugin (v3.3.0) to successfully generate Java wrappers.
However, the output from the Maven build contains thousands of DEBUG logging lines from wsdl2java which I am unable to remove. Is there an extraarg or another way to silence the process so that I get just a success (or failure) message?
<plugin>
<groupId>org.apache.cxf</groupId>
<artifactId>cxf-codegen-plugin</artifactId>
<version>${cxf.version}</version>
<executions>
<execution>
<id>generate-sources</id>
<phase>generate-sources</phase>
<configuration>
<sourceRoot>${project.build.directory}/generated-sources/cxf</sourceRoot>
<defaultOptions>
<autoNameResolution>true</autoNameResolution>
</defaultOptions>
<wsdlOptions>
<!--Some Web Service -->
<wsdlOption>
<wsdl>https://some/web/service.wsdl</wsdl>
<extraargs>
<extraarg>-client</extraarg>
<extraarg>-quiet</extraarg>
<extraarg>-p</extraarg>
<extraarg>com.foo.bar</extraarg>
</extraargs>
</wsdlOption>
</wsdlOptions>
</configuration>
<goals>
<goal>wsdl2java</goal>
</goals>
</execution>
</executions>
</plugin>
It appears that under Java 9+ the plugin forces code generation in a forked JVM, regardless of the default being documented as false and regardless of any explicit configuration of this option. As a result, the plugin execution doesn't see any logging configuration from the project. CXF logs using java.util.logging, and everything down to FINER severity gets printed to the console.
I solved this by providing an explicit path to a logging configuration file to the forked JVM using the plugin's additionalJvmArgs configuration option:
<plugin>
<groupId>org.apache.cxf</groupId>
<artifactId>cxf-codegen-plugin</artifactId>
<version>${cxf-plugin.version}</version>
<configuration>
<additionalJvmArgs>-Dlogback.configurationFile=${project.basedir}/src/test/resources/logback-codegen.xml</additionalJvmArgs>
</configuration>
</plugin>
The system property for Logback (as in my case) is logback.configurationFile. For Log4j that would be log4j.configurationFile.
In the logging configuration file the following loggers can be added (Logback):
<!-- entries below silence excessive logging from cxf-codegen-plugin -->
<logger name="org.apache.cxf" level="info"/>
<logger name="org.apache.velocity" level="info"/>
This way the plugin execution will still print all warnings and errors to the console, but all the repetitive debug information goes away. The drawback is that you need to have such a logging configuration file visible in each of your projects, but then, you probably should have one anyway; the same one as for (unit) tests can often be used.
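For context, a complete logback-codegen.xml along these lines could look like the following sketch; only the two logger entries come from the answer above, the appender and root-level setup are assumptions about a typical project:
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- entries below silence excessive logging from cxf-codegen-plugin -->
  <logger name="org.apache.cxf" level="info"/>
  <logger name="org.apache.velocity" level="info"/>

  <root level="info">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>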
Sorry for not getting to the root of the problem, but adding this to my dependencies helps:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
<exclusions>
<exclusion>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-logging</artifactId>
</exclusion>
</exclusions>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-log4j2</artifactId>
<scope>provided</scope>
</dependency>
Or putting the following in the plugin execution helps too, except for the Velocity logs:
<additionalJvmArgs>
-Dorg.apache.cxf.Logger=null
</additionalJvmArgs>
Hope this gives a hint for someone to come up with a more proper solution.
Using Maven 3.8.4 and cxf-codegen-plugin:3.5.1:wsdl2java, I've reduced the log level with:
<configuration>
<additionalJvmArgs>-D.level=WARN</additionalJvmArgs>
</configuration>
More details here: Apache CXF

reference.conf exception when running flink application

I have a dropwizard application using Flink to read from Kafka but the application blows up with this exception when I start it:
java -jar my-app.jar server my-config.yaml
[2018-01-04T01:04:24,577Z](main)([]) INFO - FlinkMiniCluster - Stopping
FlinkMiniCluster.
[2018-01-04T01:04:24,591Z](main)([]) WARN - ROOT - unavailable
! com.typesafe.config.ConfigException$UnresolvedSubstitution:
reference.conf # jar:file:/my-app.jar!/reference.conf: 804: Could not
resolve substitution to a value: ${akka.stream.materializer}
at com.typesafe.config.impl.ConfigReference.resolveSubstitutions(ConfigReference.java:108)
at com.typesafe.config.impl.ResolveContext.realResolve(ResolveContext.java:179)
at com.typesafe.config.impl.ResolveContext.resolve(ResolveContext.java:142)
at com.typesafe.config.impl.SimpleConfigObject$ResolveModifier.modifyChildMayThrow(SimpleConfigObject.java:379)
at com.typesafe.config.impl.SimpleConfigObject.modifyMayThrow(SimpleConfigObject.java:312)
at com.typesafe.config.impl.SimpleConfigObject.resolveSubstitutions(SimpleConfigObject.java:398)
at com.typesafe.config.impl.ResolveContext.realResolve(ResolveContext.java:179)
at com.typesafe.config.impl.ResolveContext.resolve(ResolveContext.java:142)
at com.typesafe.config.impl.SimpleConfigObject$ResolveModifier.modifyChildMayThrow(SimpleConfigObject.java:379)
at com.typesafe.config.impl.SimpleConfigObject.modifyMayThrow(SimpleConfigObject.java:312)
at com.typesafe.config.impl.SimpleConfigObject.resolveSubstitutions(SimpleConfigObject.java:398)
at com.typesafe.config.impl.ResolveContext.realResolve(ResolveContext.java:179)
at com.typesafe.config.impl.ResolveContext.resolve(ResolveContext.java:142)
at com.typesafe.config.impl.SimpleConfigObject$ResolveModifier.modifyChildMayThrow(SimpleConfigObject.java:379)
at com.typesafe.config.impl.SimpleConfigObject.modifyMayThrow(SimpleConfigObject.java:312)
at com.typesafe.config.impl.SimpleConfigObject.resolveSubstitutions(SimpleConfigObject.java:398)
at com.typesafe.config.impl.ResolveContext.realResolve(ResolveContext.java:179)
at com.typesafe.config.impl.ResolveContext.resolve(ResolveContext.java:142)
at com.typesafe.config.impl.SimpleConfigObject$ResolveModifier.modifyChildMayThrow(SimpleConfigObject.java:379)
at com.typesafe.config.impl.SimpleConfigObject.modifyMayThrow(SimpleConfigObject.java:312)
at com.typesafe.config.impl.SimpleConfigObject.resolveSubstitutions(SimpleConfigObject.java:398)
at com.typesafe.config.impl.ResolveContext.realResolve(ResolveContext.java:179)
at com.typesafe.config.impl.ResolveContext.resolve(ResolveContext.java:142)
at com.typesafe.config.impl.SimpleConfigObject$ResolveModifier.modifyChildMayThrow(SimpleConfigObject.java:379)
at com.typesafe.config.impl.SimpleConfigObject.modifyMayThrow(SimpleConfigObject.java:312)
at com.typesafe.config.impl.SimpleConfigObject.resolveSubstitutions(SimpleConfigObject.java:398)
at com.typesafe.config.impl.ResolveContext.realResolve(ResolveContext.java:179)
at com.typesafe.config.impl.ResolveContext.resolve(ResolveContext.java:142)
at com.typesafe.config.impl.ResolveContext.resolve(ResolveContext.java:231)
at com.typesafe.config.impl.SimpleConfig.resolveWith(SimpleConfig.java:74)
at com.typesafe.config.impl.SimpleConfig.resolve(SimpleConfig.java:64)
at com.typesafe.config.impl.SimpleConfig.resolve(SimpleConfig.java:59)
at com.typesafe.config.impl.SimpleConfig.resolve(SimpleConfig.java:37)
at com.typesafe.config.impl.ConfigImpl$1.call(ConfigImpl.java:374)
at com.typesafe.config.impl.ConfigImpl$1.call(ConfigImpl.java:367)
at com.typesafe.config.impl.ConfigImpl$LoaderCache.getOrElseUpdate(ConfigImpl.java:65)
at com.typesafe.config.impl.ConfigImpl.computeCachedConfig(ConfigImpl.java:92)
at com.typesafe.config.impl.ConfigImpl.defaultReference(ConfigImpl.java:367)
at com.typesafe.config.ConfigFactory.defaultReference(ConfigFactory.java:413)
at akka.actor.ActorSystem$Settings.<init>(ActorSystem.scala:307)
at akka.actor.ActorSystemImpl.<init>(ActorSystem.scala:683)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:245)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:288)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:263)
at akka.actor.ActorSystem$.create(ActorSystem.scala:191)
at org.apache.flink.runtime.akka.AkkaUtils$.createActorSystem(AkkaUtils.scala:106)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster.startJobManagerActorSystem(FlinkMiniCluster.scala:300)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster.singleActorSystem$lzycompute$1(FlinkMiniCluster.scala:329)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster.org$apache$flink$runtime$minicluster$FlinkMiniCluster$$singleActorSystem$1(FlinkMiniCluster.scala:329)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster$$anonfun$1.apply(FlinkMiniCluster.scala:343)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster$$anonfun$1.apply(FlinkMiniCluster.scala:341)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster.start(FlinkMiniCluster.scala:341)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster.start(FlinkMiniCluster.scala:323)
at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:107) ...
My Flink stream is pretty basic:
environment
.addSource(new FlinkKafkaConsumer010<>(...)
.name("source name 1")
.union(environment.addSource(new FlinkKafkaConsumer010<>(...)
.name("source name 2"))
.map(new MyMapFunction())
.addSink(new PrintSinkFunction<>())
.name("Sink: Print");
Strangely enough, the application runs just fine and successfully creates a FlinkMiniCluster when debugging in IDEA.
I'm using Flink 1.4 and did not start a Flink job manager when running from IDEA or the command line.
Is there a configuration I need to be setting up to run from the command line?
FYI: I determined that the Akka dependencies from Flink were not being recognized at runtime, so I manually added them to my application's pom:
<dependency>
<groupId>com.typesafe.akka</groupId>
<artifactId>akka-actor_2.11</artifactId>
<version>${akka.version}</version>
</dependency>
<dependency>
<groupId>com.typesafe.akka</groupId>
<artifactId>akka-protobuf_2.11</artifactId>
<version>${akka.version}</version>
</dependency>
<dependency>
<groupId>com.typesafe.akka</groupId>
<artifactId>akka-stream_2.11</artifactId>
<version>${akka.version}</version>
</dependency>
As CPS said, the reference.conf file in the shaded JAR has to be a merge of the separate Akka modules' reference.conf files. To get it, the following shade transformer has to be added to the shade configuration:
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.0.0</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<artifactSet>
...
</artifactSet>
<transformers>
<transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
<resource>reference.conf</resource>
</transformer>
</transformers>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
I ran into a small constraint: I also use the ManifestResourceTransformer to set the mainClass, and I had to order the two transformers with ManifestResourceTransformer first and AppendingTransformer second (otherwise ManifestResourceTransformer modifies the entry method of the AppendingTransformer).
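In other words, the transformer section would be ordered like this (a sketch; the main class name is a placeholder, not from the original project):
<transformers>
  <!-- must come first when both transformers are used -->
  <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
    <mainClass>com.example.MyFlinkJob</mainClass>
  </transformer>
  <!-- merges the reference.conf files of all Akka modules into one -->
  <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
    <resource>reference.conf</resource>
  </transformer>
</transformers>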
This answer to a similar question solves the issue. It has nothing to do with Flink, but with how Akka configuration is handled. From their documentation:
Akka’s configuration approach relies heavily on the notion of every module/jar having its own reference.conf file, all of these will be discovered by the configuration and loaded. Unfortunately this also means that if you put/merge multiple jars into the same jar, you need to merge all the reference.confs as well. Otherwise all defaults will be lost and Akka will not function.
Using the shade plugin to make the jar solves the problem.
It might be caused by forgetting to use the build-jar build profile in
mvn clean install -Pbuild-jar
as documented in the Flink documentation.

How to create and drop database in DB2 using maven script

I am trying to run a Maven script that drops and creates the databases.
This is my XML:
.....
<plugin>
<!-- Used to automatically drop (if any) and create a database prior
to running integration test cases. -->
<groupId>org.codehaus.mojo</groupId>
<artifactId>sql-maven-plugin</artifactId>
<version>1.5</version>
<dependencies>
<!-- <dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.22</version>
</dependency> -->
<dependency>
<groupId>db2.connector</groupId>
<artifactId>db2.connector</artifactId>
<version>10.5.0.1</version>
<scope>system</scope>
<systemPath>${basedir}/../data/target/db2jcc.jar</systemPath>
</dependency>
<dependency>
<groupId>db2.connector</groupId>
<artifactId>db2.connector4</artifactId>
<version>10.5.0.1</version>
<scope>system</scope>
<systemPath>${basedir}/../data/target/db2jcc4.jar</systemPath>
</dependency>
</dependencies>
<configuration>
<!-- common configuration shared by all executions -->
<driver>com.ibm.db2.jcc.DB2Driver</driver>
<username>db2inst1</username>
<password>password</password>
<url>jdbc:db2://192.168.0.81:50000/db</url>
<forceMojoExecution>true</forceMojoExecution>
</configuration>
<executions>
<execution>
<id>drop-db-before-test-if-any</id>
<phase>pre-integration-test</phase>
<goals>
<goal>execute</goal>
</goals>
<configuration>
<autocommit>true</autocommit>
<sqlCommand>drop database db</sqlCommand>
<onError>continue</onError>
</configuration>
</execution>
<execution>
<id>create-db</id>
<phase>pre-integration-test</phase>
<goals>
<goal>execute</goal>
</goals>
<configuration>
<autocommit>true</autocommit>
<sqlCommand>create database db</sqlCommand>
</configuration>
</execution>
<execution>
<id>create-schema</id>
<phase>pre-integration-test</phase>
<goals>
<goal>execute</goal>
</goals>
<configuration>
<url>jdbc:db2://192.168.0.81:50000/sample</url>
<autocommit>true</autocommit>
<srcFiles>
<srcFile>${basedir}/../data/src/main/resources/create_database_db2.sql</srcFile>
</srcFiles>
</configuration>
</execution>
</executions>
</plugin>
.....
The error I get is:
[ERROR] com.ibm.db2.jcc.am.SqlSyntaxErrorException: DB2 SQL Error: SQLCODE=-104,
SQLSTATE=42601, SQLERRMC=database;drop ;<program_or_package>, DRIVER=3.66.46
[INFO] 0 of 1 SQL statements executed successfully
I get a similar error for create database.
What am I doing wrong here?
I'm not a Maven expert, but the issue you're running into is that you're trying to run DROP DATABASE as an SQL statement, when it is actually a DB2 Command Line Processor (CLP) command.
This thread might be of some help, it's about running DB2 CLP commands in Java.
As Bhamby said, you cannot execute a "create database" or a "drop database" as an SQL statement. That is the reason you are getting those errors.
Command script via exec-maven-plugin
What you can do is execute a command from Maven, but first you have to make sure the DB2 environment is correctly loaded. You can use the exec-maven-plugin; instead of executing separate commands to load the DB2 profile and then create the database, you can write a script that receives the database name as a parameter and creates the database. The drawback is that you have to write one script for Linux and one for Windows. For example, on Linux:
create.sh
#!/bin/bash
. /home/db2inst1/sqllib/db2profile
db2 create db $1
The instance home directory here is /home/db2inst1.
Also, you have to make sure the user executing Maven has the necessary rights on the instance to create a new database, i.e. the user should be in the SYSADM or SYSCTRL group: http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/topic/com.ibm.db2.luw.admin.cmd.doc/doc/r0001941.html
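A sketch of how such a script could be wired into the build with exec-maven-plugin (the phase, script path and database name below are assumptions; adjust them to your project):
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>1.6.0</version>
  <executions>
    <execution>
      <id>create-db2-database</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <executable>${basedir}/src/main/scripts/create.sh</executable>
        <arguments>
          <!-- database name passed as $1 to the script -->
          <argument>db</argument>
        </arguments>
      </configuration>
    </execution>
  </executions>
</plugin>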
Java
You cannot create the database via Java, because DB2 does not provide a Java API for this. Instead, you can create a C routine and invoke it from Java via JNI. This way, you can customize the creation/dropping process from Java rather than from a script.

What is the simplest way to aggregate/assemble multiple (js) files into one (js) file with a maven plugin WITHOUT compression?

I would like to aggregate/assemble multiple JS files into one without minifying or obfuscating them, using a Maven plugin.
I am already using the YUI Compressor plugin to obfuscate some JS files into one:
<plugin>
<groupId>net.alchim31.maven</groupId>
<artifactId>yuicompressor-maven-plugin</artifactId>
<version>1.2</version>
<executions>
<execution>
<id>obfuscate</id>
<phase>process-resources</phase>
<goals>
<goal>compress</goal>
</goals>
<configuration>
<nosuffix>true</nosuffix>
<linebreakpos>-1</linebreakpos>
<aggregations>
<aggregation>
<removeIncluded>true</removeIncluded>
<insertNewLine>false</insertNewLine>
<output>${project.build.directory}/${project.build.finalName}/all.js</output>
<includes>
<include>**/*.js</include>
</includes>
<excludes>
<exclude>**/include/*.js</exclude>
</excludes>
</aggregation>
</aggregations>
</configuration>
</execution>
</executions>
</plugin>
Now I want the same JS files aggregated, without minification or obfuscation, into a file allForDev.js. The goal is to have one file for development and one for production. It's going to be useful to see the whole scripts when debugging in developer tools. If I don't find a way to do this, I'll be forced to place a lot of script tags to load all those scripts (which is not the end of the world :) but I would like to do it in a cleaner way).
I can see that the assembly plugin has the following formats:
zip tar.gz tar.bz2 jar dir war and any other format that the
ArchiveManager has been configured for
Is there a way I can use the Maven assembly plugin to do this? As much as I looked, I found a bunch of examples for creating zips, jars and wars, but none that match what I want to do. Or did I miss something?
Is there another plugin I could use?
As a side note, I tried using a second execution of the YUI plugin to create a second JS file, but I had no luck creating two files. I also tried declaring two YUI plugins, with no luck again. I think that's not possible either.
Cheers,
Despot
The answer lies in the wro4j library. For a more precise setup see:
Javascript and CSS files combining in Maven build WITHOUT compression, minification etc
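As a starting point, a rough wro4j-maven-plugin configuration might look like the following sketch (version, paths and the group name are placeholders; the group itself is something you would define in wro.xml, and the option names are given to the best of my knowledge rather than taken from the linked setup):
<plugin>
  <groupId>ro.isdc.wro4j</groupId>
  <artifactId>wro4j-maven-plugin</artifactId>
  <version>1.7.9</version>
  <executions>
    <execution>
      <phase>process-resources</phase>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <!-- "allForDev" would be a group defined in wro.xml listing the js files to concatenate -->
    <targetGroups>allForDev</targetGroups>
    <!-- plain concatenation, no minification/obfuscation -->
    <minimize>false</minimize>
    <wroFile>${basedir}/src/main/webapp/WEB-INF/wro.xml</wroFile>
    <contextFolder>${basedir}/src/main/webapp/</contextFolder>
    <destinationFolder>${project.build.directory}/${project.build.finalName}/</destinationFolder>
  </configuration>
</plugin>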
If, like me, you do not want to hassle with the wro4j plugin (from despot's answer) and want to prototype quickly, you can actually use the old maven-antrun-plugin with a configuration similar to the following:
<plugin>
<artifactId>maven-antrun-plugin</artifactId>
<version>1.8</version>
<executions>
<execution>
<phase>generate-sources</phase>
<configuration>
<target>
<property name="root" location=""/>
<property name="jsRoot" location="${root}/src/main/webapp/js"/>
<property name="jsAggregated" location="${root}/src/main/webapp/all.js"/>
<echo message="Aggregating js files from ${jsRoot} into ${jsAggregated}"/>
<concat destfile="${jsAggregated}" encoding="UTF-8" >
<fileset dir="${jsRoot}" includes="*.*"/>
<filelist dir="${jsRoot}/.." files="client.js"/>
</concat>
</target>
</configuration>
<goals>
<goal>run</goal>
</goals>
</execution>
</executions>
</plugin>
This one concatenates all files in the folder <module>/src/main/webapp/js/*.* (but not files from subfolders). Then it adds client.js to the end to make sure everything it needs is already available (AFAIK <fileset> has undefined order).
The resulting concatenated file then resides at <module>/src/main/webapp/all.js.
I know the Maven phase, paths and other details may not be "correct"; this is just a quick example to show an alternative, non-invasive way to do it.
