Unable to create repository inside Camel K container - apache-camel

I have Camel K installed in a Kubernetes cluster, and in it there is a route that checks out a Git repository, as follows:
// camel-k: language=java dependency=camel-git
import org.apache.camel.builder.RouteBuilder;

public class Gitlab extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("git:///tmp/config-service?remotePath=https://xxx&branchName=master&type=branch&username=xxx&password=xxx")
            .transform().constant("checked-out").to("log:info");
    }
}
But this gives the permission error below. When I logged into the pod and tried to create a file or folder inside the deployment directory, I got a permission error as well. Is there any way to set the user inside the container with traits or some other method?
2020-10-06 11:31:46.667 ERROR [main] SystemReader - Creating XDG_CONFIG_HOME directory /deployments/?/.config failed
java.nio.file.AccessDeniedException: /deployments/?
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:90) ~[?:?]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]
at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:389) ~[?:?]
at java.nio.file.Files.createDirectory(Files.java:689) ~[?:?]
at java.nio.file.Files.createAndCheckIsDirectory(Files.java:796) ~[?:?]
at java.nio.file.Files.createDirectories(Files.java:782) ~[?:?]

There are two parts to fixing this problem.
First, to work around the XDG_CONFIG_HOME problem, you can set an environment variable for the integration like:
kamel run -e XDG_CONFIG_HOME=/tmp/.config Gitlab.java
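Depending on your Camel K version, you may also be able to declare the variable directly in the source file via the modeline, so it travels with the integration (an assumption; check that your version supports the env modeline option):
// camel-k: language=java dependency=camel-git env=XDG_CONFIG_HOME=/tmp/.config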
Second, the camel-git consumer can only do simple tasks on an existing Git repository, such as listing branches, tags, and commits. To clone a repository you need to use the producer, for example:
to("git:///tmp/config-service?remotePath=https://github.com/apache/camel-k.git&branchName=master&operation=clone")
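For example, a complete integration that clones a repository once at startup could look like the sketch below; the timer trigger and the GitHub URL are illustrative, not from the original question:

// camel-k: language=java dependency=camel-git
import org.apache.camel.builder.RouteBuilder;

public class GitClone extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Fire a single timer event at startup, then clone via the git producer.
        from("timer:clone?repeatCount=1")
            .to("git:///tmp/config-service?remotePath=https://github.com/apache/camel-k.git&branchName=master&operation=clone")
            .transform().constant("cloned")
            .to("log:info");
    }
}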

Related

Flink 1.16 Application Starting Issue from Flink Cluster

I have updated the Flink version to 1.16.0, and since then the application itself is not starting; I am getting the error below. We are running Flink in application mode, which was working fine in 1.15.
<flink.version>1.16.0</flink.version>
Pipeline file
./bin/flink run-application --target kubernetes-application -Dkubernetes.cluster-id=sqs-signal-ingress-cluster -Dkubernetes.namespace=dev-sqs -Dkubernetes.jobmanager.service-account=flink-service-account -Dkubernetes.container.image=acrccsdev.azurecr.io/ubs-changes-oct21-sqs-signal-ingress:2022-10-21.11_27294_sqs-signal-ingress_ubs-changes-oct21 -Dkubernetes.container.image.pull-secrets=dev-ccs-acr local:///opt/flink/usrlib/signal-ingress-job.jar
This is the error I am getting:
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.apache.flink.connector.kafka.sink.KafkaSink]: Factory method 'kafkaSinkFsmStates' threw exception; nested exception is java.lang.LinkageError: loader constraint violation: loader org.apache.flink.util.ChildFirstClassLoader @10feca44 wants to load class org.apache.kafka.clients.producer.ProducerRecord. A different class with the same name was previously loaded by 'app'. (org.apache.kafka.clients.producer.ProducerRecord is in unnamed module of loader 'app')
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:185) ~[signal-ingress-job.jar:0.0.1-SNAPSHOT]
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:653) ~[signal-ingress-job.jar:0.0.1-SNAPSHOT]
... 53 common frames omitted
Caused by: java.lang.LinkageError: loader constraint violation: loader org.apache.flink.util.ChildFirstClassLoader @10feca44 wants to load class org.apache.kafka.clients.producer.ProducerRecord. A different class with the same name was previously loaded by 'app'. (org.apache.kafka.clients.producer.ProducerRecord is in unnamed module of loader 'app')
at java.base/java.lang.ClassLoader.defineClass1(Native Method) ~[na:na]
at java.base/java.lang.ClassLoader.defineClass(Unknown Source) ~[na:na]
How can I run the Flink job after updating to version 1.16?
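The LinkageError says that org.apache.kafka.clients.producer.ProducerRecord is reachable through two class loaders: the 'app' loader that loaded the fat jar and Flink's ChildFirstClassLoader. One mitigation that is sometimes suggested (an assumption here, not verified against this exact setup) is to force Kafka classes to resolve parent-first in flink-conf.yaml, so only one definition of the class is ever used:
classloader.parent-first-patterns.additional: org.apache.kafka
Alternatively, make sure kafka-clients is packaged in only one place, for example by marking the Kafka connector dependencies as provided in the job's build.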

flink 1.12.1 example application failing on a single node yarn cluster

I am trying out the Flink example as explained in the Flink docs on a single-node YARN cluster.
As mentioned in this discussion, HADOOP_CONF_DIR is also set as below before executing the YARN command.
export HADOOP_CONF_DIR=/etc/hadoop/conf
On executing the below command
ubuntu@vrni-platform:~/build-target/flink$ ./bin/flink run-application -t yarn-application ./examples/streaming/TopSpeedWindowing.jar
It fails with the errors below.
The program finished with the following exception:
org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy Yarn Application Cluster
at org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:465)
at org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:67)
at org.apache.flink.client.cli.CliFrontend.runApplication(CliFrontend.java:213)
at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1061)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1136)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1136)
Caused by: org.apache.flink.yarn.YarnClusterDescriptor$YarnDeploymentException: The YARN application unexpectedly switched to state FAILED during deployment.
Diagnostics from YARN: Application application_1614159836384_0045 failed 1 times (global limit =2; local limit is =1) due to AM Container for appattempt_1614159836384_0045_000001 exited with exitCode: -1000
Failing this attempt.Diagnostics: [2021-02-24 16:19:39.409]File file:/home/ubuntu/.flink/application_1614159836384_0045/flink-dist_2.12-1.12.1.jar does not exist
java.io.FileNotFoundException: File file:/home/ubuntu/.flink/application_1614159836384_0045/flink-dist_2.12-1.12.1.jar does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442)
at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:269)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:67)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:414)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:411)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:411)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:242)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:235)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:223)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I have set the log level to DEBUG, and I do see that flink-dist_2.12-1.12.1.jar is copied to /home/ubuntu/.flink/application_1614159836384_0045.
2021-02-24 16:19:37,768 DEBUG org.apache.flink.yarn.YarnApplicationFileUploader [] - Got modification time 1614183577000 from remote path file:/home/ubuntu/.flink/application_1614159836384_0045/TopSpeedWindowing.jar
2021-02-24 16:19:37,769 DEBUG org.apache.flink.yarn.YarnApplicationFileUploader [] - Copying from file:/home/ubuntu/build-target/flink/lib/flink-dist_2.12-1.12.1.jar to file:/home/ubuntu/.flink/application_1614159836384_0045/flink-dist_2.12-1.12.1.jar with replication factor 1
I have placed the entire DEBUG logs here.
The NodeManager logs have warnings like the ones below.
2021-02-24 16:36:34,219 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1614159836384_0047
2021-02-24 16:36:34,220 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1614159836384_0047_01_000001
2021-02-24 16:36:34,222 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /var/lib/hadoop-yarn/cache/yarn/nm-local-dir/nmPrivate/container_1614159836384_0047_01_000001.tokens
2021-02-24 16:36:34,222 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Initializing user ubuntu
2021-02-24 16:36:34,224 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying from /var/lib/hadoop-yarn/cache/yarn/nm-local-dir/nmPrivate/container_1614159836384_0047_01_000001.tokens to /var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/ubuntu/appcache/application_1614159836384_0047/container_1614159836384_0047_01_000001.tokens
2021-02-24 16:36:34,224 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Localizer CWD set to /var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/ubuntu/appcache/application_1614159836384_0047 = file:/var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/ubuntu/appcache/application_1614159836384_0047
2021-02-24 16:36:34,247 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer: Disk Validator: yarn.nodemanager.disk-validator is loaded.
2021-02-24 16:36:34,268 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: { file:/home/ubuntu/.flink/application_1614159836384_0047/flink-dist_2.12-1.12.1.jar, 1614184593000, FILE, null } failed: File file:/home/ubuntu/.flink/application_1614159836384_0047/flink-dist_2.12-1.12.1.jar does not exist
java.io.FileNotFoundException: File file:/home/ubuntu/.flink/application_1614159836384_0047/flink-dist_2.12-1.12.1.jar does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442)
at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:269)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:67)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:414)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:411)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:411)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:242)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:235)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:223)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
The entire NodeManager logs are here.
Can someone let me know what is going wrong? Does Flink not support a single-node YARN cluster for development?
Flink Version 1.12.1
There was a configuration issue in my setup: hadoop-yarn-nodemanager runs as the yarn user.
ubuntu@vrni-platform:/tmp/flink$ ps -ef | grep nodemanager
yarn 4953 1 2 05:53 ? 00:11:26 /usr/lib/jvm/java-8-openjdk/bin/java -Dproc_nodemanager -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/heap-dumps/yarn -XX:+ExitOnOutOfMemoryError -Dyarn.log.dir=/var/log/hadoop-yarn -Dyarn.log.file=hadoop-yarn-nodemanager-vrni-platform.log -Dyarn.home.dir=/usr/lib/hadoop-yarn -Dyarn.root.logger=INFO,console -Djava.library.path=/usr/lib/hadoop/lib/native -Xmx512m -Dhadoop.log.dir=/var/log/hadoop-yarn -Dhadoop.log.file=hadoop-yarn-nodemanager-vrni-platform.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=yarn -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.yarn.server.nodemanager.NodeManager
I was executing the ./bin/flink command as the ubuntu user, and in my setup the yarn user does not have permission to write to ubuntu's home folder.
ubuntu@vrni-platform:/tmp/flink$ echo ~ubuntu
/home/ubuntu
ubuntu@vrni-platform:/tmp/flink$ echo ~yarn
/var/lib/hadoop-yarn
It appears Flink needs permission to write to the submitting user's home directory to create a .flink folder, even when the job is submitted to YARN. It works fine for me if I run Flink as the yarn user in my setup.
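If switching users is impractical, Flink also exposes a yarn.staging-directory option that controls where these per-application files are staged (it is documented for 1.12, though whether it fully avoids the home-directory write is an assumption); pointing it at a location writable by both users might help, e.g.:
./bin/flink run-application -t yarn-application -Dyarn.staging-directory=file:///tmp/flink-staging ./examples/streaming/TopSpeedWindowing.jar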

Exception attempting to inject Remote ejb-ref when running multiple tests with Arquillian

I have a number of test classes that are run with Arquillian (1.0.2.Final) using the 'arquillian-glassfish-embedded-3.1' container (1.0.0.CR3).
If I run any of the test classes in isolation, they run as expected; if I attempt to run multiple test classes (a TestSuite), I run into problems injecting EJBs into the classes.
java.lang.RuntimeException: Could not inject members
Caused by: java.lang.IllegalStateException: Exception attempting to inject Remote ejb-ref name=PackageManagerBean,Remote 3.x interface =com.dcp.pkg.PackageManager resolved to intra-app EJB PackageManagerBean in module test,ejb-link=PackageManagerBean,lookup=,mappedName=,jndi-name=PackageManagerBean,refType=Session into class com.dcp.transmission.TransmissionManagerBeanTest: Lookup failed for 'java:comp/env/PackageManagerBean' in SerialContext[myEnv={java.naming.factory.initial=com.sun.enterprise.naming.impl.SerialInitContextFactory, java.naming.factory.state=com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl, java.naming.factory.url.pkgs=com.sun.enterprise.naming}
Caused by: com.sun.enterprise.container.common.spi.util.InjectionException: Exception attempting to inject Remote ejb-ref name=PackageManagerBean,Remote 3.x interface =com.dcp.pkg.PackageManager resolved to intra-app EJB PackageManagerBean in module test,ejb-link=PackageManagerBean,lookup=,mappedName=,jndi-name=PackageManagerBean,refType=Session into class com.dcp.transmission.TransmissionManagerBeanTest: Lookup failed for 'java:comp/env/PackageManagerBean' in SerialContext[myEnv={java.naming.factory.initial=com.sun.enterprise.naming.impl.SerialInitContextFactory, java.naming.factory.state=com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl, java.naming.factory.url.pkgs=com.sun.enterprise.naming}
Caused by: javax.naming.NamingException: Lookup failed for 'java:comp/env/PackageManagerBean' in SerialContext[myEnv={java.naming.factory.initial=com.sun.enterprise.naming.impl.SerialInitContextFactory, java.naming.factory.state=com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl, java.naming.factory.url.pkgs=com.sun.enterprise.naming} [Root exception is javax.naming.NamingException: Exception resolving Ejb for 'Remote ejb-ref name=PackageManagerBean,Remote 3.x interface =com.dcp.pkg.PackageManager resolved to intra-app EJB PackageManagerBean in module test,ejb-link=PackageManagerBean,lookup=,mappedName=,jndi-name=PackageManagerBean,refType=Session' . Actual (possibly internal) Remote JNDI name used for lookup is 'PackageManagerBean#com.dcp.pkg.PackageManager' [Root exception is javax.naming.NamingException: Lookup failed for 'PackageManagerBean#com.dcp.pkg.PackageManager' in SerialContext[myEnv={java.naming.factory.initial=com.sun.enterprise.naming.impl.SerialInitContextFactory, java.naming.factory.state=com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl, java.naming.factory.url.pkgs=com.sun.enterprise.naming} [Root exception is javax.naming.NamingException: ejb ref resolution error for remote business interfacecom.dcp.pkg.PackageManager [Root exception is java.lang.IllegalArgumentException: argument type mismatch]]]]
Caused by: javax.naming.NamingException: Exception resolving Ejb for 'Remote ejb-ref name=PackageManagerBean,Remote 3.x interface =com.dcp.pkg.PackageManager resolved to intra-app EJB PackageManagerBean in module test,ejb-link=PackageManagerBean,lookup=,mappedName=,jndi-name=PackageManagerBean,refType=Session' . Actual (possibly internal) Remote JNDI name used for lookup is 'PackageManagerBean#com.dcp.pkg.PackageManager' [Root exception is javax.naming.NamingException: Lookup failed for 'PackageManagerBean#com.dcp.pkg.PackageManager' in SerialContext[myEnv={java.naming.factory.initial=com.sun.enterprise.naming.impl.SerialInitContextFactory, java.naming.factory.state=com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl, java.naming.factory.url.pkgs=com.sun.enterprise.naming} [Root exception is javax.naming.NamingException: ejb ref resolution error for remote business interfacecom.dcp.pkg.PackageManager [Root exception is java.lang.IllegalArgumentException: argument type mismatch]]]
Caused by: javax.naming.NamingException: Lookup failed for 'PackageManagerBean#com.dcp.pkg.PackageManager' in SerialContext[myEnv={java.naming.factory.initial=com.sun.enterprise.naming.impl.SerialInitContextFactory, java.naming.factory.state=com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl, java.naming.factory.url.pkgs=com.sun.enterprise.naming} [Root exception is javax.naming.NamingException: ejb ref resolution error for remote business interfacecom.dcp.pkg.PackageManager [Root exception is java.lang.IllegalArgumentException: argument type mismatch]]
Caused by: javax.naming.NamingException: ejb ref resolution error for remote business interfacecom.dcp.pkg.PackageManager [Root exception is java.lang.IllegalArgumentException: argument type mismatch]
Caused by: java.lang.IllegalArgumentException: argument type mismatch
The Package Manager Bean is defined as follows:
@Stateless(mappedName = "PackageManagerBean")
@Remote({ PackageManager.class })
@Local({ PackageManagerLocal.class })
public class PackageManagerBean implements PackageManager {
}
The Package Manager is injected into several of the test classes as per the example below:
@RunWith(Arquillian.class)
public class TransmissionManagerBeanTest {

    @Deployment
    public static Archive<?> createDeployment() {
        WebArchive war = ShrinkWrap.create(WebArchive.class, "test.war")
            .addPackages(true, TransmissionManager.class.getPackage(), Search.class.getPackage(),
                PackageManager.class.getPackage(), SiteManagerBean.class.getPackage())
            .addAsResource("test-persistence.xml", "META-INF/persistence.xml")
            .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml");
        return war;
    }

    @EJB
    TransmissionManager transmissionManager;

    @EJB
    PackageManager packageManager;

    @EJB
    SiteManager siteManager;

    @PersistenceContext
    EntityManager entityManager;

    @Inject
    UserTransaction userTransaction;
}
I do not appear to be having problems with any of the other EJBs.
Does anyone have any idea what the problem is and how I can get this working?
From what I can tell, it appears to be an issue with cleaning up the resources deployed in the embedded GlassFish container between test classes. At this point I don't know whether it is Arquillian, the embedded GlassFish container, or my code that causes the problem.
I have found a workaround: by running each test class in its own JVM, I avoid the issues above. It does add some overhead (instead of reusing the embedded container for all of the test classes, the container gets torn down and restarted for each one), but it does allow me to run all of my tests in one go.
I use Maven to run the tests, with the following maven-surefire-plugin configuration to ensure each test class runs in its own JVM:
<configuration>
    <forkCount>1</forkCount>
    <reuseForks>false</reuseForks>
    ....
</configuration>
For versions of maven-surefire-plugin older than 2.14 (which introduced forkCount/reuseForks), the equivalent setting is forkMode=always.

Tomcat 6 customized Realm

I'm trying to write my own Realm to authenticate my users.
I have written a class extending org.apache.catalina.realm.RealmBase, compiled it into a .jar file, and put it in the /lib directory.
Then I added this to server.xml:
<Realm className="wstest.tomcat.security.MyRealm"
resourceName="myrealm"/>
Tomcat doesn't seem to "see" my new jar...
When I start Tomcat I get:
ERROR main org.apache.commons.digester.Digester - Begin event threw exception
java.lang.ClassNotFoundException: wstest.tomcat.security.MyRealm
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
at org.apache.tomcat.util.digester.ObjectCreateRule.begin(ObjectCreateRule.java:205)
at org.apache.tomcat.util.digester.Rule.begin(Rule.java:153)
at org.apache.tomcat.util.digester.Digester.startElement(Digester.java:1356)
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501)
at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:179)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1343)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2755)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:511)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:808)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119)
at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205)
at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522)
at org.apache.tomcat.util.digester.Digester.parse(Digester.java:1642)
at org.apache.catalina.startup.Catalina.load(Catalina.java:526)
at org.apache.catalina.startup.Catalina.load(Catalina.java:560)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:261)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413)
Which /lib folder have you used? The one in your web project or the one in Tomcat?
I'm a Tomcat greenhorn myself, but as far as I know you should place your JAR in the latter.
server.xml is loaded before the webapps, so Tomcat needs to find your Realm implementation class before it gets around to loading your application's JAR files. Place the JAR in tomcat/lib and that should fix the problem.
You also need to be aware of one gotcha: don't log anything in the Realm constructor. The containerLog field is not yet set at that point, so you'll get a nasty NPE; loadInternal() is what sets containerLog.
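For reference, a minimal Tomcat 6 Realm might look like the sketch below; the lookup logic is a placeholder, logging stays out of the constructor for the reason above, and the GenericPrincipal constructor shown is the Tomcat 6 signature:

package wstest.tomcat.security;

import java.security.Principal;
import java.util.Collections;
import org.apache.catalina.realm.GenericPrincipal;
import org.apache.catalina.realm.RealmBase;

public class MyRealm extends RealmBase {

    @Override
    protected String getName() {
        return "myrealm";
    }

    @Override
    protected String getPassword(String username) {
        // Placeholder: look up the stored credential for this user.
        // Safe to log here; containerLog is set by the time the realm is in use.
        containerLog.debug("Looking up password for " + username);
        return null;
    }

    @Override
    protected Principal getPrincipal(String username) {
        // Placeholder: build a principal carrying the user's roles.
        return new GenericPrincipal(this, username, null,
                Collections.singletonList("user"));
    }
}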

Integration tests failing in Grails & App Engine

I am using Grails with the App Engine plugin and JPA persistence. When running
grails test-app
my unit tests run perfectly, but I receive the error below when the integration tests start.
Is this a known issue with the app-engine plugin?
Starting integration tests ...
[copy] Copying 1 file to /home/matthew/.grails/1.1.1/projects/test-gae-jpa
[copy] Copying 1 file to /home/matthew/.grails/1.1.1/projects/test-gae-jpa
Error executing script TestApp: null
java.lang.ExceptionInInitializerError
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:169)
at AppEngineGrailsPlugin$_closure1.class$(AppEngineGrailsPlugin.groovy)
at AppEngineGrailsPlugin$_closure1.$get$$class$org$grails$appengine$AppEngineEntityManagerFactory(AppEngineGrailsPlugin.groovy)
at AppEngineGrailsPlugin$_closure1.doCall(AppEngineGrailsPlugin.groovy:70)
at AppEngineGrailsPlugin$_closure1.doCall(AppEngineGrailsPlugin.groovy)
at grails.spring.BeanBuilder.invokeBeanDefiningClosure(BeanBuilder.java:651)
at grails.spring.BeanBuilder.beans(BeanBuilder.java:501)
at grails.spring.BeanBuilder.invokeMethod(BeanBuilder.java:447)
at _GrailsBootstrap_groovy$_run_closure2_closure13.doCall(_GrailsBootstrap_groovy:86)
at _GrailsBootstrap_groovy$_run_closure2_closure13.doCall(_GrailsBootstrap_groovy)
at _GrailsSettings_groovy$_run_closure10.doCall(_GrailsSettings_groovy:274)
at _GrailsSettings_groovy$_run_closure10.call(_GrailsSettings_groovy)
at _GrailsBootstrap_groovy$_run_closure2.doCall(_GrailsBootstrap_groovy:84)
at _GrailsBootstrap_groovy$_run_closure7.doCall(_GrailsBootstrap_groovy:142)
at _GrailsTest_groovy$_run_closure7.doCall(_GrailsTest_groovy:249)
at _GrailsTest_groovy$_run_closure7.doCall(_GrailsTest_groovy)
at _GrailsTest_groovy$_run_closure1_closure19.doCall(_GrailsTest_groovy:110)
at _GrailsTest_groovy$_run_closure1.doCall(_GrailsTest_groovy:96)
at TestApp$_run_closure1.doCall(TestApp.groovy:66)
at gant.Gant$_dispatch_closure4.doCall(Gant.groovy:324)
at gant.Gant$_dispatch_closure6.doCall(Gant.groovy:334)
at gant.Gant$_dispatch_closure6.doCall(Gant.groovy)
at gant.Gant.withBuildListeners(Gant.groovy:344)
at gant.Gant.this$2$withBuildListeners(Gant.groovy)
at gant.Gant$this$2$withBuildListeners.callCurrent(Unknown Source)
at gant.Gant.dispatch(Gant.groovy:334)
at gant.Gant.this$2$dispatch(Gant.groovy)
at gant.Gant.invokeMethod(Gant.groovy)
at gant.Gant.processTargets(Gant.groovy:495)
at gant.Gant.processTargets(Gant.groovy:480)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:90)
Caused by: java.lang.NullPointerException
at org.datanucleus.jpa.EntityManagerFactoryImpl.initialisePMF(EntityManagerFactoryImpl.java:452)
at org.datanucleus.jpa.EntityManagerFactoryImpl.<init>(EntityManagerFactoryImpl.java:355)
at org.datanucleus.store.appengine.jpa.DatastoreEntityManagerFactory.<init>(DatastoreEntityManagerFactory.java:63)
at org.datanucleus.store.appengine.jpa.DatastorePersistenceProvider.createEntityManagerFactory(DatastorePersistenceProvider.java:35)
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:51)
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:33)
at javax.persistence.Persistence$createEntityManagerFactory.call(Unknown Source)
at org.grails.appengine.AppEngineEntityManagerFactory.<clinit>(AppEngineEntityManagerFactory.groovy:13)
... 32 more
Process finished with exit code 1
