I'm having a problem that is probably quite simple, but I haven't found the solution yet.
I'm trying to launch my local GAE server (through Eclipse's Run Configurations) on a specific port (8888 in my case), but it only starts on the default port 8080, and after trying different options I've had no luck.
Any ideas?
Run this from the command line: mvn help:describe -Dcmd=appengine:devserver -Ddetail. It lists all the available options for the appengine:devserver goal.
The one that you want is:
mvn appengine:devserver -Dappengine.port=8888
The Google Plugin for Eclipse (GPE) allows you to specify the port number on the second tab ('Server') of a run configuration.
If you're not using it (which you probably should be), you can configure the port directly in your pom like this:
<plugin>
  <groupId>com.google.appengine</groupId>
  <artifactId>appengine-maven-plugin</artifactId>
  <version>${gae.version}</version>
  <configuration>
    <port>8080</port>
    <address>0.0.0.0</address>
  </configuration>
</plugin>
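With that in place, starting the dev server from the command line should pick up the configured port (a minimal usage sketch, assuming you change <port> to 8888 to match your case):
mvn appengine:devserver
# the dev server should now listen on http://localhost:8888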
What have you tried?
Have you tried adding the --port 8888 option to your run configuration?
If you are following this tutorial:
https://cloud.google.com/appengine/docs/standard/java/quickstart
it seems that the documentation has changed; see:
https://cloud.google.com/appengine/docs/flexible/java/maven
Use <host> instead of <address>.
Here is how you bind the host address for Docker:
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>appengine-maven-plugin</artifactId>
  <version>1.3.2</version>
  <configuration>
    <enableJarClasses>false</enableJarClasses>
    <port>8080</port>
    <host>0.0.0.0</host>
    <admin_host>0.0.0.0</admin_host>
  </configuration>
</plugin>
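With the com.google.cloud.tools plugin, the local dev server goal is appengine:run; a small usage sketch of the configuration above:
mvn appengine:run
# with <host>0.0.0.0</host>, the dev server should be reachable
# from outside the Docker container as well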
In Karaf's org.apache.karaf.features.cfg file I have added:
featuresRepositories=mvn:org.apache.cxf.karaf/apache-cxf/3.0.8/xml/features
featuresBoot = cxf-jaxws
The cxf feature can be fetched and installed when Karaf starts with a network connection.
But it fails without a connection; how can I pre-install the cxf feature?
This is likely far from the most optimal solution (I'd love to hear about better ones), but you could create an offline-repository project using the karaf-feature-archetype and configure the karaf-maven-plugin with something like the following configuration:
<plugin>
  <groupId>org.apache.karaf.tooling</groupId>
  <artifactId>karaf-maven-plugin</artifactId>
  <configuration>
    <startLevel>50</startLevel>
    <aggregateFeatures>true</aggregateFeatures>
    <checkDependencyChange>true</checkDependencyChange>
    <failOnDependencyChange>false</failOnDependencyChange>
    <logDependencyChanges>true</logDependencyChanges>
    <overwriteChangedDependencies>true</overwriteChangedDependencies>
  </configuration>
  <executions>
    <execution>
      <id>features-add-to-repo</id>
      <phase>generate-resources</phase>
      <goals>
        <goal>features-add-to-repository</goal>
      </goals>
      <configuration>
        <descriptors>
          <!-- Feature repository paths -->
          <descriptor>mvn:groupId/artifactId/version/xml/features</descriptor>
        </descriptors>
        <features>
          <!-- Features and their artifacts + dependencies to add to the offline repository -->
          <feature>featureName</feature>
          <feature>featureName/version</feature>
        </features>
        <repository>target/offline-repository</repository>
      </configuration>
    </execution>
  </executions>
</plugin>
When packaging the project, i.e. with mvn clean install (in an environment with online access), it'll generate an offline repository under the target folder. Copy that to your offline environment and tell Karaf to use it by adding it to org.ops4j.pax.url.mvn.defaultRepositories in the file org.ops4j.pax.url.mvn.cfg, e.g. file:${user.home}/offline-repository#snapshots#id=local if it's located in the home directory.
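As a concrete sketch of that cfg entry (the etc/ location is Karaf's usual layout; adjust the path to your setup):
# etc/org.ops4j.pax.url.mvn.cfg
org.ops4j.pax.url.mvn.defaultRepositories = \
    file:${user.home}/offline-repository#snapshots#id=local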
The features.xml itself can be empty; this is just a way to use the karaf-maven-plugin, not to create an actual feature repository.
Just be careful if you need to create a new version of the offline repository to replace the old one: if the new version is missing any artifacts that are currently installed in Karaf, it can cause issues when trying to remove/uninstall them.
I have a Dropwizard application using Flink to read from Kafka, but the application blows up with this exception when I start it:
java -jar my-app.jar server my-config.yaml
[2018-01-04T01:04:24,577Z](main)([]) INFO - FlinkMiniCluster - Stopping FlinkMiniCluster.
[2018-01-04T01:04:24,591Z](main)([]) WARN - ROOT - unavailable
! com.typesafe.config.ConfigException$UnresolvedSubstitution: reference.conf # jar:file:/my-app.jar!/reference.conf: 804: Could not resolve substitution to a value: ${akka.stream.materializer}
at com.typesafe.config.impl.ConfigReference.resolveSubstitutions(ConfigReference.java:108)
at com.typesafe.config.impl.ResolveContext.realResolve(ResolveContext.java:179)
at com.typesafe.config.impl.ResolveContext.resolve(ResolveContext.java:142)
at com.typesafe.config.impl.SimpleConfigObject$ResolveModifier.modifyChildMayThrow(SimpleConfigObject.java:379)
at com.typesafe.config.impl.SimpleConfigObject.modifyMayThrow(SimpleConfigObject.java:312)
at com.typesafe.config.impl.SimpleConfigObject.resolveSubstitutions(SimpleConfigObject.java:398)
at com.typesafe.config.impl.ResolveContext.realResolve(ResolveContext.java:179)
at com.typesafe.config.impl.ResolveContext.resolve(ResolveContext.java:142)
at com.typesafe.config.impl.SimpleConfigObject$ResolveModifier.modifyChildMayThrow(SimpleConfigObject.java:379)
at com.typesafe.config.impl.SimpleConfigObject.modifyMayThrow(SimpleConfigObject.java:312)
at com.typesafe.config.impl.SimpleConfigObject.resolveSubstitutions(SimpleConfigObject.java:398)
at com.typesafe.config.impl.ResolveContext.realResolve(ResolveContext.java:179)
at com.typesafe.config.impl.ResolveContext.resolve(ResolveContext.java:142)
at com.typesafe.config.impl.SimpleConfigObject$ResolveModifier.modifyChildMayThrow(SimpleConfigObject.java:379)
at com.typesafe.config.impl.SimpleConfigObject.modifyMayThrow(SimpleConfigObject.java:312)
at com.typesafe.config.impl.SimpleConfigObject.resolveSubstitutions(SimpleConfigObject.java:398)
at com.typesafe.config.impl.ResolveContext.realResolve(ResolveContext.java:179)
at com.typesafe.config.impl.ResolveContext.resolve(ResolveContext.java:142)
at com.typesafe.config.impl.SimpleConfigObject$ResolveModifier.modifyChildMayThrow(SimpleConfigObject.java:379)
at com.typesafe.config.impl.SimpleConfigObject.modifyMayThrow(SimpleConfigObject.java:312)
at com.typesafe.config.impl.SimpleConfigObject.resolveSubstitutions(SimpleConfigObject.java:398)
at com.typesafe.config.impl.ResolveContext.realResolve(ResolveContext.java:179)
at com.typesafe.config.impl.ResolveContext.resolve(ResolveContext.java:142)
at com.typesafe.config.impl.SimpleConfigObject$ResolveModifier.modifyChildMayThrow(SimpleConfigObject.java:379)
at com.typesafe.config.impl.SimpleConfigObject.modifyMayThrow(SimpleConfigObject.java:312)
at com.typesafe.config.impl.SimpleConfigObject.resolveSubstitutions(SimpleConfigObject.java:398)
at com.typesafe.config.impl.ResolveContext.realResolve(ResolveContext.java:179)
at com.typesafe.config.impl.ResolveContext.resolve(ResolveContext.java:142)
at com.typesafe.config.impl.ResolveContext.resolve(ResolveContext.java:231)
at com.typesafe.config.impl.SimpleConfig.resolveWith(SimpleConfig.java:74)
at com.typesafe.config.impl.SimpleConfig.resolve(SimpleConfig.java:64)
at com.typesafe.config.impl.SimpleConfig.resolve(SimpleConfig.java:59)
at com.typesafe.config.impl.SimpleConfig.resolve(SimpleConfig.java:37)
at com.typesafe.config.impl.ConfigImpl$1.call(ConfigImpl.java:374)
at com.typesafe.config.impl.ConfigImpl$1.call(ConfigImpl.java:367)
at com.typesafe.config.impl.ConfigImpl$LoaderCache.getOrElseUpdate(ConfigImpl.java:65)
at com.typesafe.config.impl.ConfigImpl.computeCachedConfig(ConfigImpl.java:92)
at com.typesafe.config.impl.ConfigImpl.defaultReference(ConfigImpl.java:367)
at com.typesafe.config.ConfigFactory.defaultReference(ConfigFactory.java:413)
at akka.actor.ActorSystem$Settings.<init>(ActorSystem.scala:307)
at akka.actor.ActorSystemImpl.<init>(ActorSystem.scala:683)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:245)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:288)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:263)
at akka.actor.ActorSystem$.create(ActorSystem.scala:191)
at org.apache.flink.runtime.akka.AkkaUtils$.createActorSystem(AkkaUtils.scala:106)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster.startJobManagerActorSystem(FlinkMiniCluster.scala:300)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster.singleActorSystem$lzycompute$1(FlinkMiniCluster.scala:329)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster.org$apache$flink$runtime$minicluster$FlinkMiniCluster$$singleActorSystem$1(FlinkMiniCluster.scala:329)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster$$anonfun$1.apply(FlinkMiniCluster.scala:343)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster$$anonfun$1.apply(FlinkMiniCluster.scala:341)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster.start(FlinkMiniCluster.scala:341)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster.start(FlinkMiniCluster.scala:323)
at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:107) ...
My Flink stream is pretty basic:
environment
    .addSource(new FlinkKafkaConsumer010<>(...))
    .name("source name 1")
    .union(environment.addSource(new FlinkKafkaConsumer010<>(...))
        .name("source name 2"))
    .map(new MyMapFunction())
    .addSink(new PrintSinkFunction<>())
    .name("Sink: Print");
Strangely enough, the application runs just fine and successfully creates a FlinkMiniCluster when debugging in IDEA.
I'm using Flink 1.4 and did not start a Flink job manager when running from IDEA or the command line.
Is there some configuration I need to set up in order to run from the command line?
FYI: I determined that the Akka dependencies from Flink were not being recognized at runtime, so I manually added them to my application's pom:
<dependency>
  <groupId>com.typesafe.akka</groupId>
  <artifactId>akka-actor_2.11</artifactId>
  <version>${akka.version}</version>
</dependency>
<dependency>
  <groupId>com.typesafe.akka</groupId>
  <artifactId>akka-protobuf_2.11</artifactId>
  <version>${akka.version}</version>
</dependency>
<dependency>
  <groupId>com.typesafe.akka</groupId>
  <artifactId>akka-stream_2.11</artifactId>
  <version>${akka.version}</version>
</dependency>
As CPS said, the reference.conf file in the shaded JAR has to be a merge of the separate Akka modules' reference.conf files. To get it, the following shade transformer has to be added to the shade plugin configuration:
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>3.0.0</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <artifactSet>
              ...
            </artifactSet>
            <transformers>
              <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                <resource>reference.conf</resource>
              </transformer>
            </transformers>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
I ran into a small constraint: I also use the ManifestResourceTransformer to set the mainClass, and I had to order the two transformers with the ManifestResourceTransformer first and the AppendingTransformer second (otherwise the ManifestResourceTransformer interferes with the entries produced by the AppendingTransformer).
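For reference, a minimal sketch of the ordering that worked for me (the main class name is a placeholder):
<transformers>
  <!-- must come first, otherwise it interferes with the appended reference.conf -->
  <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
    <mainClass>com.example.MyApp</mainClass> <!-- hypothetical main class -->
  </transformer>
  <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
    <resource>reference.conf</resource>
  </transformer>
</transformers>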
This answer to a similar question solves the issue. It has nothing to do with Flink; it is about how Akka configuration is handled. From the Akka documentation:
Akka’s configuration approach relies heavily on the notion of every module/jar having its own reference.conf file, all of these will be discovered by the configuration and loaded. Unfortunately this also means that if you put/merge multiple jars into the same jar, you need to merge all the reference.confs as well. Otherwise all defaults will be lost and Akka will not function.
Using the shade plugin to make the jar solves the problem.
It might be caused by forgetting to use the build-jar build profile:
mvn clean install -Pbuild-jar
as documented in the Flink documentation.
I updated my pom.xml to use the new App Engine Maven plugin:
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>appengine-maven-plugin</artifactId>
  <version>1.2.0</version>
  <configuration>
    <project>{project_id}</project>
    <devserver.host>0.0.0.0</devserver.host>
    <devserver.port>1984</devserver.port>
  </configuration>
</plugin>
Now when I run mvn appengine:deploy, it converts my queue.xml to queue.yaml in the staging directory; however, this queue configuration is not deployed.
I have tried many ways to deploy it to Google Cloud, but nothing worked. This setup is for my Cloud Endpoints project. The documentation does not cover this.
This is the Maven plugin configuration I added after trying your suggestion out:
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>appengine-maven-plugin</artifactId>
  <version>1.2.0</version>
  <configuration>
    <project>{project_id}</project>
    <devserver.host>0.0.0.0</devserver.host>
    <devserver.port>1984</devserver.port>
  </configuration>
</plugin>
I opened a similar issue on the project board.
By default, only the app.yaml file is deployed (it represents the application itself).
If you want to deploy the queue.yaml as well (or instead), or the cron or index configuration, you need to list those files inside the plugin configuration:
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>appengine-maven-plugin</artifactId>
  <version>${appengine.maven.plugin.version}</version>
  <configuration>
    <deployables>
      <param>target/appengine-staging/app.yaml</param>
      <param>target/appengine-staging/cron.yaml</param>
      <param>target/appengine-staging/queue.yaml</param>
      <param>target/appengine-staging/index.yaml</param>
    </deployables>
  </configuration>
</plugin>
Please remember that if you specify certain files, app.yaml should be listed as well; it is deployed by default only if the deployables parameter is missing.
By playing with this parameter you can choose which files to deploy.
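With the deployables listed above, a single deploy should then push the application together with the queue/cron/index configuration:
mvn appengine:deploy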
Since the IntelliJ IDEA GAE deployment plugin does not work, I have to use mvn appengine:update. It always deploys to version 1, ignoring the version in appengine-web.xml.
How do I set the version with an mvn appengine:update deployment?
Another way is to not add anything to the App Engine plugin configuration, since it is tedious to change the pom.xml each time; instead, pass the version information from the command line, like this:
mvn clean package appengine:deploy -Dapp.deploy.version=your-version-here
(reference document here)
You can set it via a Maven property:
<properties>
  <appengine.appId>my-application-id</appengine.appId>
  <appengine.version>my-application-version</appengine.version>
</properties>
PS: I'm also setting the application ID here; you don't necessarily need that.
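Since these are ordinary Maven properties, the same values can also be passed on the command line instead of being hard-coded in the pom (a sketch, assuming the old com.google.appengine plugin binds its parameters to these property names):
mvn appengine:update -Dappengine.appId=my-application-id -Dappengine.version=my-application-version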
Add the following into the plugins section in the project pom.xml file:
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>appengine-maven-plugin</artifactId>
  <version>2.2.0</version>
  <configuration>
    <deploy.projectId>java</deploy.projectId>
    <deploy.version>1</deploy.version>
  </configuration>
</plugin>
Set the version in the plugin configuration:
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>appengine-maven-plugin</artifactId>
  <version>1.3.2</version>
  <configuration>
    <version>2</version>
  </configuration>
</plugin>
I am using the maven-pmd-plugin on my project, and this is how I have configured it:
<reporting>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jxr-plugin</artifactId>
      <version>2.3</version>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-pmd-plugin</artifactId>
      <version>2.6</version>
      <configuration>
        <linkXref>true</linkXref>
        <sourceEncoding>UTF-8</sourceEncoding>
        <minimumTokens>100</minimumTokens>
        <targetJdk>${targetJdk}</targetJdk>
        <rulesets>
          <ruleset>${maven.pmd.rulesetfiles}</ruleset>
        </rulesets>
      </configuration>
    </plugin>
  </plugins>
</reporting>
Here are the properties used in the above configuration:
<properties>
  <spring.version>3.0.6.RELEASE</spring.version>
  <basedir>C:\Users\Q4\workspace\project</basedir>
  <maven.pmd.rulesetfiles>${basedir}\pmdRuleset.xml</maven.pmd.rulesetfiles>
  <targetJdk>1.5</targetJdk>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
The problem is that when I run mvn pmd:check, it gives me 8 violations, all from only the basic, unused-code, and import rulesets. It simply doesn't use all the rules that I have listed in the custom ruleset file. I have even tried using logging-java.xml and strings.xml directly in the ruleset, without the custom ruleset file, and it still doesn't work.
When I run mvn pmd:pmd, I get BUILD SUCCESS, but the errors still show up in my target folder. Why do I get a build success here?
I solved this by simply adding the plugins to the build section along with the ones in the reporting section.
Somehow the plugin needed to be in the <build> section as well in order to run all the rulesets; see the sketch below. Earlier I was under the impression that we put plugins in the build section only if we want to run them during the build and deploy phases.
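For anyone hitting the same thing, a minimal sketch of the duplicated declaration under <build> (same plugin version and ruleset configuration as in the <reporting> section above):
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-pmd-plugin</artifactId>
      <version>2.6</version>
      <configuration>
        <rulesets>
          <ruleset>${maven.pmd.rulesetfiles}</ruleset>
        </rulesets>
      </configuration>
    </plugin>
  </plugins>
</build>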