Problem compiling LocalSolr - solr

I am trying to install LocalSolr following the instructions here:
http://www.gissearch.com/node/16
Unfortunately things are not working well.
The document says:
To install these into solr simply copy the following jars to solr's lib directory, located in apache-solr-1.*/example/solr/lib
I noticed that I don't have a folder called "lib" under example/solr, so I created it myself and copied all the mentioned jars into it.
I then went on to the next step, which was altering solrconfig.xml and schema.xml as instructed here: http://www.gissearch.com/localsolr
and restarted Solr, but got the following error:
org.apache.solr.common.SolrException: Error loading class 'com.pjaol.search.solr.component.LocalSolrQueryComponent'
I can't figure out how to fix this problem, so I would appreciate any help.
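For reference, the component registration the guide has you add to solrconfig.xml looks roughly like this (abridged from my config; the class name is the one from the error above):

<!-- LocalSolr search component, registered per the linked guide (abridged) -->
<searchComponent name="localsolr"
                 class="com.pjaol.search.solr.component.LocalSolrQueryComponent" />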

Solr includes official spatial search functionality as of version 3. Based on my experience running Websolr, I recommend you upgrade to Solr 3.3 rather than wrestle with LocalSolr.
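To give a flavour of the built-in support, here is a minimal sketch using the standard Solr 3.x spatial types (field names are illustrative, not from your schema; tdouble is the trie double type from the example schema):

<!-- schema.xml: lat/lon point type plus the dynamic field holding its sub-values -->
<fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate"/>
<dynamicField name="*_coordinate" type="tdouble" indexed="true" stored="false"/>
<field name="store" type="location" indexed="true" stored="true"/>

<!-- then filter by distance at query time, e.g.:
     q=*:*&fq={!geofilt sfield=store pt=45.15,-93.85 d=5} -->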

Related

[Apache Flink]: Where is the flink-s3-fs-hadoop plugin?

I would like to read and write some data with Apache Flink 1.11.2 from S3. The documentation recommends using the Presto plugin for checkpoints and the Hadoop plugin for pipeline data.
According to this section, you have to copy the plugins from /opt to /plugins. I can find flink-s3-fs-presto-1.11.2.jar under /opt, but there is no flink-s3-fs-hadoop-1.11.2.jar. Where can I find the s3-hadoop plugin for setting up my production environment?
And how can I use these plugins in the IDE? By simply adding them to pom.xml as provided dependencies? And then how can I pass the credentials in the IDE?
That is weird; I can see that they are both present in the official binaries under opt in 1.11.1. However, if you can't find them, you can simply get the jars from Maven here and copy them to the required place. Another thing that may work is adding the dependency to the project with compile scope.
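If you go the compile-scope route, the dependency would look something like this (coordinates as published on Maven Central; match the version to your Flink distribution):

<!-- pom.xml: S3 Hadoop filesystem for use inside the IDE -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-s3-fs-hadoop</artifactId>
  <version>1.11.2</version>
</dependency>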
Running the job locally is described here. There are various ways of configuring the credentials when running the job in the IDE; one might be adding a core-site.xml with the proper configuration to the resources folder.
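For example, a minimal core-site.xml placed under src/main/resources might look like this (these are the standard Hadoop s3a credential keys; static keys shown only for local testing, prefer a credentials provider in production):

<?xml version="1.0"?>
<configuration>
  <!-- Standard Hadoop S3A credential properties; adjust to your setup -->
  <property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
</configuration>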
EDIT:
As for local execution, it is explained a little bit here.

Atlassian SDK 404 Error when trying to do atlas-run

I followed the tutorial on
https://developer.atlassian.com/server/framework/atlassian-sdk/create-a-confluence-hello-world-macro/
But when I try to do atlas-run and open localhost:1990/confluence, I get a 404 error.
I already checked my java and sdk version multiple times.
Atlassian SDK 6.3.10
Apache Maven 3.2.1
Java JDK 1.8.0_191
Running on Windows 10 Home.
Another strange behaviour: the atlas commands don't respect the changed working directory. All files were created in .../Atlassian/atlassian-plugin-sdk-6.3.10, and the first lines of output say "can't find path".
Any suggestions as to what I can do?
Finally solved it.
See the solution at https://community.developer.atlassian.com/t/atlas-run-in-tutorial-causes-404-error/24612

Jenkins Jacoco Plugin not linking Groovy source files

Is there a way to configure the Jenkins Jacoco Plugin to link Groovy source files to the coverage report? The coverage statistics are calculated correctly; however, in a mixed Java/Groovy project, only the Java files are linked. The configuration looks as follows:
Switching to the latest release (3.0.3), I was able to fix that issue. However, you still need to manually tell the plugin to check for *.groovy source files, e.g.:
jacoco classPattern: 'build/classes',
       execPattern: 'build/jacoco/test.exec',
       sourceInclusionPattern: '**/*.groovy', // new option required to tell the plugin to search for *.groovy source files
       sourcePattern: 'src,test'
Based upon this bug report, it looks like the 2.2+ releases changed how source code is linked in the report, such that it only works for *.java files. One possible work-around is to downgrade the JaCoCo plugin to 2.1.0. This is what we did and it works, although I am not sure what features and bug fixes we give up from 2.2+, so it might not be worth it in your situation.
It looks like there is a Pull Request that needs to be reviewed and merged so that it can be released in an upcoming version.

Install and configure Solr in OpenShift

I am trying to follow this link for setting up Solr in OpenShift, but I guess the version given in the example is old; the directories given in the documentation are not working properly.
So my question is: how do I install Solr in OpenShift using the Tomcat 7 cartridge or another cartridge?
Without knowing what errors you're getting, I would recommend taking a look at https://www.openshift.com/blogs/run-your-java-tomcat-application-for-free-on-openshifts-paas to get Tomcat up and running. From there, deploying a war file is as simple as:
1) clone your repo
2) remove the pom.xml file from the root directory of your repo
3) add your war to the webapps/ dir of your repo
4) do git add/commit/push
You can use the following quickstart from LogicalSpark to make it easier to get up and running with Solr on OpenShift:
https://github.com/LogicalSpark/openshift-solr-quickstart

DataNucleus libraries and maven-gae-plugin

I'm using the maven-gae-plugin to manage a Google App Engine project, but I don't know how to include the libraries required to use JPA.
Google's documentation says:
The classpath must contain the JARs 'datanucleus-core-*.jar', 'datanucleus-jpa-*', 'datanucleus-enhancer-*.jar', 'asm-*.jar', and 'geronimo-jpa-*.jar' (where * is the appropriate version number of each JAR) from the 'appengine-java-sdk/lib/tools/' directory, as well as all of your data classes.
How can I tell the plugin to put all the jars in the classpath?
So far I have just edited the pom.xml file, setting gae.version to 1.7.3 (leaving datanucleus.version at 1.1.5), and run mvn gae:unpack, but I cannot get it to work.
First, I have a problem with javax.persistence not being found. Do I have to add it manually to pom.xml?
If I do, the development server starts, but I cannot work with the datastore; I get the following error:
SEVERE: Found Meta-Data for class com.sharecost.entities.User but this class is not enhanced!! Please enhance the class before running DataNucleus.
I found a solution to the second part of my question. Looking at the pom.xml file, I discovered that all entities are supposed to be in a **/model package.
I still don't know if the manual inclusion of the javax.persistence dependency is actually required.
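For what it's worth, if the JPA API does need to be added manually, the dependency might look something like this (the Geronimo spec jar is the one Google's list above mentions; the exact artifactId and version here are assumptions to verify against the jars in appengine-java-sdk/lib/tools/):

<!-- Hypothetical pom.xml snippet: JPA 1.0 API via the Geronimo spec jar
     (artifactId/version are assumptions; check the jars shipped with your SDK) -->
<dependency>
  <groupId>org.apache.geronimo.specs</groupId>
  <artifactId>geronimo-jpa_3.0_spec</artifactId>
  <version>1.1.1</version>
</dependency>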
