I have tried to create a custom QueryParser that also makes use of the OpenNLP libraries.
My objective is: if I have a query "How many defective rims are causing failure in ABC tyres in China",
I want the final query to be something like "defective rims failure tyres China",
which would then go to the Analyzer for further processing.
This is my code for the QueryParserPlugin -
package com.mycompany.lucene.search;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.search.QParser;
import org.apache.solr.search.QParserPlugin;
import com.mycompany.lucene.search.QueryParser;
public class QueryParserPlugin extends QParserPlugin {
@Override
public QParser createParser(String qstr, SolrParams localParams,
SolrParams params, SolrQueryRequest req) {
return new QueryParser(qstr, localParams, params, req, "body_txt_str");
}
}
And the code for my QueryParser -
package com.mycompany.lucene.search;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.search.QParser;
import org.apache.solr.search.SyntaxError;
import opennlp.tools.postag.POSModel;
import opennlp.tools.postag.POSTaggerME;
import opennlp.tools.tokenize.Tokenizer;
import opennlp.tools.tokenize.TokenizerME;
import opennlp.tools.tokenize.TokenizerModel;
public class QueryParser extends QParser {
private String fieldName;
public QueryParser(String qstr, SolrParams localParams, SolrParams params,
SolrQueryRequest req,
String defaultFieldName) {
super(qstr, localParams, params, req);
fieldName = localParams.get("field");
if (fieldName == null) {
fieldName = params.get("df");
}
}
@Override
public Query parse() throws SyntaxError {
Analyzer analyzer = req.getSchema().getQueryAnalyzer();
InputStream tokenModelIn = null;
InputStream posModelIn = null;
try {
tokenModelIn = new FileInputStream("/Files/en-token.bin");
} catch (FileNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
TokenizerModel tokenModel = null;
try {
tokenModel = new TokenizerModel(tokenModelIn);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
Tokenizer tokenizer = new TokenizerME(tokenModel);
String tokens[] = tokenizer.tokenize(qstr);
try {
posModelIn = new FileInputStream("/Files/en-pos-maxent.bin");
} catch (FileNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
// loading the parts-of-speech model from stream
POSModel posModel = null;
try {
posModel = new POSModel(posModelIn);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
// initializing the parts-of-speech tagger with model
POSTaggerME posTagger = new POSTaggerME(posModel);
// Tagger tagging the tokens
String tags[] = posTagger.tag(tokens);
String final_query = "";
for(int i=0;i<tokens.length;i++){
if (tags[i].equals("JJ") || tags[i].equals("NNS") || tags[i].equals("NN")) {
final_query = final_query + " " +tokens[i];
}
}
TermQuery tq= new TermQuery(new Term(fieldName,final_query));
return tq;
}
}
I then exported this as a jar and added these jars to my solrconfig.xml -
<lib dir="${solr.install.dir:../../../..}/contrib/customparser/lib"
regex=".*\.JAR" />
<lib dir="${solr.install.dir:../../../..}/contrib/analysis-extras/lib"
regex="opennlp-.*\.jar" />
But I am getting the below error:
Caused by:
java.lang.NoClassDefFoundError: opennlp/tools/tokenize/Tokenizer
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:541)
at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:488)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:786)
at org.apache.solr.core.PluginBag.createPlugin(PluginBag.java:135)
at org.apache.solr.core.PluginBag.init(PluginBag.java:271)
at org.apache.solr.core.PluginBag.init(PluginBag.java:260)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:957)
... 9 more
This is my first time creating a custom query parser. Could you please help me out?
Thanks
Most probably your path
${solr.install.dir:../../../..}/contrib/analysis-extras/lib
doesn't contain the relevant OpenNLP jars, or the regex is not appropriate.
That's the first thing to check.
You have to either "bundle" the OpenNLP dependencies into your custom query parser jar (e.g. if you use Maven to build your project, using the maven-assembly-plugin, maven-shade-plugin, etc.) or make sure the OpenNLP jars are actually matched by the relevant <lib> directive in your solrconfig.xml.
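For example, if the jars under contrib/customparser/lib use the usual lowercase .jar extension, the regex ".*\.JAR" from the question will not match them; something along these lines (paths taken from the question, a sketch to adapt) should pick up both your parser jar and the OpenNLP jars:
<lib dir="${solr.install.dir:../../../..}/contrib/customparser/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/contrib/analysis-extras/lib" regex="opennlp-.*\.jar" />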
How do I checkpoint the processed records in Apache Flink? The same messages are being consumed at regular intervals.
Do I need to explicitly checkpoint each message after consumption?
I can see that the eventId and sequenceNumber match across multiple consumed messages.
It seems the checkpointing is not done, and so the same messages are retrieved from the streams at regular intervals.
Here is the code
package com.flink.basics;
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.kinesis.shaded.com.amazonaws.services.dynamodbv2.model.AttributeValue;
import org.apache.flink.kinesis.shaded.com.amazonaws.services.dynamodbv2.model.Record;
import org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.clientlibrary.lib.worker.PreparedCheckpointer;
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.streaming.api.functions.sink.DiscardingSink;
import org.apache.flink.streaming.connectors.kinesis.FlinkDynamoDBStreamsConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;
import org.apache.flink.streaming.connectors.kinesis.serialization.DynamoDBStreamsSchema;
import org.apache.flink.util.Collector;
import java.nio.file.Paths;
import java.util.Collections;
import java.util.Properties;
public class DynamoDbConsumer {
public static void main(String[] args) throws Exception {
Properties consumerConfig = new Properties();
consumerConfig.put(AWSConfigConstants.AWS_REGION, "us-east-1");
consumerConfig.put(AWSConfigConstants.AWS_ACCESS_KEY_ID, "aws_access_key_id");
consumerConfig.put(AWSConfigConstants.AWS_SECRET_ACCESS_KEY, "aws_secret_access_key");
consumerConfig.put(AWSConfigConstants.AWS_ENDPOINT, "http://localhost:4566");
consumerConfig.put(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST");
System.setProperty("com.amazonaws.sdk.disableCbor", "true");
System.setProperty("org.apache.flink.kinesis.shaded.com.amazonaws.sdk.disableCbor", "true");
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(1000, CheckpointingMode.EXACTLY_ONCE);
env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
// File based Backend
env.setStateBackend(new FsStateBackend(Paths.get("/Users/polimea/flink-basics/stbackend").toUri(), false));
FlinkDynamoDBStreamsConsumer<Record> flinkConsumer = new FlinkDynamoDBStreamsConsumer<Record>(
Collections.singletonList("arn:aws:dynamodb:us-east-1:000000000000:table/FDXTable/stream/2022-05-24T00:18:12.500"),
new DynamoDBStreamsSchema(), consumerConfig);
DataStream<Record> kinesisDBStream = env.addSource(flinkConsumer);
KeyedStream<Record, String> snapshotKeyedStream = kinesisDBStream.keyBy((KeySelector<Record, String>)
record -> record.getDynamodb().getNewImage().get("SNP").getS());
SingleOutputStreamOperator<Tuple2<String, Record>> records = snapshotKeyedStream.process(new StatefulReduceFunc());
records.print();
records.addSink(new DiscardingSink<>());
snapshotKeyedStream.process(new KeyedProcessFunction<String, Record, Object>() {
@Override
public void processElement(Record record, KeyedProcessFunction<String, Record, Object>.Context context,
Collector<Object> collector) throws Exception {
}
});
// kinesisDBStream.print();
env.execute("Stream for buffering dynamodb records till snapshot is committed");
}
private static class StatefulReduceFunc extends KeyedProcessFunction<String, Record, Tuple2<String, Record>> {
private transient ListState<Record> records;
public void open(Configuration parameters) {
ListStateDescriptor<Record> listStateDescriptor =
new ListStateDescriptor<>("records", Record.class);
records = getRuntimeContext().getListState(listStateDescriptor);
}
@Override
public void processElement(Record record, Context context,
Collector<Tuple2<String, Record>> collector) throws Exception {
Iterable<Record> recordIterator = this.records.get();
AttributeValue snCommitted = record.getDynamodb().getNewImage().get("SNCommitted");
if (snCommitted != null && snCommitted.getBOOL()) {
for (Record recordInList : recordIterator) {
collector.collect(new Tuple2<>(record.getDynamodb().getNewImage().get("SNP").getS(), recordInList));
}
} else {
records.add(record);
}
}
}
}
Not sure if this is related to your issue, but the code you provided will buffer the records forever. I think what you want is to emit the records and clear the state once the commit message comes. Something along these lines:
// ...
if (snCommitted != null && snCommitted.getBOOL()) {
var snp = record.getDynamodb().getNewImage().get("SNP").getS();
for (Record recordInList : recordIterator) {
collector.collect(new Tuple2<>(snp, recordInList));
}
// explicitly clear the buffer not to emit same events over and over again
records.clear();
}
// ...
How do I write a DataSet as Parquet files in an S3 bucket using Flink? Is there any direct function like Spark's DF.write.parquet("write in parquet")?
Please help me with how to write a Flink DataSet in Parquet format.
I am stuck when trying to convert my DataSet to Tuple2<Void, GenericRecord>:
DataSet<Tuple2<Void,GenericRecord>> df = allEvents.flatMap(new FlatMapFunction<Tuple2<LongWritable, Text>, Tuple2<Void, GenericRecord>>() {
@Override
public void flatMap(Tuple2<LongWritable, Text> longWritableTextTuple2, Collector<Tuple2<Void, GenericRecord>> collector) throws Exception {
JsonAvroConverter converter = new JsonAvroConverter();
Schema schema = new Schema.Parser().parse(new File("test.avsc"));
try {
GenericRecord record = converter.convertToGenericDataRecord(longWritableTextTuple2.f1.toString().getBytes(), schema);
collector.collect( new Tuple2<Void,GenericRecord>(null,record));
}
catch (Exception e) {
System.out.println("error in converting to avro");
}
}
});
Job job = Job.getInstance();
HadoopOutputFormat parquetFormat = new HadoopOutputFormat<Void, GenericRecord>(new AvroParquetOutputFormat(), job);
FileOutputFormat.setOutputPath(job, new Path(outputPath));
df.output(parquetFormat);
env.execute();
Please help me with what I am doing wrong. I am getting an exception and this
code is not working.
It's a little more complicated in Flink than it is with Spark. The only way I was able to read and write Parquet data in Flink is through the Hadoop & MapReduce compatibility layer. You need hadoop-mapreduce-client-core and flink-hadoop-compatibility in your dependencies.
Then you need to create a proper HadoopOutputFormat. You need to do something like this:
val job = Job.getInstance()
val hadoopOutFormat = new hadoop.mapreduce.HadoopOutputFormat[Void, SomeType](new AvroParquetOutputFormat(), job)
FileOutputFormat.setOutputPath(job, [somePath])
And then You can do:
dataStream.writeUsingOutputFormat(hadoopOutFormat)
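A rough Java rendering of the same idea, matching the DataSet<Tuple2<Void, GenericRecord>> from the question (this is a sketch, not tested code; env, df, outputPath and test.avsc refer to the variables and schema file already shown above):
// Assumed imports: org.apache.hadoop.mapreduce.Job,
//   org.apache.flink.api.java.hadoop.mapreduce.HadoopOutputFormat,
//   org.apache.parquet.avro.AvroParquetOutputFormat,
//   org.apache.hadoop.mapreduce.lib.output.FileOutputFormat, org.apache.hadoop.fs.Path
Job job = Job.getInstance();
HadoopOutputFormat<Void, GenericRecord> hadoopOutputFormat =
        new HadoopOutputFormat<>(new AvroParquetOutputFormat<GenericRecord>(), job);
// Tell the Parquet writer which Avro schema the GenericRecords use (the test.avsc from the question).
AvroParquetOutputFormat.setSchema(job, new Schema.Parser().parse(new File("test.avsc")));
FileOutputFormat.setOutputPath(job, new Path(outputPath));
// df is the DataSet<Tuple2<Void, GenericRecord>> built in the question.
df.output(hadoopOutputFormat);
env.execute();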
You didn't say which exception you are getting, but here is a complete example of how to achieve this.
The main points are:
Use org.apache.flink.api.java.hadoop.mapreduce.HadoopOutputFormat
From dependency org.apache.flink:flink-hadoop-compatibility_2.11:1.11.0
HadoopOutputFormat is an adapter that allows you to use output formats developed for Hadoop
You need a DataSet<Tuple2<Void, IndexedRecord>>, because Hadoop's OutputFormat<K,V> works with key-value pairs; we are not interested in the key, so we use Void for the key type, and the value needs to be an Avro IndexedRecord or GenericRecord.
Use org.apache.parquet.avro.AvroParquetOutputFormat<IndexedRecord>
From dependency org.apache.parquet:parquet-avro:1.11.1
This Hadoop OutputFormat produces Parquet files.
This inherits from org.apache.parquet.hadoop.FileOutputFormat<Void, IndexedRecord>
Create your own subclass of IndexedRecord
You can't use new GenericData.Record(schema), because a record like that is not serializable (java.io.NotSerializableException: org.apache.avro.Schema$Field is not serializable) and Flink requires it to be serializable.
You still need to provide a getSchema() method, but you can either return null or return a Schema that you hold in a static member (so that it doesn't need to be serialized and you avoid the NotSerializableException above).
The source code
import org.apache.avro.Schema;
import org.apache.avro.generic.IndexedRecord;
import org.apache.commons.lang3.NotImplementedException;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.hadoop.mapreduce.HadoopOutputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.parquet.avro.AvroParquetOutputFormat;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;
import java.io.IOException;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.Stream;
public class MyParquetTest implements Serializable {
public static void main(String[] args) throws Exception {
new MyParquetTest().start();
}
private void start() throws Exception {
final ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment();
Configuration parameters = new Configuration();
Stream<String> stringStream = IntStream.range(1, 100).mapToObj(n -> String.format("Entry %d", n));
DataSet<String> text = env.fromCollection(stringStream.collect(Collectors.toCollection(ArrayList::new)));
Job job = Job.getInstance();
HadoopOutputFormat<Void, IndexedRecord> hadoopOutputFormat = new HadoopOutputFormat<>(new AvroParquetOutputFormat<IndexedRecord>(), job);
FileOutputFormat.setCompressOutput(job, true);
FileOutputFormat.setOutputCompressorClass(job, CompressionCodecName.SNAPPY.getHadoopCompressionCodecClass());
FileOutputFormat.setOutputPath(job, new org.apache.hadoop.fs.Path("./my-parquet"));
final Schema schema = new Schema.Parser().parse(MyRecord.class.getClassLoader().getResourceAsStream("schema.avsc"));
AvroParquetOutputFormat.setSchema(job, schema);
DataSet<Tuple2<Void, IndexedRecord>> text2 = text.map(new MapFunction<String, Tuple2<Void, IndexedRecord>>() {
@Override
public Tuple2<Void, IndexedRecord> map(String value) throws Exception {
return Tuple2.of(null, new MyRecord(value));
// IndexedRecord record = new GenericData.Record(schema); // won't work because Schema$Field is not serializable
// record.put(0, value);
// return Tuple2.of(null, record);
}
});
text2.output(hadoopOutputFormat);
env.execute("Flink Batch Java API Skeleton");
}
public static class MyRecord implements IndexedRecord {
private static Schema schema;
static {
try {
schema = new Schema.Parser().parse(MyRecord.class.getClassLoader().getResourceAsStream("schema.avsc"));
} catch (IOException e) {
e.printStackTrace();
}
}
private final String value;
public MyRecord(String value) {
this.value= value;
}
@Override
public void put(int i, Object v) {
throw new NotImplementedException("You can't update this IndexedRecord");
}
@Override
public Object get(int i) {
return this.value;
}
@Override
public Schema getSchema() {
return schema; // or just return null and remove the schema member
}
}
}
The schema.avsc is simply
{
"name": "aa",
"type": "record",
"fields": [
{"name": "value", "type": "string"}
]
}
and the dependencies:
implementation "org.apache.flink:flink-java:${flinkVersion}"
implementation "org.apache.flink:flink-avro:${flinkVersion}"
implementation "org.apache.flink:flink-streaming-java_${scalaBinaryVersion}:${flinkVersion}"
implementation "org.apache.flink:flink-hadoop-compatibility_${scalaBinaryVersion}:${flinkVersion}"
implementation "org.apache.parquet:parquet-avro:1.11.1"
implementation "org.apache.hadoop:hadoop-client:2.8.3"
You'll create a Flink OutputFormat via new HadoopOutputFormat(parquetOutputFormat, job), and then pass that to DataSet.output(xxx).
The job comes from...
import org.apache.hadoop.mapreduce.Job;
...
Job job = Job.getInstance();
The parquetOutputFormat is created via:
import org.apache.parquet.hadoop.ParquetOutputFormat;
...
ParquetOutputFormat<MyOutputType> parquetOutputFormat = new ParquetOutputFormat<>();
See https://javadoc.io/doc/org.apache.parquet/parquet-hadoop/1.10.1/org/apache/parquet/hadoop/ParquetOutputFormat.html
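On the S3 part of the original question: the path handed to FileOutputFormat does not have to be local; it can point at a bucket, assuming the matching Hadoop filesystem (e.g. hadoop-aws providing the s3a scheme) and credentials are available to the job. The bucket name below is made up:
// Hypothetical: write the Parquet output directly to S3 instead of a local directory.
FileOutputFormat.setOutputPath(job, new org.apache.hadoop.fs.Path("s3a://my-bucket/parquet-out"));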
I have Flink CEP code that reads from a socket and detects a pattern. Let's say the pattern (word) is 'alert'. If the word alert occurs five times or more, an alert should be created. But I am getting an input mismatch error. The Flink version is 1.3.0. Thanks in advance!!
package pattern;
import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternStream;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.IterativeCondition;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;
import java.util.List;
import java.util.Map;
public class cep {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStreamSource<String> dss = env.socketTextStream("localhost", 3005);
dss.print();
Pattern<String,String> pattern = Pattern.<String> begin("first")
.where(new IterativeCondition<String>() {
@Override
public boolean filter(String word, Context<String> context) throws Exception {
return word.equals("alert");
}
})
.times(5);
PatternStream<String> patternstream = CEP.pattern(dss, pattern);
DataStream<String> alerts = patternstream
.flatSelect((Map<String,List<String>> in, Collector<String> out) -> {
String first = in.get("first").get(0);
for (int i = 0; i < 6; i++ ) {
out.collect(first);
}
});
alerts.print();
env.execute();
}
}
Just some clarification on the original problem. In 1.3.0 there was a bug that made using lambdas as arguments to select/flatSelect impossible.
It was fixed in 1.3.1, so your first version of the code would work with 1.3.1.
Besides that, I think you misinterpret the times quantifier. It matches an exact number of times, so in your case it will match only when the event occurs exactly 5 times, not 5 or more.
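If the requirement really is "five or more", note that later Flink releases (1.4 and up, as far as I remember) add a timesOrMore quantifier; a sketch of the pattern under that assumption (not applicable to 1.3.x):
// Hypothetical pattern for "alert appears 5 or more times"; requires a Flink version that has timesOrMore.
Pattern<String, String> pattern = Pattern.<String>begin("first")
        .where(new IterativeCondition<String>() {
            @Override
            public boolean filter(String word, Context<String> context) throws Exception {
                return word.equals("alert");
            }
        })
        .timesOrMore(5);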
So I have got the code to work. Here is the working solution,
package pattern;
import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternSelectFunction;
import org.apache.flink.cep.PatternStream;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.IterativeCondition;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;
import java.util.List;
import java.util.Map;
public class cep {
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStreamSource<String> dss = env.socketTextStream("localhost", 3005);
dss.print();
Pattern<String,String> pattern = Pattern.<String> begin("first")
.where(new IterativeCondition<String>() {
@Override
public boolean filter(String word, Context<String> context) throws Exception {
return word.equals("alert");
}
})
.times(5);
PatternStream<String> patternstream = CEP.pattern(dss, pattern);
DataStream<String> alerts = patternstream
.select(new PatternSelectFunction<String, String>() {
@Override
public String select(Map<String, List<String>> in) throws Exception {
String first = in.get("first").get(0);
if(first.equals("alert")){
return ("5 or more alerts");
}
else{
return (" ");
}
}
});
alerts.print();
env.execute();
}
}
I am using Solr 4.0 with the Jetty server. I want to query Solr using SolrJ and expect the results to be formatted in XML. So I used HttpSolrServer (CloudSolrServer and LBHttpSolrServer do not provide support for setting the parser) and set the parser to XMLResponseParser. Moreover, I am also setting the SolrQuery param wt=xml, but I am not able to get results in XML. Here is my test code:
package solrjtest;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.util.UUID;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.impl.XMLResponseParser;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocumentList;
class SolrjTest
{
public static void main(String[] args) throws IOException, SolrServerException
{
SolrjTest solrj = new SolrjTest();
solrj.query("hello");
}
public void query(String q) throws IOException, SolrServerException
{
CommonsHttpSolrServer server = null;
String uuid = null;
boolean flag = true;
while (flag == true)
{
uuid = UUID.randomUUID().toString();
File f = new File("D:/SearchResult/" + uuid + ".txt");
if (!f.exists())
{
flag=false;
f.createNewFile();
}
}
try
{
server = new CommonsHttpSolrServer("http://skyfall:8983/solr/documents");
server.setParser(new XMLResponseParser());
}
catch (Exception e)
{
e.printStackTrace();
}
SolrQuery query = new SolrQuery();
query.setQuery(q);
query.setParam("wt", "xml");
FileWriter fw = new FileWriter("D:/SearchResult/" + uuid + ".txt");
try
{
QueryResponse qr = server.query(query);
SolrDocumentList sdl = qr.getResults();
XMLResponseParser r = new XMLResponseParser();
Object[] o = new Object[sdl.size()];
o = sdl.toArray();
for (int i = 0; i < o.length; i++)
{
System.out.println(o[i].toString());
fw.write(o[i].toString() + "\n");
}
fw.flush();
fw.close();
System.out.println("finished");
}
catch (SolrServerException e)
{
e.printStackTrace();
}
}
}
Any idea what's going wrong here?
With that setup, the Solr server at the machine skyfall does send the response in XML and the CommonsHttpSolrServer wrapper does correctly parse the XML. However, that does not change the internal representation in the QueryResponse, which is just a thin wrapper around the Solr class NamedList.
You can (mis)use the XMLResponseWriter to get an XML representation of the full QueryResponse:
private String toXML(SolrParams request, QueryResponse response) {
XMLResponseWriter xmlWriter = new XMLResponseWriter();
Writer w = new StringWriter();
SolrQueryResponse sResponse = new SolrQueryResponse();
sResponse.setAllValues(response.getResponse());
try {
xmlWriter.write(w, new LocalSolrQueryRequest(null, request), sResponse);
} catch (IOException e) {
throw new RuntimeException("Unable to convert Solr response into XML", e);
}
return w.toString();
}
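A hypothetical call site, reusing the objects from the question (SolrQuery extends ModifiableSolrParams, so it can be passed where SolrParams is expected):
// Run the query as before, then serialize the full response to XML.
QueryResponse qr = server.query(query);
String xml = toXML(query, qr); // 'query' is the SolrQuery built above
fw.write(xml);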
I'm trying to have a file upload element in my JSF application on Google App Engine.
I have browsed the web for several alternatives, but none seem to work with GAE.
I was able to do so using JSP and a servlet with BlobstoreService, but I couldn't find a way to make it work with JSF.
As a workaround I was trying to see if there is a way to include a JSP within a JSF but I guess this isn't doable as well.
Would be thankful to get a working example.
Thanks!
First, get the library http://code.google.com/p/gmultipart/ and add it to your project.
Then override the class org.primefaces.webapp.filter.FileUploadFilter (just put it in your src).
Here is the code of the class org.primefaces.webapp.filter.FileUploadFilter:
package org.primefaces.webapp.filter;
import java.io.File;
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import org.apache.commons.fileupload.FileItemFactory;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;
import org.gmr.web.multipart.GFileItemFactory;
import org.primefaces.webapp.MultipartRequest;
public class FileUploadFilter implements Filter {
private final static Logger logger = Logger.getLogger(FileUploadFilter.class.getName());
private final static String THRESHOLD_SIZE_PARAM = "thresholdSize";
private final static String UPLOAD_DIRECTORY_PARAM = "uploadDirectory";
private String thresholdSize;
private String uploadDir;
public void init(FilterConfig filterConfig) throws ServletException {
thresholdSize = filterConfig.getInitParameter(THRESHOLD_SIZE_PARAM);
uploadDir = filterConfig.getInitParameter(UPLOAD_DIRECTORY_PARAM);
if(logger.isLoggable(Level.FINE))
logger.fine("FileUploadFilter initiated successfully");
}
public void doFilter(ServletRequest request, ServletResponse response, FilterChain filterChain) throws IOException, ServletException {
HttpServletRequest httpServletRequest = (HttpServletRequest) request;
boolean isMultipart = ServletFileUpload.isMultipartContent(httpServletRequest);
if(isMultipart) {
if(logger.isLoggable(Level.FINE))
logger.fine("Parsing file upload request");
//start change
FileItemFactory diskFileItemFactory = new GFileItemFactory();
/* if(thresholdSize != null) {
diskFileItemFactory.setSizeThreshold(Integer.valueOf(thresholdSize));
}
if(uploadDir != null) {
diskFileItemFactory.setRepository(new File(uploadDir));
}*/
//end change
ServletFileUpload servletFileUpload = new ServletFileUpload(diskFileItemFactory);
MultipartRequest multipartRequest = new MultipartRequest(httpServletRequest, servletFileUpload);
if(logger.isLoggable(Level.FINE))
logger.fine("File upload request parsed succesfully, continuing with filter chain with a wrapped multipart request");
filterChain.doFilter(multipartRequest, response);
} else {
filterChain.doFilter(request, response);
}
}
public void destroy() {
if(logger.isLoggable(Level.FINE))
logger.fine("Destroying FileUploadFilter");
}
}
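If the filter is not already registered, the usual PrimeFaces (2.x/3.x era) mapping in web.xml looks roughly like this (the servlet name "Faces Servlet" is an assumption; use whatever your deployment descriptor calls the JSF servlet):
<filter>
  <filter-name>PrimeFaces FileUpload Filter</filter-name>
  <filter-class>org.primefaces.webapp.filter.FileUploadFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>PrimeFaces FileUpload Filter</filter-name>
  <servlet-name>Faces Servlet</servlet-name>
</filter-mapping>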
In the managed bean, write a method like this:
public void handleFileUpload(FileUploadEvent event) {
UploadedFile uploadedFile = event.getFile();
try {
String blobKey = BlobUtils.uploadImageToBlobStore(uploadedFile.getContentType(), uploadedFile.getFileName(), uploadedFile.getContents());
this.iconKey = blobKey;
} catch (IOException e) {
log.log(Level.SEVERE, "Error while trying to upload the file to the blob store", e);
FacesMessage msg = new FacesMessage(FacesMessage.SEVERITY_ERROR, "Error while trying to upload the file", event.getFile().getFileName() + " was not uploaded!");
FacesContext.getCurrentInstance().addMessage(null, msg);
return;
}
FacesMessage msg = new FacesMessage("Успешно.", event.getFile().getFileName() + " загружен.");
FacesContext.getCurrentInstance().addMessage(null, msg);
}
And that's all.
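The handleFileUpload method above relies on a BlobUtils.uploadImageToBlobStore helper that isn't shown. A minimal sketch of what such a helper might look like, using the old (since deprecated) App Engine FileService API; the exact calls are an assumption to verify against your SDK version:
import java.io.IOException;
import java.nio.ByteBuffer;
import com.google.appengine.api.blobstore.BlobKey;
import com.google.appengine.api.files.AppEngineFile;
import com.google.appengine.api.files.FileService;
import com.google.appengine.api.files.FileServiceFactory;
import com.google.appengine.api.files.FileWriteChannel;
public class BlobUtils {
    // Writes the uploaded bytes into the Blobstore and returns the resulting blob key as a string.
    public static String uploadImageToBlobStore(String contentType, String fileName, byte[] contents) throws IOException {
        FileService fileService = FileServiceFactory.getFileService();
        AppEngineFile file = fileService.createNewBlobFile(contentType, fileName);
        // Open with lock=true so the file can be finalized after writing.
        FileWriteChannel channel = fileService.openWriteChannel(file, true);
        try {
            channel.write(ByteBuffer.wrap(contents));
        } finally {
            channel.closeFinally();
        }
        BlobKey blobKey = fileService.getBlobKey(file);
        return blobKey.getKeyString();
    }
}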
First of all, I think that whatever you are doing with JSP should eventually work with JSF as well..
BUT,
if you are looking for a file upload component for JSF that works on GAE,
take a look at the PrimeFaces FileUpload.
Here is another link with an explanation of what to do in order for it to work on GAE: Primefaces File Upload Filter
(haven't tried it myself...)