Can Solr produce highlight position info for the Adobe XML highlighting spec?

There is a PDF highlighting spec from Adobe that allows one to provide an XML file describing highlight locations to a PDF viewer API. Can Solr produce the highlighting info necessary to produce such a file? Mandatory info would be: page, position, and length. If the answer is yes, any tips on configuring the Solr highlighting component?
Sample XML highlighting file:
<XML>
<Body units='characters' color='#ff00ff' mode='active' version='2'>
<Highlight>
<loc pg='0' pos='0' len='6' />
<loc pg='2' pos='1' len='10' />
</Highlight>
</Body>
</XML>
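Note that Solr's highlighting component returns marked-up text snippets per document and field rather than page/position/length offsets, so producing the <loc> entries above would need an extra mapping step on top of whatever Solr returns. A minimal SolrJ sketch of requesting highlights (the field name "content" and the core URL are assumptions, not anything from the question):
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class HighlightDemo {
    public static void main(String[] args) throws Exception {
        SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build();
        SolrQuery query = new SolrQuery("content:searchterm");
        query.setHighlight(true);           // turn the highlighting component on
        query.addHighlightField("content"); // hypothetical field holding the extracted PDF text
        QueryResponse response = client.query(query);
        // Highlights come back keyed by document id and field name, as marked-up
        // text snippets, not as page/position/length offsets.
        System.out.println(response.getHighlighting());
    }
}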

Related

While importing product XML in Demandware, how can I append images to the same product?

I have two product XML files that look like:
1. product-1.xml:
.
.
.
.
<images merge-mode="add">
<image-group view-type="large">
<image path="product-123.jpg" />
</image-group>
</images>
.
.
2. product-1-1.xml:
.
.
.
.
<images merge-mode="add">
<image-group view-type="large">
<image path="product-124.jpg" />
<image path="product-125.jpg" />
</image-group>
</images>
.
.
I am importing both files and want the images appended for the same product (PRODUCT123), as:
<images merge-mode="add">
<image-group view-type="large">
<image path="product-123.jpg" />
<image path="product-124.jpg" />
<image path="product-125.jpg" />
</image-group>
</images>
but it's not appending the images.
I also tried merge-mode="merge" for the same thing, but did not get the result I expected.
Could anyone tell me where I am going wrong?
Sadly, what you want to achieve is currently not supported by the Salesforce B2C Commerce platform.
You cannot split the images of an image group into several files and expect that they will be merged.
The file import mode should indeed be MERGE, but the attribute value merge-mode="add" is not supported; you should have received a warning when you imported the file.
If you look at the catalog.xsd schema from the documentation, you will see the following under the complexType.Product.Images type definition:
<xsd:attribute name="merge-mode" type="simpleType.MergeMode" default="merge" use="optional">
<xsd:annotation>
<xsd:documentation>
Used to control if specified image groups will be merged to or replace the existing image specification.
The values "merge" and "replace" are the only ones supported for the "merge-mode" attribute.
Attribute should only be used in import MERGE and UPDATE modes. In import REPLACE mode, using the "merge-mode" attribute is not
sensible, because existing image groups will always be removed before importing the image groups
specified in the import file.
</xsd:documentation>
</xsd:annotation>
</xsd:attribute>
P.S. I would suggest looking for an alternative way to merge the image data before sending it to the Salesforce B2C Commerce instance.
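As a sketch of that pre-merge idea: the two files could be combined into one before the import, for instance with plain DOM. This is a minimal illustration, not Demandware API code; the file names are taken from the question, and a real merger would match products and view-types rather than grabbing the first image-group:
import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ImageFeedMerger {
    public static void main(String[] args) throws Exception {
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document target = builder.parse(new File("product-1.xml"));
        Document source = builder.parse(new File("product-1-1.xml"));

        // Take the first image-group of each file; a real merger would match on view-type.
        Element targetGroup = (Element) target.getElementsByTagName("image-group").item(0);
        NodeList sourceImages = source.getElementsByTagName("image");

        // Copy every <image> node from the second file into the first.
        for (int i = 0; i < sourceImages.getLength(); i++) {
            targetGroup.appendChild(target.importNode(sourceImages.item(i), true));
        }

        // Write out the combined feed and import this single file instead.
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.transform(new DOMSource(target), new StreamResult(new File("product-1-merged.xml")));
    }
}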
Are you using the ImportCatalog pipelet? Please check the job configuration; the import mode should be MERGE.

Solr 4: disable compression on stored fields: how to actually configure custom codec?

The short question is: I want to disable stored-field compression on a Solr 4.3.0 index. After reading:
http://blog.jpountz.net/post/35667727458/stored-fields-compression-in-lucene-4-1
http://wiki.apache.org/solr/SimpleTextCodecExample
http://www.opensourceconnections.com/2013/06/05/build-your-own-lucene-codec/
I've decided to follow the path described there and make my own codec. I'm pretty sure I've followed all the steps; however, when I actually try to use my codec (affectionately named "UncompressedStorageCodec"), I get the following error in the Solr log:
java.lang.IllegalArgumentException: A SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'UncompressedStorageCodec' does not exist. You need to add the corresponding JAR file supporting this SPI to your classpath.
The current classpath supports the following names: [Pulsing41, SimpleText, Memory, BloomFilter, Direct, Lucene40, Lucene41]
at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:109)
From the output I gather that Solr is not picking up the jar with my custom codec, and I don't get why.
Here are all the horrific details:
I've created a class like this:
package fr.company.project.solr.transformers.utils;

import org.apache.lucene.codecs.FilterCodec;
import org.apache.lucene.codecs.StoredFieldsFormat;
import org.apache.lucene.codecs.lucene40.Lucene40StoredFieldsFormat;
import org.apache.lucene.codecs.lucene42.Lucene42Codec;

public class UncompressedStorageCodec extends FilterCodec {
    private final StoredFieldsFormat fieldsFormat = new Lucene40StoredFieldsFormat();

    // SPI instantiates codecs reflectively, so the constructor must be public and no-arg
    public UncompressedStorageCodec() {
        super("UncompressedStorageCodec", new Lucene42Codec());
    }

    @Override
    public StoredFieldsFormat storedFieldsFormat() {
        return fieldsFormat;
    }
}
I've created a basic jar file out of this (exported it as jar from Eclipse).
The Solr installation I'm using to test this is the basic Solr 4.3.0 unzipped, started via its embedded Jetty server and using the example core.
I've placed my jar with the codec in [solrDir]\dist
In:
[solrDir]\example\solr\myCore\conf\solrconfig.xml
I've added the line:
<lib dir="../../../dist/" regex="myJarWithCodec-1.10.1.jar" />
Then in the schema.xml file, I've declared some fieldTypes that should use this codec like so:
<fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true" postingsFormat="UncompressedStorageCodec"/>
<fieldType name="string_lowercase" class="solr.TextField" positionIncrementGap="100" omitNorms="true" postingsFormat="UncompressedStorageCodec">
<!--...-->
</fieldType>
Now, if I use the DataImportHandler component to import some data into Solr, at commit time it fails with the same error as above: "A SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'UncompressedStorageCodec' does not exist."
What I find strange is that the above-mentioned codec jar also contains some Transformers for the DataImportHandler component, and those are picked up fine. Other jars placed in the dist folder (and declared the same way in solrconfig.xml), like the JDBC driver, are also picked up fine. I'm guessing the SPI mechanism loads codecs differently from regular plugins, and something is missing for it...
I've also tried placing the codec jar in:
[solrDir]\example\solr-webapp\webapp\WEB-INF\lib\
as well as inside the WEB-INF\lib folder of the solr.war file, which is found in:
[solrDir]\example\webapps\
but I'm still getting the same error.
So basically, my question is, what's missing so that my codec jar is picked up by Solr?
Thanks
I'm going to answer this question myself, since it has become somewhat moot due to some benchmarks I've made. Long story short: I had arrived at the (wrong) conclusion that for really large stored fields, Solr 3.x and 4.0 (without field compression) is faster than Solr 4.1 and above (with field compression). However, that was mostly due to some errors in my benchmarks. After repeating them, I obtained results showing that going from non-compressed to compressed fields, even for very large stored fields, makes indexing only 0% to 15% slower, which is really not bad at all, considering that queries on the compressed-field indexes are afterwards 10-20% faster (the document-fetching part).
Also, here are some remarks on how to speed up indexing:
Use the DataImportHandler plugin. It bypasses the Solr REST (HTTP-based) API and writes directly to the Lucene index.
Check out said plugin's sources to see how it accomplishes this, and write your own plugin if the DataImportHandler doesn't meet your needs.
If for whatever reason you want to stick to the Solr REST API, use ConcurrentUpdateSolrServer and play around with the queue size and thread count parameters. It will normally be a lot faster (up to 200% in my case) than the basic HttpSolrServer.
Don't forget to enable javabin data serialization, like this:
ConcurrentUpdateSolrServer solrServer = new ConcurrentUpdateSolrServer("http://some.solr.host:8983/solr", 100, 4);
solrServer.setRequestWriter(new BinaryRequestWriter());
I'm explicitly showing the code because I believe there might be a small bug here:
If you look at the ConcurrentUpdateSolrServer constructor, you'll see that by default it already sets the request writer to binary:
//the ConcurrentUpdateSolrServer initializes HttpSolrServer objects using this constructor:
public HttpSolrServer(String baseURL, HttpClient client) {
this(baseURL, client, new BinaryResponseParser());
}
However, after debugging I've noticed that if you don't explicitly call the setRequestWriter method with a BinaryRequestWriter argument, it will still serialize requests as XML.
Going from XML to binary serialization reduces the size of my documents about 3 times as they are sent to the server, and makes my index times for this case about 150-200% faster.
I have recently tried, and succeeded, to get something very similar to work. The only difference is that I wanted to enable the best compression instead of no compression, since Solr defaults to the fastest compression. I also got the "SPI class [...] does not exist" error at some point, and here is what I found out from various articles, including the ones you have linked to.
Lucene uses SPI to find the codec classes to load. It requires the list of codec classes to be declared in a file named "org.apache.lucene.codecs.Codec", and that file must be on the classpath. To get Solr to load the file: when you create your jar "myJarWithCodec-1.10.1.jar", make sure it contains a file at "META-INF/services/org.apache.lucene.codecs.Codec". The file should have one fully qualified class name per line, like this:
org.apache.lucene.codecs.lucene3x.Lucene3xCodec
org.apache.lucene.codecs.lucene40.Lucene40Codec
org.apache.lucene.codecs.lucene41.Lucene41Codec
org.apache.lucene.codecs.lucene42.Lucene42Codec
fr.company.project.solr.transformers.utils.UncompressedStorageCodec
And in solrconfig.xml, replace:
<codecFactory class="solr.SchemaCodecFactory" />
with:
<codecFactory class="fr.company.project.solr.transformers.utils.UncompressedStorageCodec" />
You might also need to remove postingsFormat="UncompressedStorageCodec" from schema.xml if Solr complains; I think that attribute specifies the postings format, not the codec. Hope it helps.
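One caveat on the <codecFactory> snippet above: as far as I know, Solr expects the configured class to extend org.apache.solr.core.CodecFactory rather than being a Codec itself, so a small wrapper may be needed along these lines (a sketch only; the package and codec class names are taken from the question):
package fr.company.project.solr.transformers.utils;

import org.apache.lucene.codecs.Codec;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.core.CodecFactory;

// Hypothetical factory so solrconfig.xml can reference the custom codec via
// <codecFactory class="fr.company.project.solr.transformers.utils.UncompressedStorageCodecFactory" />
public class UncompressedStorageCodecFactory extends CodecFactory {

    @Override
    public void init(NamedList args) {
        // no configuration parameters needed for this sketch
    }

    @Override
    public Codec getCodec() {
        return new UncompressedStorageCodec();
    }
}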

How to configure XsltUpdateRequestHandler

I'm looking for a simple example of configuring XsltUpdateRequestHandler.
The Solr config in 3.4.0 is minimal:
<!-- XSLT Update Request Handler
Transforms incoming XML with stylesheet identified by tr=
-->
<requestHandler name="/update/xslt"
startup="lazy"
class="solr.XsltUpdateRequestHandler" />
And the Solr wiki page is virtually non-existent ("TODO: Write a better documentation"):
http://wiki.apache.org/solr/XsltUpdateRequestHandler
I guess I really want to know how to point to a specific XSL file, because I find the line "Transforms incoming XML with stylesheet identified by tr=" a bit cryptic.
You could simply add an XSL style sheet to the solr/conf/xslt directory.
You can then use the XsltUpdateRequestHandler, and specify that stylesheet when you index the document.
For example:
curl "http://localhost:8983/solr/update/xslt?commit=true&tr=rss2solr.xsl" -H "Content-Type: text/xml" --data-binary #blogrss.xml
For details, a nice article explains it in depth (free to download).
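The stylesheet itself just has to turn the incoming XML into Solr's <add><doc> update format. A minimal sketch of what a hypothetical rss2solr.xsl might look like (the input element names assume a standard RSS feed, and the Solr field names are illustrative):
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>
  <!-- Map each RSS item to a Solr document in the standard update format -->
  <xsl:template match="/">
    <add>
      <xsl:for-each select="/rss/channel/item">
        <doc>
          <field name="id"><xsl:value-of select="link"/></field>
          <field name="title"><xsl:value-of select="title"/></field>
          <field name="text"><xsl:value-of select="description"/></field>
        </doc>
      </xsl:for-each>
    </add>
  </xsl:template>
</xsl:stylesheet>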

Indexing PDF with Solr

Can anyone point me to a tutorial?
My main experience with Solr is indexing CSV files. But I cannot find any simple instructions/tutorial to tell me what I need to do to index PDFs.
I have seen this: http://wiki.apache.org/solr/ExtractingRequestHandler
But it makes very little sense to me. Do I need to install Tika?
I'm lost - please help.
With solr-4.9 (the latest version as of now), extracting data from rich documents like PDFs, spreadsheets (xls, xlsx family), presentations (ppt, pptx), and documents (doc, txt, etc.) has become fairly simple.
The sample code examples provided in the downloaded archive from
here contain a basic Solr template project to get you started quickly.
The necessary configuration changes are as follows:
1. Change solrconfig.xml to include the following lines:
<lib dir="<path_to_extraction_libs>" regex=".*\.jar" />
<lib dir="<path_to_solr_cell_jar>" regex="solr-cell-\d.*\.jar" />
and create a request handler as follows:
<requestHandler name="/update/extract"
startup="lazy"
class="solr.extraction.ExtractingRequestHandler" >
<lst name="defaults" />
</requestHandler>
2. Add the necessary jars from the Solr example to your project.
3. Define the schema as per your needs and fire a query like:
curl "http://localhost:8983/solr/collection1/update/extract?literal.id=1&literal.filename=testDocToExtractFrom.txt&literal.created_at=2014-07-22+09:50:12.234&commit=true" -F "myfile=#testDocToExtractFrom.txt"
Go to the GUI portal and query to see the indexed contents.
Let me know if you face any problems.
You could use the DataImportHandler. The DataImportHandler is defined in solrconfig.xml; its configuration lives in a separate XML config file (data-config.xml).
To index PDFs you could:
1) crawl the directory to find all the PDFs, using the FileListEntityProcessor, or
2) read the PDFs from a "content/index" XML file, using the XPathEntityProcessor.
If you have the list of related PDFs, use the TikaEntityProcessor (see the sketch below).
Look at this http://solr.pl/en/2011/04/04/indexing-files-like-doc-pdf-solr-and-tika-integration/ (example with ppt) and this: Solr : data import handler and solr cell
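For reference, a minimal data-config.xml sketch combining the FileListEntityProcessor and TikaEntityProcessor (the baseDir and the Solr field names are illustrative, not from the question):
<dataConfig>
  <dataSource type="BinFileDataSource" name="bin" />
  <document>
    <!-- Walk the directory tree and emit one row per PDF file -->
    <entity name="files" processor="FileListEntityProcessor"
            baseDir="/path/to/pdfs" fileName=".*\.pdf" recursive="true"
            rootEntity="false" dataSource="null">
      <!-- Hand each file to Tika and map the extracted body text to a field -->
      <entity name="tika" processor="TikaEntityProcessor" dataSource="bin"
              url="${files.fileAbsolutePath}" format="text">
        <field column="text" name="content" />
      </entity>
    </entity>
  </document>
</dataConfig>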
The hardest part of this is getting the metadata from the PDFs; using a tool like Aperture simplifies this. There must be tonnes of these tools.
Aperture is a Java framework for extracting and querying full-text content and metadata from PDF files.
Aperture grabbed the metadata from the PDFs and stored it in XML files.
I parsed the XML files using lxml and posted them to Solr.
Use the Solr ExtractingRequestHandler. This uses Apache Tika to parse the PDF file. I believe it can pull out the metadata, etc. You can also pass through your own metadata.
Extracting Request Handler
import java.io.File;
import java.io.IOException;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;
import org.apache.solr.common.util.NamedList;

public class SolrCellRequestDemo {
    public static void main(String[] args) throws IOException, SolrServerException {
        SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/my_collection").build();
        // Send the PDF to the extracting handler registered at /update/extract
        ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
        req.addFile(new File("my-file.pdf"), "application/pdf");
        // extractOnly=true (ExtractingParams.EXTRACT_ONLY) returns the extracted
        // content without indexing it
        req.setParam("extractOnly", "true");
        NamedList<Object> result = client.request(req);
        System.out.println("Result: " + result);
    }
}
This may help.
Apache Solr can now index all sorts of binary files, like PDF, Word, etc. Check out this doc:
https://lucene.apache.org/solr/guide/8_5/uploading-data-with-solr-cell-using-apache-tika.html

Overlaying Multiple KML files on google Map displayed in the browser

I need to know the procedure for overlaying multiple KML files on a single Google Map displayed in the browser. The KML files can point to different locations, e.g. KML1 for North America and KML2 for Asia. Could anyone help me out with this?
I think you are looking for this:
http://code.google.com/apis/kml/documentation/kml_tut.html#network_links
example:
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
<Folder>
<name>Network Links</name>
<visibility>0</visibility>
<open>0</open>
<description>Network link example 1</description>
<NetworkLink>
<name>Random Placemark</name>
<visibility>0</visibility>
<open>0</open>
<description>A simple server-side script that generates a new random
placemark on each call</description>
<refreshVisibility>0</refreshVisibility>
<flyToView>0</flyToView>
<Link>
<href>http://yourserver.com/map1.kml</href>
</Link>
</NetworkLink>
<NetworkLink>
<name>Random Placemark</name>
<visibility>0</visibility>
<open>0</open>
<description>A simple server-side script that generates a new random
placemark on each call</description>
<refreshVisibility>0</refreshVisibility>
<flyToView>0</flyToView>
<Link>
<href>http://yourserver.com/map2.kml</href>
</Link>
</NetworkLink>
</Folder>
</kml>
You can do it with the Google Maps JavaScript API. You need to create an overlay with google.maps.KmlLayer for each KML file; a minimal sketch follows the links below.
See this example: http://code.google.com/apis/maps/documentation/javascript/examples/layer-kml.html
API Documentation:
http://code.google.com/apis/maps/documentation/javascript/overlays.html#KMLLayers
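A minimal sketch with the JavaScript API (the KML URLs are the placeholders from the NetworkLink example above, and a page element with id "map" is assumed; note that KmlLayer only renders KML that is publicly reachable, since Google's servers fetch it):
// One KmlLayer per KML file; preserveViewport stops the map from re-zooming
// to each layer's bounds as it loads.
var map = new google.maps.Map(document.getElementById('map'), {
  center: { lat: 20, lng: 0 },
  zoom: 2
});

var kmlNorthAmerica = new google.maps.KmlLayer({
  url: 'http://yourserver.com/map1.kml',
  map: map,
  preserveViewport: true
});

var kmlAsia = new google.maps.KmlLayer({
  url: 'http://yourserver.com/map2.kml',
  map: map,
  preserveViewport: true
});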
