I'm trying to set up a config file for a custom module - do I need a unique model for each 'resourceModel', or is it possible to have multiple table entities per model?
Is it possible to get something like this to work:
<config>
    ...
    <models>
        <namespace>
            <class>Namespace_Module_Model</class>
            <resourceModel>module_mysql4</resourceModel>
        </namespace>
        <module_mysql4>
            <class>Namespace_Module_Model_Mysql4</class>
            <entities>
                <table_1>
                    <table>table_1</table>
                </table_1>
                <table_2>
                    <table>table_2</table>
                </table_2>
                <table_3>
                    <table>table_3</table>
                </table_3>
                ...
            </entities>
        </module_mysql4>
    </models>
    ...
</config>
and then dynamically switch between the tables through the model?
and related: anyone know what the possible children of the resource model node are, and their properties? I've seen 'entities', 'associations' and 'items' - thanks
It's not really clear what you're asking here. Magento has a basic one-resource-to-one-table style of resource, and a one-resource-to-many-tables style (constructed in a specific manner) for EAV-style models.
The scenario you're describing above isn't directly supported by the system, but if you wanted something like it, there's nothing stopping you from implementing a resource that works any way you want.
As for the possible children, create the simple config viewer described here to get a dump of the entire merged config, and then use an XPath viewer to examine all the nodes (and their children) that you're interested in.
Thanks for the response & apologies if the question was unclear. After a few hours of debugging, I have it working with the following structure:
<models>
    <modulename>
        <class>Namespace_Modulename_Model</class>
        <resourceModel>modulename_mysql4</resourceModel>
    </modulename>
    <modulename_type1>
        <class>Namespace_Modulename_Model_Type1</class>
        <resourceModel>modulename_mysql4</resourceModel>
    </modulename_type1>
    <modulename_type2>
        <class>Namespace_Modulename_Model_Type2</class>
        <resourceModel>modulename_mysql4</resourceModel>
    </modulename_type2>
    <modulename_mysql4>
        <class>Namespace_Modulename_Model_Mysql4</class>
        <entities>
            <modulename>
                <table>modulename</table>
            </modulename>
            <modulename_type1>
                <table>modulename_type1</table>
            </modulename_type1>
            <modulename_type2>
                <table>modulename_type2</table>
            </modulename_type2>
        </entities>
    </modulename_mysql4>
</models>
So yes - there is a single table entity for each model declared (one model, one resource), but I would have assumed that each additional model/resourceModel combination would require its own separate Model_Mysql4 class in its own modulename_mysql4-style node, like so:
<models>
    <modulename>
        <class>Namespace_Modulename_Model</class>
        <resourceModel>modulename_mysql4</resourceModel>
    </modulename>
    <modulename_type1>
        <class>Namespace_Modulename_Model_Type1</class>
        <resourceModel>modulename_mysql4_type1</resourceModel>
    </modulename_type1>
    <modulename_type2>
        <class>Namespace_Modulename_Model_Type2</class>
        <resourceModel>modulename_mysql4_type2</resourceModel>
    </modulename_type2>
    <modulename_mysql4>
        <class>Namespace_Modulename_Model_Mysql4</class>
        <entities>
            <modulename>
                <table>modulename</table>
            </modulename>
        </entities>
    </modulename_mysql4>
    <modulename_mysql4_type1>
        <class>Namespace_Modulename_Model_Mysql4_Type1</class>
        <entities>
            <modulename_type1>
                <table>modulename_type1</table>
            </modulename_type1>
        </entities>
    </modulename_mysql4_type1>
    <modulename_mysql4_type2>
        <class>Namespace_Modulename_Model_Mysql4_Type2</class>
        <entities>
            <modulename_type2>
                <table>modulename_type2</table>
            </modulename_type2>
        </entities>
    </modulename_mysql4_type2>
</models>
but that is not the case. Would love to hear a play-by-play explanation. Thanks for the help!
Or:
<resources>
    <modulename_setup>
        <setup>
            <module>modulename</module>
        </setup>
        <connection>
            <use>core_setup</use>
        </connection>
    </modulename_setup>
    <modulename_write>
        <connection>
            <use>core_write</use>
        </connection>
    </modulename_write>
    <modulename_read>
        <connection>
            <use>core_read</use>
        </connection>
    </modulename_read>
</resources>
The short question is:
I want to disable stored field compression on a Solr 4.3.0 index. After reading:
http://blog.jpountz.net/post/35667727458/stored-fields-compression-in-lucene-4-1
http://wiki.apache.org/solr/SimpleTextCodecExample
http://www.opensourceconnections.com/2013/06/05/build-your-own-lucene-codec/
I've decided to follow the path described there and make my own codec. I'm pretty sure I've followed all the steps; however, when I actually try to use my codec (affectionately named "UncompressedStorageCodec"), I get the following error in the Solr log:
java.lang.IllegalArgumentException: A SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'UncompressedStorageCodec' does not exist. You need to add the corresponding JAR file supporting this SPI to your classpath.
The current classpath supports the following names: [Pulsing41, SimpleText, Memory, BloomFilter, Direct, Lucene40, Lucene41]
at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:109)
From the output I gather that Solr is not picking up the jar with my custom codec, and I don't understand why.
Here are all the horrific details:
I've created a class like this:
package fr.company.project.solr.transformers.utils;

import org.apache.lucene.codecs.FilterCodec;
import org.apache.lucene.codecs.StoredFieldsFormat;
import org.apache.lucene.codecs.lucene40.Lucene40StoredFieldsFormat;
import org.apache.lucene.codecs.lucene42.Lucene42Codec;

public class UncompressedStorageCodec extends FilterCodec {
    // Delegate to Lucene42Codec, but use the uncompressed Lucene40 stored fields format.
    private final StoredFieldsFormat fieldsFormat = new Lucene40StoredFieldsFormat();

    protected UncompressedStorageCodec() {
        super("UncompressedStorageCodec", new Lucene42Codec());
    }

    @Override
    public StoredFieldsFormat storedFieldsFormat() {
        return fieldsFormat;
    }
}
in package: "fr.company.project.solr.transformers.utils"
The FQDN of "FilterCodec" is: "org.apache.lucene.codecs.FilterCodec"
I've created a basic jar file out of this (exported it as jar from Eclipse).
The Solr installation I'm using to test this is the basic Solr 4.3.0 unzipped, started via its embedded Jetty server and using the example core.
I've placed my jar with the codec in [solrDir]\dist
In:
[solrDir]\example\solr\myCore\conf\solrconfig.xml
I've added the line:
<lib dir="../../../dist/" regex="myJarWithCodec-1.10.1.jar" />
Then in the schema.xml file, I've declared some fieldTypes that should use this codec like so:
<fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true" postingsFormat="UncompressedStorageCodec"/>
<fieldType name="string_lowercase" class="solr.TextField" positionIncrementGap="100" omitNorms="true" postingsFormat="UncompressedStorageCodec">
<!--...-->
</fieldType>
Now, if I use the DataImportHandler component to import some data into Solr, at commit time it tells me:
java.lang.IllegalArgumentException: A SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'UncompressedStorageCodec' does not exist. You need to add the corresponding JAR file supporting this SPI to your classpath.
The current classpath supports the following names: [Pulsing41, SimpleText, Memory, BloomFilter, Direct, Lucene40, Lucene41]
at org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:109)
What I find strange is that the above-mentioned codec jar also contains some Transformers for the DataImportHandler component, and those are picked up fine. Also, other jars placed in the dist folder (and declared the same way in solrconfig.xml), like the JDBC driver, are picked up fine. I'm guessing that for the codec there's this SPI mechanism which loads things differently, and there's something it's missing...
I've also tried placing the codec jar in:
[solrDir]\example\solr-webapp\webapp\WEB-INF\lib\
as well as inside the WEB-INF\lib folder of the solr.war file, which is found in:
[solrDir]\example\webapps\
but I'm still getting the same error.
So basically, my question is, what's missing so that my codec jar is picked up by Solr?
Thanks
I'm going to answer this question myself since it has sort of become moot due to some benchmarks I've made. Long story short: I had arrived at the (wrong) conclusion that for really large stored fields, Solr 3.x and 4.0 (without field compression) is faster than Solr 4.1 and above (with field compression). However, that was mostly due to some errors in my benchmarks. After repeating them, I obtained results showing that going from non-compressed to compressed fields, even for very large stored fields, makes index time only 0% to 15% slower, which is really not bad at all, considering that queries on the compressed-field indexes are afterwards 10-20% faster (the document-fetching part).
Also, here are some remarks on how to speed up indexing:
Use the DataImportHandler plugin. It bypasses the Solr Rest (HTTP based) API and writes directly to the Lucene index.
Check out said plugin's sources to see how it accomplishes this, and write your own plugin if the DataImportHandler doesn't meet your needs.
If for whatever reason you want to stick to the Solr Rest API, use ConcurrentUpdateSolrServer and play around with the queue size and number of threads parameters. It will normally be a lot faster (up to 200% in my case) than the basic HttpSolrServer.
Don't forget to enable the javabin data serialization like this:
ConcurrentUpdateSolrServer solrServer = new ConcurrentUpdateSolrServer("http://some.solr.host:8983/solr", 100, 4);
solrServer.setRequestWriter(new BinaryRequestWriter());
I'm explicitly showing the code because I believe there might be a small bug here:
If you look at the ConcurrentUpdateSolrServer constructor, you'll see that by default it already sets the response parser to binary:
//the ConcurrentUpdateSolrServer initializes HttpSolrServer objects using this constructor:
public HttpSolrServer(String baseURL, HttpClient client) {
this(baseURL, client, new BinaryResponseParser());
}
However, after debugging I noticed that if you don't explicitly call the setRequestWriter method with the BinaryRequestWriter argument, it will still use the XML serializer for requests.
Going from XML to binary serialization reduces the size of my documents about 3 times as they are sent to the server, which makes my index times for this case about 150-200% faster.
I have recently tried and succeeded to get something very similar to work. The only difference is that I want to enable the best compression instead of no compression, and Solr defaults to the fastest compression. I also got the "SPI class [...] does not exist" error at some point, and here is what I have found out from various articles, including the ones you have linked to.
Lucene uses SPI to find the codec classes to load. Lucene requires the list of codec classes be declared in the file "org.apache.lucene.codecs.Codec", and the file must be on the class path. To get Solr to load the file: When you create your JAR file "myJarWithCodec-1.10.1.jar", make sure that it contains a file at "META-INF/services/org.apache.lucene.codecs.Codec". The file should have one full class name per line, like this:
org.apache.lucene.codecs.lucene3x.Lucene3xCodec
org.apache.lucene.codecs.lucene40.Lucene40Codec
org.apache.lucene.codecs.lucene41.Lucene41Codec
org.apache.lucene.codecs.lucene42.Lucene42Codec
fr.company.project.solr.transformers.utils.UncompressedStorageCodec
And in solrconfig.xml, replace:
<codecFactory class="solr.SchemaCodecFactory" />
with:
<codecFactory class="fr.company.project.solr.transformers.utils.UncompressedStorageCodec" />
You might also need to remove postingsFormat="UncompressedStorageCodec" from schema.xml if Solr complains; I think that particular attribute specifies a postings format, not a codec. If you want to verify the SPI registration outside of Solr, see the sketch below. Hope it helps.
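A minimal check (assuming lucene-core and your codec jar are both on the classpath; the class and codec names are the ones from this thread) that lists the codecs visible through SPI and tries to load yours by name:
import java.util.Set;
import org.apache.lucene.codecs.Codec;

public class CodecSpiCheck {
    public static void main(String[] args) {
        // Every codec name registered via META-INF/services/org.apache.lucene.codecs.Codec.
        Set<String> names = Codec.availableCodecs();
        System.out.println("Available codecs: " + names);

        // Throws an IllegalArgumentException similar to the one in the Solr log
        // if the services file was not packaged into the jar.
        Codec codec = Codec.forName("UncompressedStorageCodec");
        System.out.println("Loaded codec: " + codec.getName());
    }
}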
Can anyone point me to a tutorial?
My main experience with Solr is indexing CSV files. But I cannot find any simple instructions/tutorial to tell me what I need to do to index pdfs.
I have seen this: http://wiki.apache.org/solr/ExtractingRequestHandler
But it makes very little sense to me. Do I need to install Tika?
I'm lost - please help.
With solr-4.9 (the latest version as of now), extracting data from rich documents like PDFs, spreadsheets (xls, xlsx family), presentations (ppt, pptx), and documents (doc, txt, etc.) has become fairly simple.
The sample code examples provided in the downloaded archive from here contain a basic Solr template project to get you started quickly.
The necessary configuration changes are as follows:
1. Change solrconfig.xml to include the following lines:
<lib dir="<path_to_extraction_libs>" regex=".*\.jar" />
<lib dir="<path_to_solr_cell_jar>" regex="solr-cell-\d.*\.jar" />
and create a request handler as follows:
<requestHandler name="/update/extract"
startup="lazy"
class="solr.extraction.ExtractingRequestHandler" >
<lst name="defaults" />
</requestHandler>
2. Add the necessary jars from the solrExample to your project.
3. Define the schema as per your needs and fire a query like:
curl "http://localhost:8983/solr/collection1/update/extract?literal.id=1&literal.filename=testDocToExtractFrom.txt&literal.created_at=2014-07-22+09:50:12.234&commit=true" -F "myfile=@testDocToExtractFrom.txt"
Go to the GUI portal and query to see the indexed contents.
Let me know if you face any problems.
You could use the DataImportHandler. The DataImportHandler is defined in solrconfig.xml; the configuration of the DataImportHandler itself lives in a different XML config file (data-config.xml).
For indexing PDFs you could:
1.) crawl the directory to find all the PDFs using the FileListEntityProcessor
2.) read the PDFs from a "content/index" XML file, using the XPathEntityProcessor
If you have the list of related PDFs, use the TikaEntityProcessor
Look at this: http://solr.pl/en/2011/04/04/indexing-files-like-doc-pdf-solr-and-tika-integration/ (example with ppt) and this: Solr: data import handler and solr cell
The hardest part of this is getting the metadata from the PDFs; using a tool like Aperture simplifies this. There must be tonnes of these tools.
Aperture is a Java framework for extracting and querying full-text content and metadata from PDF files.
Aperture grabbed the metadata from the PDFs and stored it in XML files.
I parsed the XML files using lxml and posted them to Solr.
Use the Solr ExtractingRequestHandler. This uses Apache Tika to parse the PDF file. I believe it can pull out the metadata, etc. You can also pass through your own metadata.
Extracting Request Handler
import java.io.File;
import java.io.IOException;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.handler.extraction.ExtractingParams; // from the solr-cell jar

public class SolrCellRequestDemo {
    public static void main(String[] args) throws IOException, SolrServerException {
        SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/my_collection").build();
        // Send the file to the /update/extract handler (Solr Cell / Tika).
        ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
        req.addFile(new File("my-file.pdf"), "application/pdf");
        req.setParam(ExtractingParams.EXTRACT_ONLY, "true");
        NamedList<Object> result = client.request(req);
        System.out.println("Result: " + result);
    }
}
This may help.
Apache Solr can now index all sorts of binary files like PDF, Word, etc. Check out this doc:
https://lucene.apache.org/solr/guide/8_5/uploading-data-with-solr-cell-using-apache-tika.html
I'm new to MyBatis.
I've been trying to configure MyBatis in a web service I'm writing, but with no luck so far.
What I've done already:
UserInfoMapper interface
UserInfoMapper.xml with the mapper namespace pointing at my UserInfoMapper interface, and a select
mybatis-config.xml with a typeAlias to use as the result type in UserInfoMapper.xml
dataSource bean for Oracle (I get connected) in datasourceContext.xml
org.mybatis.spring.mapper.MapperScannerConfigurer bean with basePackage pointing to my UserInfoMapper interface in datasourceContext.xml
sqlSessionFactory bean, i.e. org.mybatis.spring.SqlSessionFactoryBean, with properties for my dataSource and configLocation
userInfoMapper bean, i.e. org.mybatis.spring.mapper.MapperFactoryBean, with a mapperInterface property (value="is.simnn.act.web.ngs.persistence.UserInfoMapper") and an sqlSessionFactory property (ref="sqlSessionFactory") in datasourceContext.xml
Then in my applicationContext.xml I have the following:
<import resource="classpath:META-INF/wsContext.xml" />
<import resource="classpath:META-INF/db/datasourceContext.xml" />
In my test case I keep getting a NullPointerException when I call the jaxws:endpoint, and it leads me to my UserInfoMapper interface.
Any idea or hints to what might be wrong with my config?
Thanks,
Gunnlaugur
It is hard to comment without having more information. Can you post your UserInfoMapper.java interface, your UserInfoMapper.xml and your stack trace, please? Are you certain that the method name in your interface matches the ID of your SELECT in the XML?
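For illustration, the pairing that last point refers to looks roughly like this (a sketch only; the statement id, parameter, and result type are made up - what matters is that the mapper namespace matches the interface's fully qualified name and the method name matches the select id):
package is.simnn.act.web.ngs.persistence;

import java.util.Map;

public interface UserInfoMapper {

    // Must match the statement id in UserInfoMapper.xml, e.g.:
    //
    //   <mapper namespace="is.simnn.act.web.ngs.persistence.UserInfoMapper">
    //     <select id="selectUserInfo" parameterType="string" resultType="map">
    //       SELECT * FROM USER_INFO WHERE USERNAME = #{username}
    //     </select>
    //   </mapper>
    //
    // With the typeAlias from mybatis-config.xml, you would use that alias as the
    // resultType and return the corresponding domain class instead of a Map.
    Map<String, Object> selectUserInfo(String username);
}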
The following code causes an exception when the Job table has no rows.
public List<Job> getAll(int currentPage, int pageSize) {
return this.sessionFactory.getCurrentSession()
.createCriteria(Job.class).addOrder(Order.asc("id"))
.setFirstResult(currentPage * pageSize).setMaxResults(pageSize)
.setFetchSize(pageSize).list();
}
I am using SQL Server and the JTDS driver.
The error I get is:
java.sql.SQLException: ResultSet may only be accessed in a forward direction.
net.sourceforge.jtds.jdbc.JtdsResultSet.checkScrollable(JtdsResultSet.java:319)
net.sourceforge.jtds.jdbc.JtdsResultSet.absolute(JtdsResultSet.java:716)
org.apache.commons.dbcp.DelegatingResultSet.absolute(DelegatingResultSet.java:335)
org.hibernate.loader.Loader.advance(Loader.java:1469)
org.hibernate.loader.Loader.getResultSet(Loader.java:1783)
org.hibernate.loader.Loader.doQuery(Loader.java:662)
org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:224)
org.hibernate.loader.Loader.doList(Loader.java:2211)
org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2095)
org.hibernate.loader.Loader.list(Loader.java:2090)
org.hibernate.loader.criteria.CriteriaLoader.list(CriteriaLoader.java:95)
org.hibernate.impl.SessionImpl.list(SessionImpl.java:1569)
org.hibernate.impl.CriteriaImpl.list(CriteriaImpl.java:283)
The issue is associated with trying to page an empty table.
Drop these:
.setFirstResult(currentPage * pageSize).setMaxResults(pageSize)
.setFetchSize(pageSize)
and you should be able to query the empty table without issue.
If you want to page the data, run a regular query first, then page the data with your query once you know you have data to page; a sketch of that approach is below.
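A rough sketch of that approach, reusing the Job entity and session factory from the question (illustrative only, not tested against your mapping):
// Requires org.hibernate.Session, org.hibernate.criterion.Order,
// org.hibernate.criterion.Projections, java.util.Collections and java.util.List.
public List<Job> getAll(int currentPage, int pageSize) {
    Session session = this.sessionFactory.getCurrentSession();

    // Plain count query first - no paging involved.
    Number total = (Number) session.createCriteria(Job.class)
            .setProjection(Projections.rowCount())
            .uniqueResult();
    if (total == null || total.longValue() == 0) {
        return Collections.emptyList();
    }

    // Only apply paging once we know there is data to page.
    return session.createCriteria(Job.class)
            .addOrder(Order.asc("id"))
            .setFirstResult(currentPage * pageSize)
            .setMaxResults(pageSize)
            .list();
}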
Adding the following property to persistence.xml solved this issue for me (JBoss 7, Hibernate 4):
<property name="hibernate.jdbc.use_scrollable_resultset" value="false" />
An alternative solution is to change the dialect (not checked by me): https://forum.hibernate.org/viewtopic.php?p=2452163
I was having this same problem, and after looking around for a while I was advised by a coworker to change the Hibernate dialect from this:
org.hibernate.dialect.SQLServerDialect
to this
org.hibernate.dialect.SQLServer2005Dialect
and it solved my problem.
Posting just as a note for people to check their dialect.
I have the sqlMap config file like this:
<sqlMap ..............>
    <alias>
        <typeAlias ......../>
    </alias>
    <statements>
        ....
        <sql>....</sql>
        <select cacheModel="cache-select-all">....</select>
        <update>...</update>
        <procedure>...</procedure>
        .....
    </statements>
    <parameterMaps>
        <parameterMap>....</parameterMap>
    </parameterMaps>
    <cacheModel id="cache-select-all" type="LRU" readOnly="true" serialize="false">
        <flushInterval hours="24"/>
        <flushOnExecute statement="InsertIOs"/>
        <!--<property name="CacheSize" value="1000"/>-->
    </cacheModel>
</sqlMap>
I am using iBATIS (.NET, if that matters) and I have one question: where do I place the cacheModel tags? Is there a dedicated wrapper element for them, because placing them like I did doesn't seem to work. What am I doing wrong?
You must reference the cacheModel you defined inside a statement tag, as shown in the following link:
http://ibatis.apache.org/docs/dotnet/datamapper/ch03s08.html
The cacheModel also has to be defined before the select statement that uses it; order does matter here, otherwise the sql map parser wouldn't be able to validate your sql map.