Nested ResultMaps with ambiguous database columns - ibatis

I have a couple of nested ResultMaps in iBatis that have exactly the same database column names. This causes ambiguity and results in incorrect values being retrieved for the different database tables.
For example:
<sqlMap namespace="Shipment">
<resultMap id="consignment" class="com.model.Consignment">
<result property="consignmentId" column="Consignment_cd" />
<result property="shipmentCd" column="Shipment_cd" />
<result property="shipmentUnit" column="Shipment_Unit" />
<result property="location" resultMap="Shipment.size" />
</resultMap>
<resultMap id="size" class="com.model.Size">
<result property="consignmentId" column="Consignment_cd" />
<result property="shipmentCd" column="Shipment_cd" />
<result property="shipmentUnit" column="Shipment_Unit" />
</resultMap>
</sqlMap>
Now when I write my select query joining the Size and Consignment tables, I get the same values returned for Shipment Code and Shipment Unit, even though the two tables hold different values for those columns in the database. Please note that I need the Shipment Code and Unit from both the Size level and the Consignment level pulled up in a single query.
Could someone help me solve this problem?

The only solution I have found for this issue is prefixing the column names with the table name or a short alias. You are writing the queries yourself, right?
Your select would look like:
select consignment.Shipment_cd as consignment_Shipment_cd,
       size.Shipment_cd as size_Shipment_cd
from consignment
join size on whatever
And yes, that gets pretty verbose if you want to fetch a lot of columns in the same query.
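The resultMaps would then reference the aliased names instead of the raw column names. A sketch based on the question's mapping (the aliases must match the ones used in the select):

```xml
<resultMap id="consignment" class="com.model.Consignment">
  <result property="consignmentId" column="consignment_Consignment_cd" />
  <result property="shipmentCd" column="consignment_Shipment_cd" />
  <result property="shipmentUnit" column="consignment_Shipment_Unit" />
  <result property="location" resultMap="Shipment.size" />
</resultMap>
<resultMap id="size" class="com.model.Size">
  <result property="consignmentId" column="size_Consignment_cd" />
  <result property="shipmentCd" column="size_Shipment_cd" />
  <result property="shipmentUnit" column="size_Shipment_Unit" />
</resultMap>
```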


Bypass double quotes inside XML value

Check #lptr's comment for solution
I have this piece of XML from which I need to extract the values and ids using SQL Server:
<root>
<field id="1" value="gfjsdgfdjy duahsd "absdjsd"" />
<field id="37" value="ysgfdyua" />
<field id="13" value="asdas" />
<field id="73" value="fgdgfd" />
<field id="adsf" value="fdsa" />
</root>
This is what I use to extract the values and ids from that XML, which is stored in the xml variable @test, and insert them into a temp table:
insert into #tmp (field, val)
select field.value('@id', 'nvarchar(100)') as fieldID,
       field.value('@value', 'nvarchar(200)') as val
from @test.nodes('root/field') A(field)
That query works fine until there is a value containing double quotes, as in the example above, which throws the following error: XML parsing: line 1, character 108, whitespace expected
Any way of working around that?
I have to mention that I do not create these XMLs by hand but get them from a DB, so any mistakes in their creation are not my fault.
Conformant XML parsers are required to report all well-formedness errors, so any conformant XML parser will reject this ill-formed XML.
You can sometimes get around this by using parsers (which are not conformant XML parsers) that attempt to repair errors. However, I don't know whether any of them can handle this particular problem. Note that in the general case it cannot even be detected; consider:
<root>
<field id="1" value="some " inner " 3" />
<field id="2" value="some " inner= " 3" />
</root>
The second field is well-formed XML (the inner quotes delimit an attribute named inner), so a repairing parser cannot reliably tell which reading was intended.
Whenever a supplier provides you with ill-formed XML, you need to complain, as you would with any other product defect. Usually I have found suppliers very responsive to such error reports. It's surprising how often the XML export capability was entrusted to some junior programmer with no XML knowledge or experience, even if the company concerned is a very professional outfit.
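For reference, a well-formed version of the problematic attribute would have the inner quotes escaped by the producer as character entities:

```xml
<root>
  <field id="1" value="gfjsdgfdjy duahsd &quot;absdjsd&quot;" />
</root>
```

With the quotes escaped this way, the original .nodes()/.value() query works unchanged.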

Dynamic TableName SOLR data import handler

I'm looking to configure SOLR to query a table based on certain data.
I unfortunately have to work with how the Database is setup, but here's what I'm after.
I have a table named Company that will contain a certain "prefix" value.
I want to use that prefix value to determine what tables I should query for the DIH.
As a quick sample:
<entity name="company" query="Select top 1 prefix from Company">
<field name="prefix" column="prefix"/>
<entity name="item" query="select * from ${company.prefix}item">
<field column="ItemID" name="id"/>
<field column="Description" name="description"/>
</entity>
</entity>
However, I only ever seem to get one document processed, despite that table containing over 200,000 rows.
What am I doing wrong?
I think you could achieve this by using a stored procedure. You can call a stored procedure from the DIH, as seen here.
Inside the stored procedure, you can do the table lookup as needed and then return the results of the real query.
Depending on how comfortable you are with MSSQL's SQL, you might be able to put everything into a single SQL query and use that directly in the DIH, but I am not sure about that.
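A sketch of the stored-procedure idea, assuming MSSQL; the procedure name GetPrefixedItems is hypothetical, and the ItemID/Description columns follow the question's DIH config:

```sql
-- Look up the prefix once, then query the prefixed table via dynamic SQL.
CREATE PROCEDURE GetPrefixedItems
AS
BEGIN
    DECLARE @prefix nvarchar(100);
    SELECT TOP 1 @prefix = prefix FROM Company;

    DECLARE @sql nvarchar(max) =
        N'SELECT ItemID, Description FROM ' + QUOTENAME(@prefix + N'item');
    EXEC sp_executesql @sql;
END
```

The DIH entity would then call it with something like query="EXEC GetPrefixedItems", avoiding the ${company.prefix} substitution entirely.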

solr sort on an unrelated entity field

My document structure is like this
<document>
<entity name="entity1" query="query1">
<field column="column1" name="column1" />
<!-- more columns specific to this entity -->
</entity>
<entity name="entity2" query="query2">
<field column="column2" name="column2" />
<!-- more columns specific to this entity -->
</entity>
</document>
In my query involving entity1 columns only, if I add entity2 columns in the sort clause, why should the result be affected at all? My query is only on entity1 columns, which are unrelated to entity2. Is it the case that Solr applies the sort clause to the entire set of documents first and then applies the query condition(s)?
The documentation reads:
If sortMissingLast="false" and sortMissingFirst="false" (the default),
then default lucene sorting will be used which places docs without the
field first in an ascending sort and last in a descending sort.
Can someone please elaborate on the quoted text?
I think the last paragraph of my question had the answer in it.
When the sort field is missing from a document, the default sorting described in the documentation kicks in, which is why my results look "affected".
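If you want documents that lack the sort field to always come last regardless of sort direction, you can set sortMissingLast on the field type in schema.xml. A sketch (the type name string_sml is illustrative):

```xml
<fieldType name="string_sml" class="solr.StrField" sortMissingLast="true" sortMissingFirst="false"/>
```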

Append to a Solr Index

This may be a trivial question but I am trying to append to an existing Solr index and seem to be overwriting what is there every time. I have two databases that I am getting data from and I can import data from each database individually but when I import data from one then immediately import data from the second one, the first is overwritten. I have two dataSources mapped in my db-config.xml file and I am using the standard Admin UI to run the import. My config file looks like this.
<dataConfig>
<dataSource
name="ds-1"
type="JdbcDataSource"
driver="Driver"
url="jdbc_url1"
user="user1"
password="pass1"/>
<dataSource
name="ds-2"
type="JdbcDataSource"
driver="Driver"
url="jdbc_url2"
user="user2"
password="pass2"/>
<document>
<entity name="entity1" dataSource="ds-1" query="SELECT YYY FROM TABLE">
...
</entity>
<entity name="entity2" dataSource="ds-2" query="SELECT ZZZ FROM TABLE">
...
</entity>
</document>
</dataConfig>
What can I do to prevent the original index from being overwritten? I want to incrementally add data from a variety of different sources all the time, so having my indexes get wiped does me no good.
Your issue is that you are probably defining the key for your indexed documents to be the primary key id from the database and the values are overlapping. In order to prevent this, you will need to specify a unique id for Solr. Typically when I have run into this issue in the past, I have used a string field as the id field and append a character or two to the id from the database to make it unique. Example: items from Product Table would have ids like P1, P2, etc. and items from Orders Table would have ids like O1, O2, etc.
You should be able to use the Data Import Handler TemplateTransformer to help accomplish this for you.
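A sketch of the id-prefixing idea with TemplateTransformer, assuming the schema's uniqueKey is a string field (here called solr_id, an illustrative name) and hypothetical Product/Orders tables, so ids from the two databases never collide:

```xml
<entity name="product" dataSource="ds-1" transformer="TemplateTransformer"
        query="SELECT id, name FROM Product">
  <field column="solr_id" template="P${product.id}"/>
</entity>
<entity name="order" dataSource="ds-2" transformer="TemplateTransformer"
        query="SELECT id, total FROM Orders">
  <field column="solr_id" template="O${order.id}"/>
</entity>
```

Templating into a separate solr_id column (rather than overwriting the database id column) keeps the original id available as a normal field.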

Solr: How distinguish between multiple entities imported through DIH

When using the DataImportHandler with SqlEntityProcessor, I want to have several entity definitions going into the same schema, each with a different query.
How can I search both types of entities while also distinguishing their source? Example:
<document>
<entity name="entity1" query="query1">
<field column="column1" name="column1" />
<field column="column2" name="column2" />
</entity>
<entity name="entity2" query="query2">
<field column="column1" name="column1" />
<field column="column2" name="column2" />
</entity>
</document>
How to get data from entity 1 and from entity 2?
As long as your schema fields (e.g. column1, column2) are compatible between different entities, you can just run DataImportHandler and it will populate Solr collection from both queries.
Then, when you query, you will see all entities combined.
If you want to mark which entity came from which source, I would recommend adding another field (e.g. type) and assigning to it different static values in each entity definition using TemplateTransformer.
Also beware of the clean command. By default it deletes everything from the index. As you are populating the index from several sources, you need to make sure it does not delete too much. Use preImportDeleteQuery to delete only the entries whose type field matches the value you set for that entity.
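A sketch combining both suggestions for one of the entities (the field name type and the static value entity1 are illustrative):

```xml
<entity name="entity1" query="query1"
        transformer="TemplateTransformer"
        preImportDeleteQuery="type:entity1">
  <field column="type" template="entity1"/>
  <field column="column1" name="column1" />
  <field column="column2" name="column2" />
</entity>
```

At query time you can then filter with fq=type:entity1, or facet on type to see where each document came from.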
