How to log the required part of a file using Apache Camel? - apache-camel

How can I log part of the incoming data to the console and to a log file?
Suppose I am getting input from a file source and it contains sales orders. I want to log all the sales order numbers.
Input:
<root>
<records>
<salesorderno>10002</salesorderno>
</records>
<records>
<salesorderno>10005</salesorderno>
</records>
<records>
<salesorderno>10032</salesorderno>
</records>
</root>

You could use an XPath expression to get the values. Something like:
from("file:inbox")
.setHeader("salesordernos", xpath("/root/records/salesorderno"))
.to("log:loggerName?showBody=true&showHeaders=true");

Related

Having an issue with XML containing nested objects to POCO (de)serialisation (VB.Net)

I've created an SP in SQL Server that returns its results as XML. I decided to do this as the information includes contacts and addresses, and I wanted to reduce the amount of data I get back.
<Accounts>
<Account>
<Company />
<ContactNumber />
<Addresses>
<Address>
<Line1 />
....
</Address>
</Addresses>
<Contacts>
<Contact>
<Name />
....
</Contact>
</Account>
</Accounts>
I have found SqlCommand.ExecuteXmlReader, but I'm confused as to how to deserialise this into my POCO. Can someone point me at what my next step is? (The POCO was created by the Paste XML As Classes menu item in VS2019.)
My Google fu is letting me down, as I'm not sure what I should be looking for to understand how to deserialise the XML into something that will let me use For Each Account In AccountsClass type logic.
Any help is greatly appreciated.
PS The XML above is just a sample to show what I'm doing. The actual data is over 70 fields and with two FK joins the initial 40000 rows is well in excess of 1.8 million once selected as a normal dataset.
EDIT: Just in case someone stumbles on this and is in the same situation I was in:
When preparing a sample record for Paste XML As Classes, make sure you have more than one record if you are expecting something similar to my example above. (The generated class changes to support more than one record.)
You get very different results when searching for 'deserialise' during your research; this small change is what led me to figure out the issue.
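For anyone wanting to see the shape of that "deserialise, then loop" pattern: the question is VB.Net, but the idea is language-neutral. A minimal illustrative sketch in Java with JAXB, assuming a jakarta.xml.bind implementation is on the classpath (the class and field names below are hypothetical stand-ins for the generated POCOs):

import jakarta.xml.bind.JAXBContext;
import jakarta.xml.bind.annotation.XmlElement;
import jakarta.xml.bind.annotation.XmlRootElement;
import java.io.StringReader;
import java.util.List;

@XmlRootElement(name = "Accounts")
class Accounts {
    // becomes a collection because the sample XML used to generate the
    // classes contained more than one <Account> record
    @XmlElement(name = "Account")
    public List<Account> accounts;
}

class Account {
    @XmlElement(name = "Company")
    public String company;
}

public class AccountsDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<Accounts>"
                + "<Account><Company>Acme</Company></Account>"
                + "<Account><Company>Globex</Company></Account>"
                + "</Accounts>";
        Accounts accounts = (Accounts) JAXBContext.newInstance(Accounts.class)
                .createUnmarshaller().unmarshal(new StringReader(xml));
        // the "For Each Account In Accounts" logic from the question
        for (Account a : accounts.accounts) {
            System.out.println(a.company);
        }
    }
}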

What does the error "The dataset refers to a shared dataset which is not available." mean?

I have a report template (.rdl file) that has an embedded dataset 'X' that refers to a shared dataset 'Y' (.rsd file). Both files are located in the same folder.
When I try to generate a report, I get the error "The dataset 'X' refers to a shared dataset 'Y' which is not available."
The datasets do not contain comma-separated strings or empty values.
UPDATE (2019-06-07): contrary to the above, the data did contain empty values. I didn't realize a filter was emptying the final result.
I've made sure that the data source is shared, as well.
Other shared datasets are recognized, but this one isn't.
I've looked at database permissions, but I've been able to run the SQL queries from the datasets in SSMS without problems, which seems to indicate that the permissions are valid. Perhaps the application using the datasets needs specific database permissions?
This is the problematic dataset:
<?xml version="1.0" encoding="utf-8"?>
<SharedDataSet xmlns:rd="http://schemas.microsoft.com/SQLServer/reporting/reportdesigner" xmlns="http://schemas.microsoft.com/sqlserver/reporting/2010/01/shareddatasetdefinition">
<DataSet Name="">
<Query>
<DataSourceReference>XXX</DataSourceReference>
<DataSetParameters>
<DataSetParameter Name="#Foo">
<ReadOnly>false</ReadOnly>
<Nullable>false</Nullable>
<OmitFromQuery>false</OmitFromQuery>
<rd:DbType>Object</rd:DbType>
</DataSetParameter>
</DataSetParameters>
<CommandText>***a SQL query***</CommandText>
</Query>
<Fields>***some fields***
</Fields>
<Filters>***some filters***
</Filters>
</DataSet>
</SharedDataSet>
This one seems to pose no problems:
<?xml version="1.0" encoding="utf-8"?>
<SharedDataSet xmlns:rd="http://schemas.microsoft.com/SQLServer/reporting/reportdesigner" xmlns="http://schemas.microsoft.com/sqlserver/reporting/2010/01/shareddatasetdefinition">
<DataSet Name="">
<Query>
<DataSourceReference>XXX</DataSourceReference>
<DataSetParameters>
<DataSetParameter Name="#FooId">
<ReadOnly>false</ReadOnly>
<Nullable>false</Nullable>
<OmitFromQuery>false</OmitFromQuery>
<rd:DbType>Object</rd:DbType>
</DataSetParameter>
<DataSetParameter Name="#BarId">
<ReadOnly>false</ReadOnly>
<Nullable>false</Nullable>
<OmitFromQuery>false</OmitFromQuery>
<rd:DbType>Object</rd:DbType>
</DataSetParameter>
</DataSetParameters>
<CommandText>***SQL QUERY***</CommandText>
</Query>
<Fields>***some fields***
</Fields>
<Filters>
<Filter>***some filters***
</Filter>
</Filters>
</DataSet>
</SharedDataSet>
UPDATE (2019-06-06): Apparently, removing the filters and ensuring that the query does not yield an empty result fixes the error. If there are filter(s) or if the query yields an empty result, the error returns. I have no clue yet why this is so.
UPDATE (2019-06-07): The filter caused the dataset to return an empty set of records (0 rows). This is what seems to cause the error. In all tests I did, cases where there's at least one row of data worked and cases where there's 0 rows showed the error.

How to convert json array to xml array for post request?

I have the following post data for a JSON request, which is working fine.
"customer"=>{"subdomain"=>"Test", "firstname"=>"john",
"lastname"=>"doe", "email"=>"john.doe#example.com",
"company"=>"Sample", "default_language"=>"en",
"active_modules"=>["cmdb", "evm", "itil"]}
I have also enabled XML for my server, so I want to respond to XML post requests too. I tried to convert the above JSON data to XML, but it's not as expected.
<customer>
<active_modules>
<element>cmdb</element>
<element>evm</element>
<element>itil</element>
</active_modules>
<company>Sample</company>
<default_language>en</default_language>
<email>john.doe@example.com</email>
<firstname>john</firstname>
<lastname>doe</lastname>
<subdomain>Test</subdomain>
</customer>
The problem is with the array element. How can I convert this array element so that it passes exactly the same data to the server as the JSON request?
For an array, you need to pass the same node multiple times.
<customer>
<active_modules>cmdb</active_modules>
<active_modules>evm</active_modules>
<active_modules>itil</active_modules>
<company>Sample</company>
<default_language>en</default_language>
<email>john.doe@example.com</email>
<firstname>john</firstname>
<lastname>doe</lastname>
<subdomain>Test</subdomain>
</customer>
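If you are generating the XML programmatically, most XML binders can produce this repeated-element form for a collection once list wrapping is disabled. A minimal sketch using Jackson's XmlMapper, assuming the jackson-dataformat-xml dependency is available (the Customer class below simply mirrors the payload from the question):

import com.fasterxml.jackson.dataformat.xml.XmlMapper;
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlElementWrapper;
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlProperty;
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlRootElement;
import java.util.List;

@JacksonXmlRootElement(localName = "customer")
class Customer {
    public String subdomain = "Test";
    public String firstname = "john";
    public String lastname = "doe";
    public String email = "john.doe@example.com";
    public String company = "Sample";
    @JacksonXmlProperty(localName = "default_language")
    public String defaultLanguage = "en";
    // useWrapping = false emits one <active_modules> element per list entry
    // instead of a single wrapper containing <element> children
    @JacksonXmlElementWrapper(useWrapping = false)
    @JacksonXmlProperty(localName = "active_modules")
    public List<String> activeModules = List.of("cmdb", "evm", "itil");
}

public class XmlArrayDemo {
    public static void main(String[] args) throws Exception {
        // prints <customer>...<active_modules>cmdb</active_modules>...</customer>
        System.out.println(new XmlMapper().writeValueAsString(new Customer()));
    }
}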

To filter out the right records from a CSV file and hence to perform the required operation using Anypoint Studio

I am working on a flow that is supposed to take a CSV file from the file system and insert the contained data into a database. Some of the records in the file have the wrong format (e.g. the wrong number of columns) and hence must be written to a separate text file for analysis.
I have created a flow that inserts all the good records into the database, but it does not write the bad records to a file. I am a beginner, so I am not sure how to proceed.
The XML code:
<flow name="fileFlow">
<file:inbound-endpoint path="src/main/resources/Input" moveToPattern="#[message.inboundProperties.originalFilename].zip" moveToDirectory="src/main/resources/Output" responseTimeout="10000" metadata:id="b85f6b05-1679-4b60-8bbe-30e6d2c68df7" doc:name="File">
<file:filename-regex-filter pattern=".*csv" caseSensitive="true"/>
</file:inbound-endpoint>
<file:file-to-string-transformer doc:name="File to String"/>
<set-payload value="#[payload.replaceAll(",,", ", ,")]" doc:name="Set Payload"/>
<splitter expression="#[rows=StringUtils.split(message.payload,'\r\n');ArrayUtils.subarray(rows,1,rows.size())]" doc:name="Splitter"/>
<flow-ref name="fileFlow1" doc:name="fileFlow1"/>
<catch-exception-strategy doc:name="Insert the bad record into a file">
<byte-array-to-string-transformer doc:name="Byte Array to String"/>
<set-session-variable variableName="var" value="#[payload+'var']" doc:name="Session Variable"/>
<file:outbound-endpoint path="src/main/resources/Output" outputPattern="BadRecords.txt" responseTimeout="10000" doc:name="File"/>
<flow-ref name="fileFlow1" doc:name="fileFlow1"/>
</catch-exception-strategy>
</flow>
<flow name="fileFlow1">
<expression-transformer expression="#[StringUtils.split(message.payload,',')]" doc:name="Expression"/>
<db:insert config-ref="MySQL_Configuration" doc:name="Database">
<db:parameterized-query><![CDATA[insert into GoodRecords values(#[message.payload[0]], #[message.payload[1]], #[message.payload[2]], #[message.payload[3]], #[message.payload[4]], #[message.payload[5]], #[message.payload[6]], #[message.payload[7]], #[message.payload[8]], #[message.payload[9]], #[message.payload[10]], #[message.payload[11]], #[message.payload[12]], #[message.payload[13]], #[message.payload[14]], #[message.payload[15]], #[message.payload[16]], #[message.payload[17]], #[message.payload[18]], #[message.payload[19]], #[message.payload[20]])]]></db:parameterized-query>
</db:insert>
<logger message="#[payload] " level="INFO" doc:name="Logger"/>
</flow>
The flow structure:
Flow diagram
Personally, I think the flow I've created is very inefficient and wrong.
How do I write the bad records to the file (assuming the given flow is otherwise right)?
I also wanted to use bulk mode for this use case (as there are around 1000-odd records to work with), but I am not sure how to proceed with that either.
In your code you have used a private flow to insert the records into the database, so any exception that occurs during insertion will not be caught by the parent flow's exception strategy. You can give the private flow its own exception strategy, or use a sub-flow instead, since a sub-flow inherits the calling flow's exception strategy.
Another, cleaner solution is batch processing: process the records in a batch job and create a separate step to handle all the failed records.

libxml xmlXPathEvalExpression order

I've started using libxml in C, and I'm using the xmlXPathEvalExpression function to evaluate XPath expressions.
My XML file actually represents a table, with each child node representing a row in that table and its attributes holding the corresponding values, so the order is important.
I couldn't find information about that function with regard to ordering. Does it return the nodes in document order?
For example, given the following XML file:
<?xml version="1.0" encoding="UTF-8"?>
<Root>
<TABLE0>
<Row Item0="1" Item1="2"/>
<Row Item0="2" Item1="3"/>
<Row Item0="3" Item1="4"/>
</TABLE0>
<TABLE1>
<Row Item0="1" Item1="12"/>
<Row Item0="6" Item1="15"/>
</TABLE1>
</Root>
Can I be sure that after evaluating /Root/TABLE0/* and getting the nodeset, calling nodeSet->nodeTab[0] would get the first row, nodeSet->nodeTab[1] would get the second and so on?
If not, is there a way to sort it by document order?
Thanks
Yes, the results of XPath expressions evaluated with xmlXPathEvalExpression or compiled with xmlXPathCompile are always sorted in document order. (Internally, they're compiled by calling xmlXPathCompileExpr with a true sort flag.)
