Unexpected Bindy behaviour

While working with Bindy, I have created a test that provides invalid CSV input.
The documentation (http://camel.apache.org/bindy.html) states:
"If this field is not present in the record, then an error will be raised by the parser with the following information: Some fields are missing (optional or mandatory), line:"
But when I run my test, the invalid line is simply ignored and no errors are raised. I declare three required fields, so I'd expect an error. What am I doing wrong?
Barry
Here are some code snippets to clarify.
The route:
@Override
protected RouteBuilder createRouteBuilder() throws Exception {
    return new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            JaxbDataFormat xmlFormat = new JaxbDataFormat();
            xmlFormat.setContextPath("be.smals.dp.asktutor.response");
            BindyCsvDataFormat csvFormat = new BindyCsvDataFormat("be.smals.dp.asktutor.response");
            context.setTracing(true);
            from("direct:marshall")
                .wireTap("log:test")
                .unmarshal(csvFormat)
                .to("mock:marshall");
            from("direct:unmarshall")
                .marshal(xmlFormat)
                .wireTap("log:test")
                .to("mock:unmarshall");
        }
    };
}
Part of my test:
@Test
public void testTransformFromCSVToXML() throws Exception {
    // Create CSV input and process it
    String payload = AskTutorResponseCSVMother.getInvalidCSVLines();
    template.sendBody("direct:marshall", payload);
    AskTutorsResponse askTutorsResponse =
        ExchangeToObjectHelper.getAskTutorsResponseObjectFromExchange(
            mockMarshall.getExchanges().get(0));
    assertEquals("00000000123", askTutorsResponse.getAskTutorResponses().get(0).getSsinChild());
}
The input CSV string:
public static String getInvalidCSVLines() {
    String payload = "";
    payload += "00000000321;20121212" + NEWLINE;
    payload += "10000000123;10000000321;20131010" + NEWLINE;
    payload += "20000000123;20000000321;20100909" + NEWLINE;
    return payload;
}
And my (straightforward) bindings:
@XmlType
@XmlAccessorType(XmlAccessType.NONE)
@CsvRecord(separator = ";", skipFirstLine = false)
public class AskTutorResponse {
    @DataField(pos = 1, required = true)
    @XmlElement(name = "SINNChild", required = true)
    private String ssinChild;
    @DataField(pos = 2)
    @XmlElement(name = "SINNTutor", required = true)
    private String ssinTutor;
    @DataField(pos = 3)
    @XmlElement(name = "date", required = true)
    private String date;
}

I've had problems where multiple classes with Bindy annotations in the same package failed to unmarshal properly. The reason is that Bindy tried to unmarshal each CSV line into an instance of every annotated class in the package. My first fix was to put each Bindy-annotated class into its own package. I've since written my own Bindy data format that allows a single class to be specified as the unmarshal target.
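As a rough sketch of the same idea (not the original custom data format; to the best of my knowledge, Camel 2.16 and later support this out of the box), BindyCsvDataFormat can be bound to a single annotated class instead of a whole package:

// A sketch, not the answerer's code: binding Bindy to one annotated class
// so each CSV line is unmarshalled only into that type.
BindyCsvDataFormat csvFormat = new BindyCsvDataFormat(AskTutorResponse.class);

from("direct:marshall")
    .unmarshal(csvFormat) // yields AskTutorResponse instances
    .to("mock:marshall");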


Could someone help me build an Apex test for an opportunity Closed/Won trigger?

I'm not a developer and we don't currently have one on staff. So I looked all over the web and modified this Apex class to suit my needs, so that when an opportunity is marked as Closed/Won I get a small message in Slack.
It works great and I'd like to send this to production. However, I didn't realize that I need to include a test for this, and I am stuck on what that means.
Here's my Apex class:
public with sharing class SlackPublisher {
    private static final String SLACK_URL = 'https://hooks.slack.com/services/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';

    public class Oppty {
        @InvocableVariable(label='Opportunity Name')
        public String opptyName;
        @InvocableVariable(label='Opportunity Owner')
        public String opptyOwnerName;
        @InvocableVariable(label='Account Name')
        public String acctName;
        @InvocableVariable(label='Amount')
        public String amount;
    }

    public class UrlMethods {
        String BaseUrl;     // The Url w/o the page (ex: 'https://na9.salesforce.com/')
        String PageUrl;     // The Url of the page (ex: '/apex/SomePageName')
        String FullUrl;     // The full Url of the current page w/ query string parameters
                            // (ex: 'https://na9.salesforce.com/apex/SomePageName?x=1&y=2&z=3')
        String Environment; // Detecting whether code is executing in production or a sandbox
                            // can be extremely useful, especially when working with sensitive data.
        public UrlMethods() { // Constructor
            BaseUrl = URL.getSalesforceBaseUrl().toExternalForm(); // (Example: 'https://na9.salesforce.com/')
        }
    }

    @InvocableMethod(label='Post to Slack')
    public static void postToSlack(List<Oppty> opps) {
        Oppty o = opps[0]; // bulkify the code later
        Map<String,Object> msg = new Map<String,Object>();
        msg.put('text', 'Deal ' + o.opptyName + ' was just Closed/Won' + ':champagne:' + '\n' + 'for a total of ' + '$' + o.amount);
        msg.put('mrkdwn', true);
        String body = JSON.serialize(msg);
        System.enqueueJob(new QueueableSlackPost(SLACK_URL, 'POST', body));
    }

    public class QueueableSlackPost implements System.Queueable, Database.AllowsCallouts {
        private final String url;
        private final String method;
        private final String body;
        public QueueableSlackPost(String url, String method, String body) {
            this.url = url;
            this.method = method;
            this.body = body;
        }
        public void execute(System.QueueableContext ctx) {
            HttpRequest req = new HttpRequest();
            req.setEndpoint(url);
            req.setMethod(method);
            req.setBody(body);
            Http http = new Http();
            HttpResponse res = http.send(req);
        }
    }
}
What I found online as a base for a test was this:
@isTest
private class SlackOpportunityPublisherTest {
    private class RestMock implements HttpCalloutMock {
        public HTTPResponse respond(HTTPRequest req) {
            String fullJson = 'your Json Response';
            HTTPResponse res = new HTTPResponse();
            res.setHeader('Content-Type', 'text/json');
            res.setBody(fullJson);
            res.setStatusCode(200);
            return res;
        }
    }
    static testMethod void service_call() {
        Test.setMock(HttpCalloutMock.class, new RestMock());
        Test.startTest();
        // your web service call code
        Database.GetUpdatedResult r =
            Database.getUpdated(
                'amount',
                Datetime.now().addHours(-1),
                Datetime.now());
        Test.stopTest();
    }
}
When I try to validate this in production, it says I only have 68% coverage and I need 75%. Can someone help me write the test so that I can deploy to production?
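As a rough sketch (my own, with illustrative names, not an accepted answer): a test that calls SlackPublisher.postToSlack behind a callout mock will execute the queueable job when Test.stopTest() runs, which is what pulls QueueableSlackPost.execute() into coverage:

@isTest
private class SlackPublisherTest {
    // Minimal callout mock so the Http.send() inside QueueableSlackPost succeeds.
    private class SlackMock implements HttpCalloutMock {
        public HTTPResponse respond(HTTPRequest req) {
            HTTPResponse res = new HTTPResponse();
            res.setStatusCode(200);
            res.setBody('ok');
            return res;
        }
    }

    @isTest
    static void postToSlackEnqueuesCallout() {
        Test.setMock(HttpCalloutMock.class, new SlackMock());

        SlackPublisher.Oppty o = new SlackPublisher.Oppty();
        o.opptyName = 'Big Deal';
        o.amount = '1000';

        Test.startTest();
        SlackPublisher.postToSlack(new List<SlackPublisher.Oppty>{ o });
        System.assertEquals(1, Limits.getQueueableJobs()); // the job was enqueued
        Test.stopTest(); // runs the queueable synchronously, covering execute()
    }
}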

Why does getSideOutput emit nothing?

I'm using getSideOutput to create a side output stream. Elements are present in the stream ahead of the getSideOutput call, but when I call getSideOutput, no elements are emitted.
The code is as follows:
DataStream<String> asyncTable =
    join3
        .flatMap(new ExtractList())
        .process( // probe code used for testing
            new ProcessFunction<String, String>() {
                @Override
                public void processElement(String value, Context ctx, Collector<String> out)
                        throws Exception {
                    System.out.println(value); // elements are detected here
                }
            })
        .getSideOutput(new OutputTag<>("asyTab", TypeInformation.of(String.class)));
But when getSideOutput is called directly on the flatMap result, as follows:
DataStream<String> asyncTable =
    join3
        .flatMap(new ExtractList())
        .getSideOutput(new OutputTag<>("asyTab", TypeInformation.of(String.class)))
        .process(
            new ProcessFunction<String, String>() {
                @Override
                public void processElement(String value, Context ctx, Collector<String> out)
                        throws Exception {
                    System.out.println(value); // no elements are detected
                }
            });
ExtractList is as follows:
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.util.Collector;

public class ExtractList extends RichFlatMapFunction<NewTableA, String> {
    @Override
    public void flatMap(NewTableA value, Collector<String> out) throws Exception {
        String tableName = "NewTableA";
        String primaryKeyName = "PA1";
        String primaryValue = value.getPA1().toString();
        String result = tableName + ":" + primaryKeyName + ":" + primaryValue;
        // System.out.println(result); // prints the right result
        out.collect(result);
    }
}
Why does the side output stream created by getSideOutput emit no elements?
The same output tag id should be used on both sides. In your case it's asyncTableValue in ExtractList but asyTab in .getSideOutput(new OutputTag<>("asyTab", TypeInformation.of(String.class))), which are definitely different, so the asyTab side output emits nothing.
Sorry, my mistake: the code above isn't the actual ExtractList. The real code is:
public class ExtractList extends ProcessFunction<NewTableA, NewTableA> {
    private OutputTag<String> asyncTableValue =
        new OutputTag<String>("asyncTableValue", TypeInformation.of(String.class));

    @Override
    public void processElement(NewTableA value, Context ctx, Collector<NewTableA> out)
            throws Exception {
        String tableName = "NewTableA";
        String primaryKeyName = "PA1";
        String primaryValue = value.getPA1().toString();
        String result = tableName + ":" + primaryKeyName + ":" + primaryValue;
        ctx.output(asyncTableValue, result);
        out.collect(value);
    }
}
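For completeness, a minimal sketch of the matching wiring: the OutputTag id (asyncTableValue) must be identical on both sides, and getSideOutput must be called on the operator returned by .process(...):

// ExtractList is the ProcessFunction above; it emits NewTableA on the main
// output and String on the "asyncTableValue" side output.
SingleOutputStreamOperator<NewTableA> mainStream =
    join3.process(new ExtractList());

DataStream<String> asyncTable =
    mainStream.getSideOutput(
        new OutputTag<String>("asyncTableValue", TypeInformation.of(String.class)));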

Serializing child document fields in Spring Data Solr

I am trying to persist nested documents in Solr. I have tried both @Field(child = true) and the @ChildDocument annotation.
With @Field(child = true) and @ChildDocument I get:
[doc=null] missing required field: id, retry=0 commError=false errorCode=400
I tried @Indexed with @Id, and also tried specifying required = false.
@Service
public class NavigationMapper implements DocumentMapper<Navigation> {
    @Override
    public SchemaDefinition getSchemaDefinition() {
        SchemaDefinition sd = new SchemaDefinition();
        sd.setName(Navigation.class.getSimpleName());
        sd.setFields(new ArrayList<>());
        for (Method method : Navigation.class.getDeclaredMethods()) {
            if (method.getName().startsWith("get") && method.getParameterCount() <= 0) {
                SchemaDefinition.FieldDefinition fd = new SchemaDefinition.FieldDefinition();
                String fieldName = method.getName().replace("get", "");
                fd.setName(fieldName.replace(fieldName.charAt(0), fieldName.toLowerCase().charAt(0)));
                fd.setRequired(false);
                fd.setStored(true);
                sd.getFields().add(fd);
            }
        }
        return sd;
    }
}
With @Field I get the package name in the stored value:
activePage:
["org.apache.solr.common.SolrInputField:activePage=Page(id=blt6134c9cf711a7c27,
and I get the error below when I try to read it in my REST controller:
No converter found capable of converting from type [java.lang.String]
to type [com.blizzard.documentation.data.shared.model.page.Page]
I did a lot of reading about this and haven't been able to successfully configure and persist the nested document in Solr. I read that I could use SolrJConverter for this, but wasn't successful with it either. Is there any working example or tutorial I can refer to for this feature?
@Data
@SolrDocument(collection = "Navigation")
public class Navigation implements Serializable {
    @Id
    // @Indexed(required = false)
    private String id;
    @Field
    @Indexed(name = "path", type = "string")
    private String path;
    @ChildDocument
    private Page navigationRoot;
    @ChildDocument
    private Page activePage;
}
@Data
@SolrDocument(collection = "Page")
public class Page implements Serializable {
    @Id
    // @Indexed(name = "id", type = "string", required = false)
    private String id;
    @Field
    @Indexed(name = "path", type = "string")
    private String path;
    @Field
    private PageContentType contentType;
    @ChildDocument
    private List<Page> children;
    @Indexed("root_b")
    private boolean root;
}
I expect to store the nested document in Solr.

Does Flink SQL support Java Map types?

I'm trying to access a key from a map using Flink's SQL API. It fails with the error Exception in thread "main" org.apache.flink.table.api.TableException: Type is not supported: ANY
Please advise how I can fix it.
Here is my event class:
public class EventHolder {
    private Map<String, String> event;

    public Map<String, String> getEvent() {
        return event;
    }

    public void setEvent(Map<String, String> event) {
        this.event = event;
    }
}
Here is the main class, which submits the Flink job:
public class MapTableSource {

    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<EventHolder> mapEventStream = env.fromCollection(getMaps());

        // register a table and use SQL
        StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
        tableEnv.registerDataStream("mapEvent", mapEventStream);

        //tableEnv.registerFunction("orderSizeType", new OrderSizeType());
        Table alerts = tableEnv.sql("select event['key'] from mapEvent ");

        DataStream<String> alertStream = tableEnv.toAppendStream(alerts, String.class);
        alertStream.filter(new FilterFunction<String>() {
            private static final long serialVersionUID = -2438621539037257735L;

            @Override
            public boolean filter(String value) throws Exception {
                System.out.println("Key value is:" + value);
                return value != null;
            }
        });
        env.execute("map-tablsource-job");
    }

    private static List<EventHolder> getMaps() {
        List<EventHolder> list = new ArrayList<>();
        for (int i = 0; i < 5; i++) {
            EventHolder holder = new EventHolder();
            Map<String, String> map = new HashMap<>();
            map.put("key", "value");
            holder.setEvent(map);
            list.add(holder);
        }
        return list;
    }
}
When I run it, I get the exception:
Exception in thread "main" org.apache.flink.table.api.TableException: Type is not supported: ANY
at org.apache.flink.table.api.TableException$.apply(exceptions.scala:53)
at org.apache.flink.table.calcite.FlinkTypeFactory$.toTypeInfo(FlinkTypeFactory.scala:341)
at org.apache.flink.table.plan.logical.LogicalRelNode$$anonfun$12.apply(operators.scala:530)
at org.apache.flink.table.plan.logical.LogicalRelNode$$anonfun$12.apply(operators.scala:529)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.Iterator$class.foreach(Iterator.scala:742)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.flink.table.plan.logical.LogicalRelNode.<init>(operators.scala:529)
at org.apache.flink.table.api.TableEnvironment.sql(TableEnvironment.scala:503)
at com.c.p.flink.MapTableSource.main(MapTableSource.java:25)
I'm using Flink 1.3.1.
I think the problem lies in fromCollection. Flink is not able to extract the needed type information because of Java limitations (i.e., type erasure), so your map is treated as a black box with the SQL ANY type. You can verify the types of your table by using tableEnv.scan("mapEvent").printSchema(). You can specify the type information in fromCollection with Types.MAP(Types.STRING, Types.STRING).
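A minimal sketch of that suggestion, assuming the collection held the maps themselves (Types here is org.apache.flink.api.common.typeinfo.Types; its availability may depend on the Flink version):

// Pass explicit type information to fromCollection so the map is typed as
// MAP<STRING, STRING> instead of a generic (SQL ANY) type.
DataStream<Map<String, String>> events = env.fromCollection(
        listOfMaps,
        Types.MAP(Types.STRING, Types.STRING));

With the explicit type in place, printSchema() should report a map type rather than ANY, and event['key'] becomes queryable.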
I solved a similar issue with the following:
// Should probably make MapVal more generic, but it works for this example
public class MapVal extends ScalarFunction {
    public String eval(Map<String, String> obj, String key) {
        return obj.get(key);
    }
}

public class Car {
    private String make;
    private String model;
    private int year;
    private Map<String, String> attributes;
    // getters/setters...
}

// After registering the stream, TableEnv, etc.
tableEnv.registerFunction("mapval", new MapVal());

Table cars = tableEnv
    .scan("Cars")
    .select("make, model, year, attributes.mapval('name')");

How to convert byte[] to Object and vice versa in Groovy

I am trying to convert a byte[] back to an Object using Groovy. The actual Groovy class represented by the byte array implements the Serializable interface and is stored in a separate Groovy class file. However, I always get a ClassNotFoundException for this class when calling my toObject function. The code works when run from Java but not from Groovy.
private static byte[] toByteArray(Object obj) {
    byte[] bytes = null;
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try {
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(obj);
        oos.flush();
        oos.close();
        bos.close();
        bytes = bos.toByteArray();
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    return bytes;
}

private static Object toObject(byte[] bytes) {
    Object obj = null;
    try {
        ByteArrayInputStream bis = new ByteArrayInputStream(bytes);
        ObjectInputStream ois = new ObjectInputStream(bis);
        obj = ois.readObject();
    } catch (Exception ex) {
        ex.printStackTrace(); // ClassNotFoundException
    }
    return obj;
}
What is the best way to do this?
Edit: a class used in this context, for which the ClassNotFoundException occurs, looks something like this:
public class MyItem implements Serializable {

    private static final long serialVersionUID = -615551050422222952L;

    public String text

    MyItem() {
        this.text = ""
    }
}
Then testing the whole thing:
void test() {
    MyItem item1 = new MyItem()
    item1.text = "bla"
    byte[] bytes = toByteArray(item1) // works
    Object o = toObject(bytes) // ClassNotFoundException: MyItem
    MyItem item2 = (MyItem) o
    System.out.print(item1.text + " <--> " + item2.text)
}
My guess (and this is what @JustinPiper suggested as well) is that you are not compiling your Groovy class before referencing it. You must compile the Groovy class using groovyc and make sure the compiled class file is placed on the classpath (probably in the same location as the Java test class).
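If compiling ahead of time isn't an option, one workaround worth trying (my addition, not part of the original answer) is a toObject variant whose ObjectInputStream resolves classes through the thread context classloader, where Groovy-defined classes are typically visible:

private static Object toObject(byte[] bytes) {
    Object obj = null;
    try {
        // Override resolveClass so deserialization looks classes up via the
        // thread context classloader instead of the default one, which may
        // not see classes defined by the Groovy runtime.
        ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes)) {
            @Override
            protected Class<?> resolveClass(ObjectStreamClass desc)
                    throws IOException, ClassNotFoundException {
                return Class.forName(desc.getName(), false,
                        Thread.currentThread().getContextClassLoader());
            }
        };
        obj = ois.readObject();
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    return obj;
}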
