Issue merging XML output using Camel pollEnrich - apache-camel

I am creating XML output from various sources that needs to be combined for an XSLT processor, but I get "No consumers available on endpoint" when using pollEnrich. The aggregation strategy on pollEnrich is never passed the polled exchange.
I took my aggregator out and used the default aggregator, and I get the same issue. Adding logs shows that XML output is coming from the previous routes to the pollEnrich endpoints.
package com.hitrust.route;

import com.hitrust.aggregator.AddToOutput;
import com.hitrust.processor.ConvertResultToRecords;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class AuthoritativeSourceDocument extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        AggregationStrategy addToOutput = new AddToOutput();

        restConfiguration()
            .component("restlet")
            .host("localhost").port("18082");

        rest("/authoritativesourcedocument/{authoritativesourcedocumentid}")
            .consumes("application/json").produces("application/json")
            .get()
            .to("direct:Start");

        from("direct:Start")
            .multicast()
                .to("direct:GetSections")
                .to("direct:GetTransactions")
                .to("direct:MergeSections");

        from("direct:GetSections")
            .setBody(simple("SELECT * " +
                " FROM [dbo].[Section] AS S" +
                " WHERE [Id] = ${header.id}"))
            .to("jdbc:dataSource")
            .setProperty("paramName", simple("Sections"))
            .process(new ConvertResultToRecords())
            .to("direct:GetSectionsOutput");

        from("direct:GetTransactions")
            .setBody(simple("SELECT * " +
                " FROM [dbo].[SectionTransaction] AS ST" +
                " WHERE [Id] = ${header.id}"))
            .to("jdbc:dataSource")
            .setProperty("paramName", simple("SectionTransactions"))
            .process(new ConvertResultToRecords())
            .to("direct:GetTransactionsOutput");

        from("direct:MergeSections")
            .setBody(simple("<param><id>${header.id}</id></param>"))
            .convertBodyTo(org.w3c.dom.Document.class)
            .pollEnrich("direct:GetSectionsOutput", 500, addToOutput)
            .pollEnrich("direct:GetTransactionsOutput", 500, addToOutput)
            .to("xslt:file:src/main/resources/xslts/MergeSections.xsl");
    }
}
The goal is to execute this and get the combined XML output from the last route.

Please try to use "seda:" instead of "direct:" to pass messages from your Get* routes to the pollEnrich calls in the MergeSections route. A direct: endpoint is a synchronous, in-memory call that requires an active consumer at the moment you produce to it, which is why you get "No consumers available on endpoint"; a seda: endpoint is backed by an in-memory queue, so the Get* routes can write to it and pollEnrich can poll the messages back out:
[...]
from("direct:Start")
    .multicast()
        .to("direct:GetSections")
        .to("direct:GetTransactions")
        .to("direct:MergeSections");

from("direct:GetSections")
    .setBody(simple("SELECT * " +
        " FROM [dbo].[Section] AS S" +
        " WHERE [Id] = ${header.id}"))
    .to("jdbc:dataSource")
    .setProperty("paramName", simple("Sections"))
    .process(new ConvertResultToRecords())
    .to("seda:GetSectionsOutput");

from("direct:GetTransactions")
    .setBody(simple("SELECT * " +
        " FROM [dbo].[SectionTransaction] AS ST" +
        " WHERE [Id] = ${header.id}"))
    .to("jdbc:dataSource")
    .setProperty("paramName", simple("SectionTransactions"))
    .process(new ConvertResultToRecords())
    .to("seda:GetTransactionsOutput");

from("direct:MergeSections")
    .setBody(simple("<param><id>${header.id}</id></param>"))
    .convertBodyTo(org.w3c.dom.Document.class)
    .pollEnrich("seda:GetSectionsOutput", 500, addToOutput)
    .pollEnrich("seda:GetTransactionsOutput", 500, addToOutput)
    .to("xslt:file:src/main/resources/xslts/MergeSections.xsl");
Your code also looks like it assumes that multicast dispatches to the endpoints in parallel by default - it does not. You need to add the parallelProcessing() option for that, as sketched below.
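For illustration, a minimal sketch of that change (only the direct:Start route is shown; everything else stays as above):

from("direct:Start")
    .multicast()
        // dispatch to all three endpoints concurrently instead of one after another
        .parallelProcessing()
        .to("direct:GetSections")
        .to("direct:GetTransactions")
        .to("direct:MergeSections");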

Related

Can't restore a flink job that uses Table API and Kafka connector with savepoint

I canceled a Flink job with a savepoint, then tried to restore the job from the savepoint (using the same jar file), but it said it cannot map the savepoint state. Since I was using the same jar file, I would think the execution plan should be the same. Why would it have a new operator ID if I didn't change the code? I wonder if it is possible at all to restore from a savepoint for a job using the Kafka connector & Table API.
Related errors:
Caused by: java.util.concurrent.CompletionException: java.lang.IllegalStateException: Failed to rollback to checkpoint/savepoint file:/root/flink-savepoints/savepoint-5f285c-c2749410db07. Cannot map checkpoint/savepoint state for operator dd5fc1f28f42d777f818e2e8ea18c331 to the new program, because the operator is not available in the new program. If you want to allow to skip this, you can set the --allowNonRestoredState option on the CLI.
Caused by: java.lang.IllegalStateException: Failed to rollback to checkpoint/savepoint file:/root/flink-savepoints/savepoint-5f285c-c2749410db07. Cannot map checkpoint/savepoint state for operator dd5fc1f28f42d777f818e2e8ea18c331 to the new program, because the operator is not available in the new program. If you want to allow to skip this, you can set the --allowNonRestoredState option on the CLI.
My Code:
import java.time.ZoneId;

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public final class FlinkJob {
    public static void main(String[] args) {
        final String JOB_NAME = "FlinkJob";

        final EnvironmentSettings settings = EnvironmentSettings.inStreamingMode();
        final TableEnvironment tEnv = TableEnvironment.create(settings);
        tEnv.getConfig().set("pipeline.name", JOB_NAME);
        tEnv.getConfig().setLocalTimeZone(ZoneId.of("UTC"));

        tEnv.executeSql("CREATE TEMPORARY TABLE ApiLog (" +
                " `_timestamp` TIMESTAMP(3) METADATA FROM 'timestamp' VIRTUAL," +
                " `_partition` INT METADATA FROM 'partition' VIRTUAL," +
                " `_offset` BIGINT METADATA FROM 'offset' VIRTUAL," +
                " `Data` STRING," +
                " `Action` STRING," +
                " `ProduceDateTime` TIMESTAMP_LTZ(6)," +
                " `OffSet` INT" +
                ") WITH (" +
                " 'connector' = 'kafka'," +
                " 'topic' = 'api.log'," +
                " 'properties.group.id' = 'flink'," +
                " 'properties.bootstrap.servers' = '<mykafkahost...>'," +
                " 'format' = 'json'," +
                " 'json.timestamp-format.standard' = 'ISO-8601'" +
                ")");

        tEnv.executeSql("CREATE TABLE print_table (" +
                " `_timestamp` TIMESTAMP(3)," +
                " `_partition` INT," +
                " `_offset` BIGINT," +
                " `Data` STRING," +
                " `Action` STRING," +
                " `ProduceDateTime` TIMESTAMP(6)," +
                " `OffSet` INT" +
                ") WITH ('connector' = 'print')");

        tEnv.executeSql("INSERT INTO print_table" +
                " SELECT * FROM ApiLog");
    }
}
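For reference, the workaround the error message itself points at - restoring while skipping state that no longer maps to an operator - would look roughly like this with the Flink CLI (the jar name here is hypothetical), at the cost of discarding that operator's state:

flink run -s file:/root/flink-savepoints/savepoint-5f285c-c2749410db07 --allowNonRestoredState flink-job.jar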

Apache Flink - calculate the difference in value between two consecutive events with event time

I have some energy meters that keep producing a counter value, which is a cumulative metric, i.e. it keeps increasing until the counter resets.
Key Value
----------------------------------------------------------------------
Sensor1 {timestamp: "10-10-2019 10:20:30", Kwh: 10}
Sensor1 {timestamp: "10-10-2019 10:20:40", Kwh: 21}
Sensor1 {timestamp: "10-10-2019 10:20:55", Kwh: 25}
Sensor1 {timestamp: "10-10-2019 10:21:05", Kwh: 37}
Sensor1 {timestamp: "10-10-2019 10:21:08", Kwh: 43}
.
.
.
There is a real-time ETL job that needs to subtract two consecutive values in event time.
e.g.
10-10-2019 10:20:30 = 21 - 10 = 11
10-10-2019 10:20:40 = 25 - 21 = 4
10-10-2019 10:20:55 = 37 - 25 = 12
.
.
.
Moreover, sometimes events may not be received in order.
How can I achieve this using the Apache Flink Streaming API? Preferably with an example in Java.
In general, when faced with the requirement to process an out-of-order stream in order, the easiest (and still performant) way to handle this is to use Flink SQL and rely on it to do the sorting. Note that it will rely on the WatermarkStrategy to determine when events can safely be considered ready to be emitted, and it will drop any late events. If you must know about the late events, then I would recommend using CEP rather than SQL with MATCH_RECOGNIZE (the latter is what is shown below).
For more about using Watermarks for sorting, see the tutorial about Watermarks in the Flink docs.
Here's an example of how to implement your use case using Flink SQL:
import java.sql.Timestamp;
import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

import static org.apache.flink.table.api.Expressions.$;

public class SortAndDiff {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        DataStream<Tuple3<String, Long, Long>> input = env.fromElements(
                new Tuple3<>("sensor1", "2019-10-10 10:20:30", 10L),
                new Tuple3<>("sensor1", "2019-10-10 10:20:40", 21L),
                new Tuple3<>("sensor2", "2019-10-10 10:20:10", 28L),
                new Tuple3<>("sensor2", "2019-10-10 10:20:05", 20L),
                new Tuple3<>("sensor1", "2019-10-10 10:20:55", 25L),
                new Tuple3<>("sensor1", "2019-10-10 10:21:05", 37L),
                new Tuple3<>("sensor2", "2019-10-10 10:23:00", 30L))
            .map(new MapFunction<Tuple3<String, String, Long>, Tuple3<String, Long, Long>>() {
                @Override
                public Tuple3<String, Long, Long> map(Tuple3<String, String, Long> t) throws Exception {
                    return new Tuple3<>(t.f0, Timestamp.valueOf(t.f1).toInstant().toEpochMilli(), t.f2);
                }
            })
            .assignTimestampsAndWatermarks(
                WatermarkStrategy
                    .<Tuple3<String, Long, Long>>forBoundedOutOfOrderness(Duration.ofMinutes(1))
                    .withTimestampAssigner((event, timestamp) -> event.f1));

        Table events = tableEnv.fromDataStream(input,
            $("sensorId"),
            $("ts").rowtime(),
            $("kwh"));

        Table results = tableEnv.sqlQuery(
            "SELECT E.* " +
            "FROM " + events + " " +
            "MATCH_RECOGNIZE ( " +
                "PARTITION BY sensorId " +
                "ORDER BY ts " +
                "MEASURES " +
                    "this_step.ts AS ts, " +
                    "next_step.kwh - this_step.kwh AS diff " +
                "AFTER MATCH SKIP TO NEXT ROW " +
                "PATTERN (this_step next_step) " +
                "DEFINE " +
                    "this_step AS TRUE, " +
                    "next_step AS TRUE " +
            ") AS E");

        tableEnv
            .toAppendStream(results, Row.class)
            .print();

        env.execute();
    }
}
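A note on the query's semantics: with AFTER MATCH SKIP TO NEXT ROW, every row begins a new two-row match, so each consecutive pair of readings per sensor produces one output row. For the sensor1 data above that yields 21 - 10 = 11, 25 - 21 = 4, and 37 - 25 = 12, in event-time order.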

Kotlin: SQLite database table not being created

I am new to Kotlin. I am developing a "personality guessing" app. It was working fine, but after I added an SQLite database it keeps crashing when I reach the activity on which SQLite is integrated. My table is not created.
Error log:
2020-06-04 13:18:10.757 16744-16744/? E/example.guessm: Unknown bits set in runtime_flags: 0x8000
2020-06-04 13:18:12.088 16744-16776/com.example.guessme E/eglCodecCommon: glUtilsParamSize: unknow param 0x000082da
2020-06-04 13:18:12.088 16744-16776/com.example.guessme E/eglCodecCommon: glUtilsParamSize: unknow param 0x000082da
2020-06-04 13:18:33.961 16744-16744/com.example.guessme E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.example.guessme, PID: 16744
java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.guessme/com.example.guessme.QuizActivity}: java.lang.IllegalStateException: getDatabase called recursively
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:3270)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3409)
at android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:83)
at android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:135)
at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:95)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2016)
at android.os.Handler.dispatchMessage(Handler.java:107)
at android.os.Looper.loop(Looper.java:214)
at android.app.ActivityThread.main(ActivityThread.java:7356)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:492)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:930)
Caused by: java.lang.IllegalStateException: getDatabase called recursively
at android.database.sqlite.SQLiteOpenHelper.getDatabaseLocked(SQLiteOpenHelper.java:357)
at android.database.sqlite.SQLiteOpenHelper.getWritableDatabase(SQLiteOpenHelper.java:317)
at com.example.guessme.DbHelper.addQuestion(DbHelper.kt:80)
at com.example.guessme.DbHelper.addQuestions(DbHelper.kt:45)
at com.example.guessme.DbHelper.onCreate(DbHelper.kt:35)
at android.database.sqlite.SQLiteOpenHelper.getDatabaseLocked(SQLiteOpenHelper.java:412)
at android.database.sqlite.SQLiteOpenHelper.getReadableDatabase(SQLiteOpenHelper.java:341)
at com.example.guessme.DbHelper.getAllQuestions(DbHelper.kt:93)
at com.example.guessme.QuizActivity.onCreate(QuizActivity.kt:29)
at android.app.Activity.performCreate(Activity.java:7802)
at android.app.Activity.performCreate(Activity.java:7791)
at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1299)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:3245)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3409) 
at android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:83) 
at android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:135) 
at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:95) 
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2016) 
at android.os.Handler.dispatchMessage(Handler.java:107) 
at android.os.Looper.loop(Looper.java:214) 
at android.app.ActivityThread.main(ActivityThread.java:7356) 
at java.lang.reflect.Method.invoke(Native Method) 
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:492) 
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:930) 
Code snippet of onCreate method in DbHelper class:
private val DATABASE_VERSION = 2
// Database Name
private val DATABASE_NAME = "PersonalityQuiz.db"
// tasks table name
lateinit var dbase: SQLiteDatabase

class DbHelper(context: Context) : SQLiteOpenHelper(context, DATABASE_NAME, null, DATABASE_VERSION) {

    override fun onCreate(db: SQLiteDatabase) {
        dbase = db
        val sql = ("CREATE TABLE IF NOT EXISTS " + TABLE_QUEST + " ( "
                + KEY_ID + " INTEGER PRIMARY KEY AUTOINCREMENT, " + KEY_QUES
                + " TEXT, " + KEY_OPTA + " TEXT, "
                + KEY_OPTB + " TEXT, " + KEY_OPTC + " TEXT)")
        db.execSQL(sql)
        addQuestions()
        db.close()
    }
Code snippet of the addQuestion function where I insert questions into the database:
// Adding new question
fun addQuestion(quest: Question) {
    dbase = this.writableDatabase
    val values = ContentValues()
    values.put(KEY_QUES, quest.getQUESTION())
    values.put(KEY_OPTA, quest.getOPTA())
    values.put(KEY_OPTB, quest.getOPTB())
    values.put(KEY_OPTC, quest.getOPTC())
    // Inserting Row
    dbase.insert(TABLE_QUEST, null, values)
}
Code snippet for the onUpgrade method:
override fun onUpgrade(db: SQLiteDatabase, oldV: Int, newV: Int) {
    // Drop older table if existed
    db.execSQL("DROP TABLE IF EXISTS $TABLE_QUEST")
    // Create tables again
    onCreate(db)
}
You cannot access writableDatabase within onCreate().
Either remove the addQuestions() call from onCreate(), or pass the SQLiteDatabase from onCreate() as a parameter to addQuestions().
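For illustration, a minimal sketch of the second option, reusing the names from the snippets above (it assumes addQuestions(db) just loops over addQuestion(db, quest)):

override fun onCreate(db: SQLiteDatabase) {
    val sql = ("CREATE TABLE IF NOT EXISTS " + TABLE_QUEST + " ( "
            + KEY_ID + " INTEGER PRIMARY KEY AUTOINCREMENT, " + KEY_QUES
            + " TEXT, " + KEY_OPTA + " TEXT, "
            + KEY_OPTB + " TEXT, " + KEY_OPTC + " TEXT)")
    db.execSQL(sql)
    // Reuse the database handed to onCreate() instead of touching writableDatabase,
    // which is what triggers the recursive getDatabase call.
    addQuestions(db)
    // Also: do not call db.close() here - the framework manages this handle.
}

fun addQuestion(db: SQLiteDatabase, quest: Question) {
    val values = ContentValues()
    values.put(KEY_QUES, quest.getQUESTION())
    values.put(KEY_OPTA, quest.getOPTA())
    values.put(KEY_OPTB, quest.getOPTB())
    values.put(KEY_OPTC, quest.getOPTC())
    db.insert(TABLE_QUEST, null, values)
}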

Camel 2.21.0 - how to process on exception with streaming

I would like to log an error on exception and continue with the next record/split, but it does not work.
I tried the onException() and doTry() DSL, but neither works and the exchange goes to the ErrorHandler.
onException(IOException.class)
    .handled(true).process(exchange -> log.error("error!!"));

from("file:" + rootDir + "/" + account + "/inbox/?move=.done")
    .unmarshal(csvDataFormat)
    .split(body()).shareUnitOfWork().parallelProcessing().streaming()
        .process(fileService)
    .end();
Logs:
2018-07-18 14:01:59.883 DEBUG 45137 --- [/test1/request/] o.a.camel.processor.MulticastProcessor : Parallel processing failed due IOException reading next record: java.io.IOException: (line 4) invalid char between encapsulated token and delimiter
2018-07-18 14:01:59.885 ERROR 45137 --- [/test1/request/] o.a.camel.processor.DeadLetterChannel : Failed delivery for (MessageId: ID-**********-local-1531936914834-0-3 on ExchangeId: ID-*********-local-1531936914834-0-4). On delivery attempt: 0 caught: java.lang.IllegalStateException: IOException reading next record: java.io.IOException: (line 4) invalid char between encapsulated token and delimiter
@Bedla, thank you for your input. I found the following working for my use case:
- Using onException() was still sending the exchange to the DeadLetterChannel, so I had to use doTry()/doCatch().
- CsvDataFormat with useMaps: I couldn't modify the csvDataFormat inside a processor, so I had to read the header from the file and prepend the CSV header to the body on each split using setBody().
Full Route Definition:
CsvDataFormat csvDataFormat = new CsvDataFormat().setUseMaps(true);

from("file:" + rootDir + "/test/")
    .log(LoggingLevel.INFO, "Start processing ${file:name}")
    .unmarshal().pgp(pgpFileName, pgpUserId, pgpPassword)
    .process(exchange -> { /* just to get the csv header */
        InputStream inputStream = exchange.getIn().getBody(InputStream.class);
        try (BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(inputStream))) {
            String header = bufferedReader.readLine();
            exchange.getIn().setHeader("CSV_HEADER", header);
            csvDataFormat.setHeader(header.split(",")); // <- this does not work, so had to add in body below!
            System.out.println("csvHeader is : " + header); // + " ? " + Arrays.asList(csvDataFormat.getHeader()));
        }
    })
    .split(body().tokenize("\n")).shareUnitOfWork()
    .parallelProcessing().streaming()
    .setBody(exchange -> exchange.getIn().getHeader("CSV_HEADER") + "\n" + exchange.getIn().getBody())
    .doTry()
        .unmarshal(csvDataFormat)
        .process(requestFileService)
    .doCatch(IOException.class)
        //TODO: custom processing here...
        .process(exchange -> log.error("caught in dotry: " + exchange.getIn().getBody())).stop()
    .end() // end try/catch
    .choice()
        .when(simple("${property." + Exchange.SPLIT_COMPLETE + "} == true"))
            .log(LoggingLevel.INFO, "Finished processing ${file:name}")
    .end();
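The reason the header has to travel with each fragment: tokenize("\n") hands the splitter one CSV line at a time with no header row, and csvDataFormat with useMaps needs a header line to build the maps, so prepending the saved CSV_HEADER lets every fragment be unmarshalled on its own.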

ServiceNow attachments in Camel

How do I download or upload attachments to ServiceNow from the Camel connector? The project is set up with camel-servicenow (v2.21.0.fuse-000077-redhat-1) in Maven. Creating, retrieving and updating tickets works fine; however, I am not able to download any attachments using the Attachment resource.
Download:

url = "https4://"
    + instance
    + ".service-now.com/api/now/v1/attachment?sysparm_query="
    + "table_name="
    + table
    + "%5Etable_sys_id="
    + sysId
    + "&authenticationPreemptive=true&authUsername="
    + username
    + "&authPassword="
    + password
    + "&authMethod=Basic";

In the route definition:

from("direct:servicenowAttachmentDownload")
    .setHeader(Exchange.HTTP_METHOD, constant("GET"))
    .recipientList().simple("${header.url}");
Upload:

url = "https4://"
    + instance
    + ".service-now.com/api/now/attachment/file?table_name="
    + table
    + "&table_sys_id="
    + sysId
    + "&file_name="
    + attachmentName
    + "&authenticationPreemptive=true&authUsername="
    + username
    + "&authPassword="
    + password
    + "&authMethod=Basic";
In the route definition:

from("direct:servicenowAttachmentUpload")
    .process(new Processor() {
        public void process(Exchange exchange) throws Exception {
            MultipartEntityBuilder multipartEntityBuilder = MultipartEntityBuilder.create();
            multipartEntityBuilder.setMode(HttpMultipartMode.BROWSER_COMPATIBLE);
            multipartEntityBuilder.setContentType(ContentType.MULTIPART_FORM_DATA);
            String filename = (String) exchange.getIn().getHeader(Exchange.FILE_NAME);
            String filePath = (String) exchange.getIn().getHeader("filePath");
            String attachmentName = (String) exchange.getIn().getHeader("attachmentName");
            File file = new File(filePath);
            multipartEntityBuilder.addPart("upload",
                new FileBody(file, ContentType.MULTIPART_FORM_DATA, attachmentName));
            exchange.getIn().setBody(multipartEntityBuilder.build());
        }
    })
    .removeHeaders("CamelHttp*")
    .setHeader(Exchange.HTTP_METHOD, constant("POST"))
    .recipientList().simple("${header.url}");
