log4j2 queue/topic configuration by spring - database

Is there any way to configure log4j2 to read Appender attributes from, for example, a Spring bean? I am especially curious about JmsAppender: I want to dynamically set its target destination based on a parameter read from a database rather than from the JNDI context.
BR
Zoltán

Your best chance is to extend the JMSAppender and override the append methods. A good example is here
In that example, the class extends the appender and uses ActiveMQ to post the messages to. You should be able to extend this to read from the DB and use the provider's APIs to get a handle to the queue or topic and start appending messages to it. This assumes that you have the right client libraries and permissions to connect to the messaging provider (e.g. in WMQ you may need the QM name, queue, host, and port) from the DB (in your case). The extended JMS appender can then be used in your Log4j2 configuration for sending log messages.

It seems that I found a hybrid solution which is very useful: a custom JmsAppender combined with a Spring context:
@Plugin(name = "OwnJmsAppender", category = "Core", elementType = "appender", printObject = true)
public class OwnJmsAppender extends AbstractAppender {

    private final Lock lock = new ReentrantLock();
    private Session session;
    private Connection connection;
    private Destination destination;
    private MessageProducer producer;

    protected OwnJmsAppender(String name, Filter filter, Layout<? extends Serializable> layout, final boolean ignoreExceptions) {
        super(name, filter, layout, ignoreExceptions);
        init();
    }

    @Override
    public void append(LogEvent le) {
        this.lock.lock();
        try {
            if (connection == null) {
                init();
            }
            byte[] bytes = getLayout().toByteArray(le);
            TextMessage message = session.createTextMessage(new String(bytes, Charset.forName("UTF-8")));
            producer.send(message);
        } catch (JMSException e) {
            LOGGER.error(e);
        } finally {
            this.lock.unlock();
        }
    }

    @Override
    public void stop() {
        super.stop();
        try {
            session.close();
            connection.close();
        } catch (JMSException e) {
            LOGGER.error(e);
        }
    }

    /**
     * Reads attributes from the log4j2.xml configuration via the {@link PluginElement}
     * annotation. Also initiates the logger.
     *
     * @param name
     * @param layout
     * @param filter
     * @return
     */
    @PluginFactory
    public static OwnJmsAppender createAppender(@PluginAttribute("name") String name,
            @PluginElement("PatternLayout") Layout<? extends Serializable> layout, @PluginElement("Filter") final Filter filter) {
        if (name == null) {
            LOGGER.error("No name provided for OwnJmsAppender");
            return null;
        }
        return new OwnJmsAppender(name, filter, getLayout(layout), true);
    }

    private static Layout<? extends Serializable> getLayout(Layout<? extends Serializable> layout) {
        Layout<? extends Serializable> finalLayout = layout;
        if (finalLayout == null) {
            finalLayout = PatternLayout.createDefaultLayout();
        }
        return finalLayout;
    }

    private void init() {
        AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext(CommonDbConfig.class);
        ParameterStorage parameterStorage = (DatabaseParameterStorage) context.getBean("databaseParameterStorage");
        // the parameterStorage Spring bean reads params from the database
        String brokerUri = parameterStorage.getStringValue("broker.url");
        String queueName = "logQueue";
        context.close();
        try {
            ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(brokerUri);
            connection = connectionFactory.createConnection();
            connection.start();
            session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            destination = session.createQueue(queueName);
            producer = session.createProducer(destination);
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        } catch (JMSException e) {
            LOGGER.error(e);
        }
    }
}
And call it from log4j2.xml:
<Configuration>
    <Appenders>
        <OwnJmsAppender name="jmsQueue">
            <PatternLayout pattern="%maxLen{%d{DEFAULT} [%p] - %m %xEx%n}{500}" />
        </OwnJmsAppender>
    </Appenders>
    <Loggers>
        <Logger name="com.your.package" level="info" additivity="false">
            <AppenderRef ref="jmsQueue" />
        </Logger>
    </Loggers>
</Configuration>
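Note that Log4j2 must be able to discover the custom plugin. If it is not picked up through the plugin cache generated by the Log4j annotation processor, the packages attribute on Configuration should point at the plugin's package (assuming OwnJmsAppender lives in com.your.package):

<Configuration packages="com.your.package">
    ...
</Configuration>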

Related

BroadcastProcessFunction Processing Delay

I'm fairly new to Flink and would be grateful for any advice with this issue.
I wrote a job that receives some input events and compares them with some rules before forwarding them on to kafka topics based on whatever rules match. I implemented this using a flatMap and found it worked well, with one downside: I was loading the rules just once, during application startup, by calling an API from my main() method, and passing the result of this API call into the flatMap function. This worked, but it means that if there are any changes to the rules I have to restart the application, so I wanted to improve it.
I found this page in the documentation, which seems to be an appropriate solution to the problem. I wrote a custom source to poll my Rules API every few minutes, and then used a BroadcastProcessFunction, with the Rules added to the broadcast state using processBroadcastElement and the events processed by processElement.
The solution is working, but with one problem. My first approach using a FlatMap would process the events almost instantly. Now that I changed to a BroadcastProcessFunction each event takes 60 seconds to process, and it seems to be more or less exactly 60 seconds every time with almost no variation. I made no changes to the rule matching logic itself.
I've had a look through the documentation and I can't seem to find a reason for this, so I'd appreciate it if anyone more experienced in Flink could offer a suggestion as to what might cause this delay.
The job:
public static void main(String[] args) throws Exception {
    // set up the streaming execution environment
    final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime);
    // read the input from Kafka
    DataStream<KafkaEvent> documentStream = env.addSource(
            createKafkaSource(getSourceTopic(), getSourceProperties())).name("Kafka[" + getSourceTopic() + "]");
    // Configure the Rules data stream
    DataStream<RulesEvent> ruleStream = env.addSource(
            new RulesApiHttpSource(
                    getApiRulesSubdomain(),
                    getApiBearerToken(),
                    DataType.DataTypeName.LOGS,
                    getRulesApiCacheDuration()) // Currently set to 120000
    );
    MapStateDescriptor<String, RulesEvent> ruleStateDescriptor = new MapStateDescriptor<>(
            "RulesBroadcastState",
            BasicTypeInfo.STRING_TYPE_INFO,
            TypeInformation.of(new TypeHint<RulesEvent>() {
            }));
    // broadcast the rules and create the broadcast state
    BroadcastStream<RulesEvent> ruleBroadcastStream = ruleStream
            .broadcast(ruleStateDescriptor);
    // extract the resources and attributes
    documentStream
            .connect(ruleBroadcastStream)
            .process(new FanOutLogsRuleMapper()).name("FanOut Stream")
            .addSink(createKafkaSink(getDestinationProperties()))
            .name("FanOut Sink");
    // run the job
    env.execute(FanOutJob.class.getName());
}
The custom HTTP source which gets the rules
public class RulesApiHttpSource extends RichSourceFunction<RulesEvent> {

    private static final Logger LOGGER = LoggerFactory.getLogger(RulesApiHttpSource.class);

    private final long pollIntervalMillis;
    private final String endpoint;
    private final String bearerToken;
    private final DataType.DataTypeName dataType;
    private final RulesApiCaller caller;
    private volatile boolean running = true;

    public RulesApiHttpSource(String endpoint, String bearerToken, DataType.DataTypeName dataType, long pollIntervalMillis) {
        this.pollIntervalMillis = pollIntervalMillis;
        this.endpoint = endpoint;
        this.bearerToken = bearerToken;
        this.dataType = dataType;
        this.caller = new RulesApiCaller(this.endpoint, this.bearerToken);
    }

    @Override
    public void open(Configuration configuration) throws Exception {
        // do nothing
    }

    @Override
    public void close() throws IOException {
        // do nothing
    }

    @Override
    public void run(SourceContext<RulesEvent> ctx) throws IOException {
        while (running) {
            if (pollIntervalMillis > 0) {
                try {
                    RulesEvent event = new RulesEvent();
                    event.setRules(getCurrentRulesList());
                    event.setDataType(this.dataType);
                    event.setRetrievedAt(Instant.now());
                    ctx.collect(event);
                    Thread.sleep(pollIntervalMillis);
                } catch (InterruptedException e) {
                    running = false;
                }
            } else if (pollIntervalMillis <= 0) {
                cancel();
            }
        }
    }

    public List<Rule> getCurrentRulesList() throws IOException {
        // call API and get rules
    }

    @Override
    public void cancel() {
        running = false;
    }
}
The BroadcastProcessFunction
public abstract class FanOutRuleMapper extends BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent> {

    protected final String RULES_EVENT_NAME = "rulesEvent";

    protected final MapStateDescriptor<String, RulesEvent> ruleStateDescriptor = new MapStateDescriptor<>(
            "RulesBroadcastState",
            BasicTypeInfo.STRING_TYPE_INFO,
            TypeInformation.of(new TypeHint<RulesEvent>() {
            }));

    @Override
    public void processBroadcastElement(RulesEvent rulesEvent, BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent>.Context ctx, Collector<KafkaEvent> out) throws Exception {
        ctx.getBroadcastState(ruleStateDescriptor).put(RULES_EVENT_NAME, rulesEvent);
        LOGGER.debug("Added to broadcast state {}", rulesEvent.toString());
    }

    // omitted rules matching logic
}
public class FanOutLogsRuleMapper extends FanOutRuleMapper {

    public FanOutLogsRuleMapper() {
        super();
    }

    @Override
    public void processElement(KafkaEvent in, BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent>.ReadOnlyContext ctx, Collector<KafkaEvent> out) throws Exception {
        // note: this is null until the first RulesEvent has been broadcast
        RulesEvent rulesEvent = ctx.getBroadcastState(ruleStateDescriptor).get(RULES_EVENT_NAME);
        ExportLogsServiceRequest otlpLog = extractOtlpMessageFromJsonPayload(in);
        for (Rule rule : rulesEvent.getRules()) {
            boolean match = false;
            // omitted rules matching logic
            if (match) {
                for (RuleDestination ruleDestination : rule.getRulesDestinations()) {
                    out.collect(fillInTheEvent(in, rule, ruleDestination, otlpLog));
                }
            }
        }
    }
}
Maybe you can give the complete code of the FanOutLogsRuleMapper class; currently the match variable is always false.

How to browse messages from a queue using Apache Camel?

I need to browse messages from an ActiveMQ queue using a Camel route without consuming the messages.
The messages in the JMS queue are to be read (only browsed, not consumed) and moved to a database, while ensuring that the original queue remains intact.
public class CamelStarter {

    private static CamelContext camelContext;

    public static void main(String[] args) throws Exception {
        camelContext = new DefaultCamelContext();
        ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(ActiveMQConnectionFactory.DEFAULT_BROKER_URL);
        camelContext.addComponent("jms", JmsComponent.jmsComponent(connectionFactory));
        camelContext.addRoutes(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("jms:queue:testQueue").to("browse:orderReceived").to("jms:queue:testQueue1");
            }
        });
        camelContext.start();
        Thread.sleep(1000);
        inspectReceivedOrders();
        camelContext.stop();
    }

    public static void inspectReceivedOrders() {
        BrowsableEndpoint browse = camelContext.getEndpoint("browse:orderReceived", BrowsableEndpoint.class);
        List<Exchange> exchanges = browse.getExchanges();
        System.out.println("Browsing queue: " + browse.getEndpointUri() + " size: " + exchanges.size());
        for (Exchange exchange : exchanges) {
            String payload = exchange.getIn().getBody(String.class);
            String msgId = exchange.getIn().getHeader("JMSMessageID", String.class);
            System.out.println(msgId + "=" + payload);
        }
    }
}
As far as I know, it is not possible in Camel to read (without consuming!) JMS messages...
The only workaround I found (in a JEE app) was to define a startup EJB with a timer, holding a QueueBrowser, and delegating the msg processing to a Camel route:
@Singleton
@Startup
public class MyQueueBrowser {

    @Resource
    private TimerService timerService;

    @Resource(mappedName = "java:/jms/queue/com.company.myqueue")
    private Queue sourceQueue;

    @Inject
    @JMSConnectionFactory("java:/ConnectionFactory")
    private JMSContext jmsContext;

    @Inject
    @Uri("direct:readMessage")
    private ProducerTemplate camelEndpoint;

    @PostConstruct
    private void init() {
        TimerConfig timerConfig = new TimerConfig(null, false);
        ScheduleExpression se = new ScheduleExpression().hour("*").minute("*/" + frequencyInMin);
        timerService.createCalendarTimer(se, timerConfig);
    }

    @Timeout
    public void scheduledExecution(Timer timer) throws Exception {
        QueueBrowser browser = null;
        try {
            browser = jmsContext.createBrowser(sourceQueue);
            Enumeration<Message> msgs = browser.getEnumeration();
            while (msgs.hasMoreElements()) {
                Message jmsMsg = msgs.nextElement();
                // + here: read body and/or properties of jmsMsg
                camelEndpoint.sendBodyAndHeader(body, myHeaderName, myHeaderValue);
            }
        } catch (JMSRuntimeException jmsException) {
            ...
        } finally {
            browser.close();
        }
    }
}
The Apache Camel browse component is designed exactly for that. Check here for the documentation.
Can't say more since you have not provided any other information.
Let's assume you have a route like this:
from("activemq:somequeue").to("bean:someBean")
or
from("activemq:somequeue").process(exchange -> {})
All you have to do is put a browse endpoint in between, like this:
from("activemq:somequeue").to("browse:someHandler").to("bean:someBean")
Then write a class like this
@Component
public class BrowseQueue {

    @Autowired
    CamelContext camelContext;

    public void inspect() {
        BrowsableEndpoint browse = camelContext.getEndpoint("browse:someHandler", BrowsableEndpoint.class);
        List<Exchange> exchanges = browse.getExchanges();
        for (Exchange exchange : exchanges) {
            ......
        }
    }
}
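Note that, as far as I know, the browse endpoint keeps a copy of every exchange that passes through it in memory, so it is best suited to tests and diagnostics rather than to long-running production routes.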

Apache Camel - Create mock endpoint to listen to messages sent from within a processor

I have a route as follows:
from(fromEndpoint).routeId("ticketRoute")
    .log("Received Tickets : ${body}")
    .doTry()
        .process(exchange -> {
            List<TradeTicketDto> ticketDtos = (List<TradeTicketDto>) exchange.getIn().getBody();
            ticketDtos.stream()
                .forEach(t -> solaceMessagePublisher.sendAsText("BOOKINGSERVICE.TICKET.UPDATED", t));
            ticketToTradeConverter.convert(ticketDtos)
                .forEach(t -> solaceMessagePublisher.sendAsText("BOOKINGSERVICE.TRADE.UPDATED", t));
        })
    .doCatch(java.lang.RuntimeException.class)
        .log(exceptionMessage().toString() + " --> ${body}");
solaceMessagePublisher is a utility class in the application which performs some action on the passed object (second argument), converts it to a JSON string, and finally sends it to a JMS topic resolved from the destination key (first argument).
SolaceMessagePublisher.java
public void sendAsText(final String destinationKey, Object payload) {
    LOGGER.debug("Sending object as text to %s", destinationKey);
    String destinationValue = null;
    if (StringUtils.isNotEmpty(destinationKey)) {
        destinationValue = properties.getProperty(destinationKey);
    }
    LOGGER.debug("Identified Destination Value = %s from key %s", destinationValue, destinationKey);
    if (StringUtils.isEmpty(destinationValue)) {
        throw new BaseServiceException("Invalid destination for message");
    }
    sendAsTextToDestination(destinationValue, payload);
}

public void sendAsTextToDestination(final String destinationValue, Object payload) {
    if (payload == null) {
        LOGGER.debug(" %s %s", EMPTY_PAYLOAD_ERROR_MESSAGE, destinationValue);
        return;
    }
    final String message = messageCreator.createMessageEnvelopAsJSON(payload, ContextProvider.getUserInContext());
    if (LOGGER.isDebugEnabled()) {
        LOGGER.debug("Created message = " + message);
    }
    jmsTemplate.send(destinationValue, new MessageCreator() {
        @Override
        public Message createMessage(Session session) throws JMSException {
            LOGGER.debug("Creating JMS Text Message");
            return session.createTextMessage(message);
        }
    });
}
I am having a problem creating a mock endpoint to listen to messages sent to this topic. The question is: how do I listen to messages sent to a topic that is outside the Camel context?
I have tried using mock:jms:endpoint in my test. It doesn't work.
My test is below:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { SiteMain.class })
public class TicketRouteCamelTest extends CamelSpringTestSupport {

    @Autowired
    protected BaseMessageEnvelopCreator messageCreator;

    private static final String MOCK_TICKET_UPDATED_QUEUE = "direct:mockTicketUpdated";

    @Before
    public void configureMockEndpoints() throws Exception {
        // mock input
        final AdviceWithRouteBuilder mockRouteAdvice = new AdviceWithRouteBuilder() {
            @Override
            public void configure() throws Exception {
                replaceFromWith(MOCK_TICKET_UPDATED_QUEUE);
            }
        };
        context().getRouteDefinition("ticketRoute").adviceWith(context(), mockRouteAdvice);
    }

    @Test
    public void testTicketRouteWithListOfTickets() throws Exception {
        // create test data
        TradeTicketDto tradeTicketDto = TradeTestDataHelper.getTradeTicketDto();
        // create an exchange and set its body with test data
        List<TradeTicketDto> list = new ArrayList<>();
        list.add(tradeTicketDto);
        list.add(tradeTicketDto);
        Exchange requestExchange = ExchangeBuilder.anExchange(context()).build();
        requestExchange.getIn().setBody(list);
        // create asserts on the mock endpoints
        MockEndpoint mockTicketUpdatedEndpoint = getMockEndpoint("mock:DEV/bookingservice/ticket/updated");
        mockTicketUpdatedEndpoint.expectedBodiesReceived(
                messageCreator.createMessageEnvelopAsJSON(list.get(0), ContextProvider.getUserInContext()),
                messageCreator.createMessageEnvelopAsJSON(list.get(1), ContextProvider.getUserInContext()));
        MockEndpoint mockTradeUpdatedEndpoint = getMockEndpoint("mock:DEV/bookingservice/trade/updated");
        mockTradeUpdatedEndpoint.expectedBodiesReceived(
                messageCreator.createMessageEnvelopAsJSON(list.get(0).getTicketInstruments().get(0), ContextProvider.getUserInContext()),
                messageCreator.createMessageEnvelopAsJSON(list.get(0).getTicketInstruments().get(1), ContextProvider.getUserInContext()),
                messageCreator.createMessageEnvelopAsJSON(list.get(1).getTicketInstruments().get(0), ContextProvider.getUserInContext()),
                messageCreator.createMessageEnvelopAsJSON(list.get(1).getTicketInstruments().get(1), ContextProvider.getUserInContext()));
        // send test exchange to the mock input endpoint
        template.send(MOCK_TICKET_UPDATED_QUEUE, requestExchange);
        // test the asserts
        assertMockEndpointsSatisfied();
    }
}
On running the test, the actual number of bodies received on the mock endpoints is 0.
Mock is NOT a queue for consumers/producers to exchange data. It's a sink for testing purposes where you can set up expectations on the mock.
If you want to simulate JMS via some other means, then take a look at the stub component: http://camel.apache.org/stub
It's also listed at the bottom of the testing docs at: http://camel.apache.org/testing
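For illustration, a minimal sketch of the stub approach, assuming the publisher is changed to send through a Camel endpoint URI rather than calling JmsTemplate directly as SolaceMessagePublisher does above; the endpoint and mock names here are made up:

// test route: the JMS URI is prefixed with "stub:" so no broker is needed;
// messages sent to the stub land on an in-memory queue that this route drains into a mock
from("stub:jms:topic:BOOKINGSERVICE.TICKET.UPDATED")
    .to("mock:ticketUpdated");

// in the test method, the usual mock expectations then apply
MockEndpoint mock = getMockEndpoint("mock:ticketUpdated");
mock.expectedMessageCount(2);
// ... send the test exchange to the route under test ...
mock.assertIsSatisfied();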

Accept Multipart file upload as camel restlet or cxfrs endpoint

I am looking to implement a route where a restlet/cxfrs endpoint will accept a file as a multipart request and process it. (The request may have some JSON data as well.)
Thanks in advance.
Regards.
[EDIT]
I have tried the following code, and also tried sending a file using curl. I can see file-related info in the headers and debug output, but I am not able to retrieve the attachment.
from("servlet:///hello").process(new Processor() {
#Override
public void process(Exchange exchange) throws Exception {
Message in = exchange.getIn();
StringBuffer v = new StringBuffer();
HttpServletRequest request = (HttpServletRequest) in
.getHeaders().get(Exchange.HTTP_SERVLET_REQUEST);
DiskFileItemFactory diskFile = new DiskFileItemFactory();
FileItemFactory factory = diskFile;
ServletFileUpload upload = new ServletFileUpload(factory);
List items = upload.parseRequest(request);
.....
curl:
curl -vvv -i -X POST -H "Content-Type: multipart/form-data" -F "image=@/Users/navaltiger/1.jpg; type=image/jpg" http://:8080/JettySample/camel/hello
The following code works (but I can't use it, since it embeds Jetty and we would like to deploy on Tomcat/WebLogic):
public void configure() throws Exception {
    // getContext().getProperties().put("CamelJettyTempDir", "target");
    getContext().setStreamCaching(true);
    getContext().setTracing(true);
    from("jetty:///test").process(new Processor() {
    // from("servlet:///hello").process(new Processor() {
        public void process(Exchange exchange) throws Exception {
            String body = exchange.getIn().getBody(String.class);
            HttpServletRequest request = exchange.getIn().getBody(
                    HttpServletRequest.class);
            StringBuffer v = new StringBuffer();
            // byte[] picture = (request.getParameter("image")).getBytes();
            v.append("\n Printing All Request Parameters From HttpServletRequest: \n+" + body + " \n\n");
            Enumeration<String> requestParameters = request
                    .getParameterNames();
            while (requestParameters.hasMoreElements()) {
                String paramName = (String) requestParameters.nextElement();
                v.append("\n Request Parameter Name: " + paramName
                        + ", Value - " + request.getParameter(paramName));
            }
        }
    });
}
I had a similar problem and managed to resolve it, inspired by the answer from brentos. The rest endpoint in my case is defined via XML:
<restContext id="UploaderServices" xmlns="http://camel.apache.org/schema/spring">
    <rest path="/uploader">
        <post bindingMode="off" uri="/upload" produces="application/json">
            <to uri="bean:UploaderService?method=uploadData"/>
        </post>
    </rest>
</restContext>
I had to use bindingMode="off" to disable XML/JSON unmarshalling, because the HttpRequest body contains multipart data (JSON/text + file) and the standard unmarshalling process was unable to handle the request: it expects a string in the body, not a multipart payload.
The file and other parameters are sent from a front end that uses the file upload angular module: https://github.com/danialfarid/ng-file-upload
To solve CORS problems I had to add a CORSFilter in web.xml, like the one here:
public class CORSFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain) throws IOException,
            ServletException {
        HttpServletResponse httpResp = (HttpServletResponse) resp;
        HttpServletRequest httpReq = (HttpServletRequest) req;
        httpResp.setHeader("Access-Control-Allow-Methods", "GET, HEAD, POST, PUT, DELETE, TRACE, OPTIONS, CONNECT, PATCH");
        httpResp.setHeader("Access-Control-Allow-Origin", "*");
        if (httpReq.getMethod().equalsIgnoreCase("OPTIONS")) {
            httpResp.setHeader("Access-Control-Allow-Headers",
                    httpReq.getHeader("Access-Control-Request-Headers"));
        }
        chain.doFilter(req, resp);
    }

    @Override
    public void init(FilterConfig arg0) throws ServletException {
    }

    @Override
    public void destroy() {
    }
}
Also, I had to modify the unmarshalling part a little:
public String uploadData(Message exchange) {
    String contentType = (String) exchange.getHeader(Exchange.CONTENT_TYPE);
    MediaType mediaType = MediaType.valueOf(contentType); // otherwise the boundary parameter is lost
    InputRepresentation representation = new InputRepresentation(exchange
            .getBody(InputStream.class), mediaType);
    try {
        List<FileItem> items = new RestletFileUpload(
                new DiskFileItemFactory())
                .parseRepresentation(representation);
        for (FileItem item : items) {
            if (!item.isFormField()) {
                InputStream inputStream = item.getInputStream();
                // Path destination = Paths.get("MyFile.jpg");
                // Files.copy(inputStream, destination,
                //         StandardCopyOption.REPLACE_EXISTING);
                System.out.println("found file in request:" + item);
            } else {
                System.out.println("found string in request:" + new String(item.get(), "UTF-8"));
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return "200";
}
I'm using the Camel REST DSL with Restlet and was able to get file uploads working with the following code.
rest("/images").description("Image Upload Service")
.consumes("multipart/form-data").produces("application/json")
.post().description("Uploads image")
.to("direct:uploadImage");
from("direct:uploadImage")
.process(new Processor() {
#Override
public void process(Exchange exchange) throws Exception {
MediaType mediaType =
exchange.getIn().getHeader(Exchange.CONTENT_TYPE, MediaType.class);
InputRepresentation representation =
new InputRepresentation(
exchange.getIn().getBody(InputStream.class), mediaType);
try {
List<FileItem> items =
new RestletFileUpload(
new DiskFileItemFactory()).parseRepresentation(representation);
for (FileItem item : items) {
if (!item.isFormField()) {
InputStream inputStream = item.getInputStream();
Path destination = Paths.get("MyFile.jpg");
Files.copy(inputStream, destination,
StandardCopyOption.REPLACE_EXISTING);
}
}
} catch (FileUploadException | IOException e) {
e.printStackTrace();
}
}
});
You can do this with the REST DSL even if you are not using Restlet (for example, Jetty) as your REST DSL component.
You need to turn the REST binding off first for that route and create two classes to handle the multipart data that is in your body.
You need two classes:
DWRequestContext
DWFileUpload
and then you use them in your custom processor.
Here is the code:
DWRequestContext.java
import org.apache.camel.Exchange;
import org.apache.commons.fileupload.RequestContext;

import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class DWRequestContext implements RequestContext {

    private Exchange exchange;

    public DWRequestContext(Exchange exchange) {
        this.exchange = exchange;
    }

    public String getCharacterEncoding() {
        return StandardCharsets.UTF_8.toString();
    }

    // could compute here (we have stream cache enabled)
    public int getContentLength() {
        return -1;
    }

    public String getContentType() {
        return exchange.getIn().getHeader("Content-Type").toString();
    }

    public InputStream getInputStream() throws IOException {
        return this.exchange.getIn().getBody(InputStream.class);
    }
}
DWFileUpload.java
import org.apache.camel.Exchange;
import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.FileItemFactory;
import org.apache.commons.fileupload.FileUpload;
import org.apache.commons.fileupload.FileUploadException;

import java.util.List;

public class DWFileUpload extends FileUpload {

    public DWFileUpload() {
        super();
    }

    public DWFileUpload(FileItemFactory fileItemFactory) {
        super(fileItemFactory);
    }

    public List<FileItem> parseInputStream(Exchange exchange)
            throws FileUploadException {
        return parseRequest(new DWRequestContext(exchange));
    }
}
You can define your processor like this:
routeDefinition.process(new Processor() {
    @Override
    public void process(Exchange exchange) throws Exception {
        // Create a factory for disk-based file items
        DiskFileItemFactory factory = new DiskFileItemFactory();
        factory.setRepository(new File(System.getProperty("java.io.tmpdir")));
        DWFileUpload upload = new DWFileUpload(factory);
        java.util.List<FileItem> items = upload.parseInputStream(exchange);
        // here I assume I have only one, but I could split them here somehow and link them to camel properties...
        // with this, the first file sent with your multipart replaces the body
        // of the exchange for the next processor to handle it
        exchange.getIn().setBody(items.get(0).getInputStream());
    }
});
I stumbled into the same requirement of having to consume a multipart request (containing file data, including binary) through the Apache Camel Restlet component.
Even though 2.17.x is out, my project was part of a wider framework/application, so I had to use version 2.12.4.
Initially my solution drew a lot from the restlet-jdbc example; it successfully retrieved text files, but I was unable to retrieve correct binary content.
I attempted to dump the data directly into a file to inspect the content, using the following code (abridged).
from("restlet:/upload?restletMethod=POST")
.to("direct:save-files");
from("direct:save-files")
.process(new org.apache.camel.Processor(){
public void process(org.apache.camel.Exchange exchange){
/*
* Code to sniff exchange content
*/
}
})
.to("file:///C:/<path to a folder>");
;
I used org.apache.commons.fileupload.MultipartStream from the Apache fileupload library to write the following utility class to parse a multipart request from a file. It worked successfully when the output of a multipart request from Postman was fed to it. However, it failed to parse the content of the file created by Camel (even though, to my eyes, the content of both files looked similar).
public class MultipartParserFileCreator {

    public static final String DELIMITER = "\\r?\\n";

    public static void main(String[] args) throws Exception {
        // taking it from the content-type in the exchange
        byte[] boundary = "------5lXVNrZvONBWFXxd".getBytes();
        FileInputStream fis = new FileInputStream(new File("<path-to-file>"));
        extractFile(fis, boundary);
    }

    public static void extractFile(InputStream is, byte[] boundary) throws Exception {
        MultipartStream multipartStream = new MultipartStream(is, boundary, 1024 * 4, null);
        boolean nextPart = multipartStream.skipPreamble();
        while (nextPart) {
            String headers = multipartStream.readHeaders();
            if (isFileContent(headers)) {
                String filename = getFileName(headers);
                File file = new File("<dir-where-file-created>" + filename);
                if (!file.exists()) {
                    file.createNewFile();
                }
                FileOutputStream fos = new FileOutputStream(file);
                multipartStream.readBodyData(fos);
                fos.flush();
                fos.close();
            } else {
                multipartStream.readBodyData(System.out);
            }
            nextPart = multipartStream.readBoundary();
        }
    }

    public static String[] getContentDispositionTokens(String headersJoined) {
        String[] headers = headersJoined.split(DELIMITER, -1);
        for (String header : headers) {
            System.out.println("Processing header: " + header);
            if (header != null && header.startsWith("Content-Disposition:")) {
                return header.split(";");
            }
        }
        throw new RuntimeException(
                String.format("[%s] header not found in supplied headers [%s]", "Content-Disposition:", headersJoined));
    }

    public static boolean isFileContent(String header) {
        String[] tokens = getContentDispositionTokens(header);
        for (String token : tokens) {
            if (token.trim().startsWith("filename")) {
                return true;
            }
        }
        return false;
    }

    public static String getFileName(String header) {
        String[] tokens = getContentDispositionTokens(header);
        for (String token : tokens) {
            if (token.trim().startsWith("filename")) {
                String filename = token.substring(token.indexOf("=") + 2, token.length() - 1);
                System.out.println("fileName is " + filename);
                return filename;
            }
        }
        return null;
    }
}
On debugging through the Camel code, I noticed that at one stage Camel converts the entire content into a String. After a point I had to stop pursuing this approach, as there was very little on the net applicable to version 2.12.4 and my work was not going anywhere.
Finally, I resorted to the following solution:
1. Write an implementation of HttpServletRequestWrapper to allow multiple reads of the input stream (one can get an idea from "How to read request.getInputStream() multiple times").
2. Create a filter that uses the wrapper around the HttpServletRequest object, reads and extracts the file to a directory ("Convenient way to parse incoming multipart/form-data parameters in a Servlet"), and attaches the path to the request using the request.setAttribute() method. In web.xml, configure this filter on the restlet servlet.
3. In the process method of the Camel route, cast exchange.getIn().getBody() to an HttpServletRequest object, extract the attribute (path), and use it to read the file as a byte array for further processing.
Not the cleanest, but I could achieve the objective. A rough sketch of the wrapper from step 1 follows.
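A minimal sketch of such a wrapper, assuming a pre-3.1 Servlet API where ServletInputStream only requires read(); the class name and buffer size are illustrative, not from the original post:

import java.io.*;
import javax.servlet.ServletInputStream;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;

public class CachedBodyRequestWrapper extends HttpServletRequestWrapper {

    private final byte[] cachedBody;

    public CachedBodyRequestWrapper(HttpServletRequest request) throws IOException {
        super(request);
        // drain the original stream once and keep the bytes for later re-reads
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        InputStream in = request.getInputStream();
        int n;
        while ((n = in.read(chunk)) != -1) {
            buffer.write(chunk, 0, n);
        }
        this.cachedBody = buffer.toByteArray();
    }

    @Override
    public ServletInputStream getInputStream() {
        // each call returns a fresh stream over the cached bytes
        final ByteArrayInputStream bais = new ByteArrayInputStream(cachedBody);
        return new ServletInputStream() {
            @Override
            public int read() {
                return bais.read();
            }
        };
    }
}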

Configuration of custom Log4j appender doesn't work

I have created a custom appender (it will be used on Linux). For creating this appender I used the article "How to write a custom log4j appender".
public class SolrAppender extends AppenderSkeleton {

    private String path = null;

    public void setPath(String path) { this.path = path; }

    public String getPath() { return this.path; }

    @Override
    public boolean requiresLayout() {
        return true;
    }

    @Override
    public void close() {
    }

    @Override
    public void activateOptions() {
        super.activateOptions();
    }

    @Override
    public synchronized void append(LoggingEvent event) {
        SolrServer server = new HttpSolrServer(path);
        SolrInputDocument document = new SolrInputDocument();
        // some logic
        try {
            UpdateResponse response = server.add(document);
            server.commit();
        } catch (SolrServerException | IOException e) {
            errorHandler.error("Failed to send document to Solr", e, ErrorCode.WRITE_FAILURE);
        }
    }
}
The configuration of this appender is:
# Solr appender
log4j.appender.SOLR = ricardo.solr.appender.QueryParser.SolrAppender
log4j.appender.SOLR.layout = org.apache.log4j.SimpleLayout
log4j.appender.SOLR.path = http://XX.XXX.XX.XX:8985/application/core
The appender works correctly if the path is hardcoded. Why is the path not set via the configuration?
From what I've seen so far, the name of an appender property in the configuration should start with an upper-case character, so 'Path' instead of 'path'. You should use:
log4j.appender.SOLR.Path = http://XX.XXX.XX.XX:8985/application/core
Not sure why that's not the case for 'layout', though.
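As far as I know, the log4j 1.x properties configurator binds each log4j.appender.<name>.<property> entry to a matching public JavaBean setter on the appender via reflection, so the config line and the setter have to line up:

log4j.appender.SOLR.Path = http://XX.XXX.XX.XX:8985/application/core

// corresponding setter on the appender, invoked by the configurator while parsing
public void setPath(String path) { this.path = path; }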
