About the Codename One Websocket cn1lib: I saw your lesson on the WhatsApp clone, however the code you proposed is for a complete app. Could you provide a simpler, self-contained example of the Websocket cn1lib in use, with client-side code (Codename One code to send and receive messages) and server-side code (Spring Boot, Java 8, to receive and send messages)?
I'm particularly interested in a simple Spring Boot example that interacts with Codename One, as a starting point to better understand and learn WebSockets.
Thank you
It would look roughly like this on the server:
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.springframework.web.socket.TextMessage;
import org.springframework.web.socket.WebSocketSession;
import org.springframework.web.socket.handler.TextWebSocketHandler;

import com.google.gson.Gson;

public class WebSocketServer extends TextWebSocketHandler {
    private static final Object LOCK = new Object();
    private final Map<String, WebSocketSession> sessions = new HashMap<>();

    @Override
    protected void handleTextMessage(WebSocketSession session, TextMessage message) throws Exception {
        Gson gson = new Gson();
        MyDTO parsed = gson.fromJson(message.getPayload(), MyDTO.class);
        // ... do stuff with the incoming message
        synchronized (LOCK) {
            if (!sessions.containsKey(parsed.getId())) {
                sessions.put(parsed.getId(), session);
            }
        }
    }

    public boolean sendMessage(String destId, String json) {
        WebSocketSession s;
        synchronized (LOCK) {
            s = sessions.get(destId);
        }
        if (s != null && s.isOpen()) {
            try {
                s.sendMessage(new TextMessage(json));
                return true;
            } catch (IOException e) {
                // the session is no longer usable; drop it
                synchronized (LOCK) {
                    sessions.remove(destId);
                }
            }
        }
        return false;
    }
}
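For completeness, the handler also has to be registered with Spring; a minimal sketch using Spring's standard WebSocket configuration (the "/chat" path is an arbitrary choice for this example):

import org.springframework.context.annotation.Configuration;
import org.springframework.web.socket.config.annotation.EnableWebSocket;
import org.springframework.web.socket.config.annotation.WebSocketConfigurer;
import org.springframework.web.socket.config.annotation.WebSocketHandlerRegistry;

@Configuration
@EnableWebSocket
public class WebSocketConfig implements WebSocketConfigurer {
    @Override
    public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
        // expose the handler above on ws://host:port/chat
        registry.addHandler(new WebSocketServer(), "/chat");
    }
}

On the Codename One side, the client would look roughly like this, based on the WebSocket class exposed by the cn1lib (the URL and the JSON payload are placeholders for this sketch):

WebSocket sock = new WebSocket("ws://myserver.com/chat") {
    @Override
    protected void onOpen() {
        // register this client with the server by sending its id
        send("{\"id\":\"client-1\"}");
    }

    @Override
    protected void onClose(int statusCode, String reason) {
    }

    @Override
    protected void onMessage(String message) {
        // wrap UI updates in callSerially(...) if the callback isn't on the EDT
        System.out.println("Received: " + message);
    }

    @Override
    protected void onMessage(byte[] message) {
    }

    @Override
    protected void onError(Exception ex) {
        Log.e(ex);
    }
};
sock.connect();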
I'm fairly new to Flink and would be grateful for any advice with this issue.
I wrote a job that receives some input events and compares them with some rules before forwarding them on to kafka topics based on whatever rules match. I implemented this using a flatMap and found it worked well, with one downside: I was loading the rules just once, during application startup, by calling an API from my main() method, and passing the result of this API call into the flatMap function. This worked, but it means that if there are any changes to the rules I have to restart the application, so I wanted to improve it.
I found this page in the documentation which seems to be an appropriate solution to the problem. I wrote a custom source to poll my Rules API every few minutes, and then used a BroadcastProcessFunction, with the Rules added to the broadcast state using processBroadcastElement and the events processed by processElement.
The solution is working, but with one problem. My first approach using a FlatMap would process the events almost instantly. Now that I have changed to a BroadcastProcessFunction, each event takes 60 seconds to process, and it seems to be more or less exactly 60 seconds every time with almost no variation. I made no changes to the rule matching logic itself.
I've had a look through the documentation and I can't seem to find a reason for this, so I'd appreciate it if anyone more experienced in Flink could offer a suggestion as to what might cause this delay.
The job:
public static void main(String[] args) throws Exception {
// set up the streaming execution environment
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime);
// read the input from Kafka
DataStream<KafkaEvent> documentStream = env.addSource(
createKafkaSource(getSourceTopic(), getSourceProperties())).name("Kafka[" + getSourceTopic() + "]");
// Configure the Rules data stream
DataStream<RulesEvent> ruleStream = env.addSource(
new RulesApiHttpSource(
getApiRulesSubdomain(),
getApiBearerToken(),
DataType.DataTypeName.LOGS,
getRulesApiCacheDuration()) // Currently set to 120000
);
MapStateDescriptor<String, RulesEvent> ruleStateDescriptor = new MapStateDescriptor<>(
"RulesBroadcastState",
BasicTypeInfo.STRING_TYPE_INFO,
TypeInformation.of(new TypeHint<RulesEvent>() {
}));
// broadcast the rules and create the broadcast state
BroadcastStream<RulesEvent> ruleBroadcastStream = ruleStream
.broadcast(ruleStateDescriptor);
// extract the resources and attributes
documentStream
.connect(ruleBroadcastStream)
.process(new FanOutLogsRuleMapper()).name("FanOut Stream")
.addSink(createKafkaSink(getDestinationProperties()))
.name("FanOut Sink");
// run the job
env.execute(FanOutJob.class.getName());
}
The custom HTTP source which fetches the rules:
public class RulesApiHttpSource extends RichSourceFunction<RulesEvent> {
private static final Logger LOGGER = LoggerFactory.getLogger(RulesApiHttpSource.class);
private final long pollIntervalMillis;
private final String endpoint;
private final String bearerToken;
private final DataType.DataTypeName dataType;
private final RulesApiCaller caller;
private volatile boolean running = true;
public RulesApiHttpSource(String endpoint, String bearerToken, DataType.DataTypeName dataType, long pollIntervalMillis) {
this.pollIntervalMillis = pollIntervalMillis;
this.endpoint = endpoint;
this.bearerToken = bearerToken;
this.dataType = dataType;
this.caller = new RulesApiCaller(this.endpoint, this.bearerToken);
}
@Override
public void open(Configuration configuration) throws Exception {
// do nothing
}
@Override
public void close() throws IOException {
// do nothing
}
@Override
public void run(SourceContext<RulesEvent> ctx) throws IOException {
while (running) {
if (pollIntervalMillis > 0) {
try {
RulesEvent event = new RulesEvent();
event.setRules(getCurrentRulesList());
event.setDataType(this.dataType);
event.setRetrievedAt(Instant.now());
ctx.collect(event);
Thread.sleep(pollIntervalMillis);
} catch (InterruptedException e) {
running = false;
}
} else if (pollIntervalMillis <= 0) {
cancel();
}
}
}
public List<Rule> getCurrentRulesList() throws IOException {
// call the API and get the rules
}
@Override
public void cancel() {
running = false;
}
}
The BroadcastProcessFunction
public abstract class FanOutRuleMapper extends BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent> {
protected static final Logger LOGGER = LoggerFactory.getLogger(FanOutRuleMapper.class);
protected final String RULES_EVENT_NAME = "rulesEvent";
protected final MapStateDescriptor<String, RulesEvent> ruleStateDescriptor = new MapStateDescriptor<>(
"RulesBroadcastState",
BasicTypeInfo.STRING_TYPE_INFO,
TypeInformation.of(new TypeHint<RulesEvent>() {
}));
@Override
public void processBroadcastElement(RulesEvent rulesEvent, BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent>.Context ctx, Collector<KafkaEvent> out) throws Exception {
ctx.getBroadcastState(ruleStateDescriptor).put(RULES_EVENT_NAME, rulesEvent);
LOGGER.debug("Added to broadcast state {}", rulesEvent.toString());
}
// omitted rules matching logic
}
public class FanOutLogsRuleMapper extends FanOutRuleMapper {
public FanOutLogsRuleMapper() {
super();
}
@Override
public void processElement(KafkaEvent in, BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent>.ReadOnlyContext ctx, Collector<KafkaEvent> out) throws Exception {
RulesEvent rulesEvent = ctx.getBroadcastState(ruleStateDescriptor).get(RULES_EVENT_NAME);
ExportLogsServiceRequest otlpLog = extractOtlpMessageFromJsonPayload(in);
for (Rule rule : rulesEvent.getRules()) {
boolean match = false;
// omitted rules matching logic
if (match) {
for (RuleDestination ruleDestination : rule.getRulesDestinations()) {
out.collect(fillInTheEvent(in, rule, ruleDestination, otlpLog));
}
}
}
}
}
Maybe you can give the complete code of the FanOutLogsRuleMapper class? Currently the match variable is always false.
I have a route as follows:
from(fromEndpoint).routeId("ticketRoute")
.log("Received Tickets : ${body}")
.doTry()
.process(exchange -> {
List<TradeTicketDto> ticketDtos = (List<TradeTicketDto>) exchange.getIn().getBody();
ticketDtos.stream()
.forEach(t -> solaceMessagePublisher.sendAsText("BOOKINGSERVICE.TICKET.UPDATED", t));
ticketToTradeConverter.convert(ticketDtos)
.forEach(t -> solaceMessagePublisher.sendAsText("BOOKINGSERVICE.TRADE.UPDATED", t));
}).doCatch(java.lang.RuntimeException.class)
.log(exceptionMessage().toString() + " --> ${body}");
solaceMessagePublisher is a utility class in the application which performs some action on the passed object (second argument), converts it to a JSON string, and sends it to a JMS topic (first argument).
SolaceMessagePublisher.java
public void sendAsText(final String destinationKey, Object payload) {
LOGGER.debug("Sending object as text to %s",destinationKey);
String destinationValue = null;
if (StringUtils.isNotEmpty(destinationKey)) {
destinationValue = properties.getProperty(destinationKey);
}
LOGGER.debug("Identified Destination Value = %s from key %s", destinationValue,destinationKey);
if (StringUtils.isEmpty(destinationValue)) {
throw new BaseServiceException("Invalid destination for message");
}
sendAsTextToDestination(destinationValue, payload);
}
public void sendAsTextToDestination(final String destinationValue, Object payload) {
if (payload == null) {
LOGGER.debug(" %s %s",EMPTY_PAYLOAD_ERROR_MESSAGE, destinationValue);
return;
}
final String message = messageCreator.createMessageEnvelopAsJSON(payload, ContextProvider.getUserInContext());
if (LOGGER.isDebugEnabled()) {
LOGGER.debug("Created message = " + message);
}
jmsTemplate.send(destinationValue, new MessageCreator() {
@Override
public Message createMessage(Session session) throws JMSException {
LOGGER.debug("Creating JMS Text Message");
return session.createTextMessage(message);
}
});
}
I am having a problem creating a mock endpoint to listen to the messages sent to this topic. The question is: how can I listen to messages sent to a topic which is outside the Camel context?
I have tried using mock:jms:endpoint in my test. It doesn't work.
My test is as below:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { SiteMain.class })
public class TicketRouteCamelTest extends CamelSpringTestSupport{
@Autowired
protected BaseMessageEnvelopCreator messageCreator;
private static final String MOCK_TICKET_UPDATED_QUEUE = "direct:mockTicketUpdated";
@Before
public void configureMockEndpoints() throws Exception {
//mock input
final AdviceWithRouteBuilder mockRouteAdvice = new AdviceWithRouteBuilder() {
@Override
public void configure() throws Exception {
replaceFromWith(MOCK_TICKET_UPDATED_QUEUE);
}
};
context().getRouteDefinition("ticketRoute").adviceWith(context(), mockRouteAdvice);
}
@Test
public void testTicketRouteWithListOfTickets() throws Exception {
//create test data
TradeTicketDto tradeTicketDto = TradeTestDataHelper.getTradeTicketDto();
//create an exchange and set its body with test data
List<TradeTicketDto> list = new ArrayList<>();
list.add(tradeTicketDto);
list.add(tradeTicketDto);
Exchange requestExchange = ExchangeBuilder.anExchange(context()).build();
requestExchange.getIn().setBody(list);
//create assert on the mock endpoints
MockEndpoint mockTicketUpdatedEndpoint = getMockEndpoint("mock:DEV/bookingservice/ticket/updated");
mockTicketUpdatedEndpoint.expectedBodiesReceived(
messageCreator.createMessageEnvelopAsJSON(list.get(0), ContextProvider.getUserInContext()),
messageCreator.createMessageEnvelopAsJSON(list.get(1), ContextProvider.getUserInContext()) );
MockEndpoint mockTradeUpdatedEndpoint = getMockEndpoint("mock:DEV/bookingservice/trade/updated");
mockTradeUpdatedEndpoint.expectedBodiesReceived(
messageCreator.createMessageEnvelopAsJSON(list.get(0).getTicketInstruments().get(0), ContextProvider.getUserInContext()),
messageCreator.createMessageEnvelopAsJSON(list.get(0).getTicketInstruments().get(1), ContextProvider.getUserInContext()),
messageCreator.createMessageEnvelopAsJSON(list.get(1).getTicketInstruments().get(0), ContextProvider.getUserInContext()),
messageCreator.createMessageEnvelopAsJSON(list.get(1).getTicketInstruments().get(1), ContextProvider.getUserInContext()));
//send test exchange to request mock endpoint
template.send(MOCK_TICKET_UPDATED_QUEUE, requestExchange);
//test the asserts
assertMockEndpointsSatisfied();
}
}
On running the test, the actual number of bodies received on the mock endpoint is 0.
Mock is NOT a queue for consumers/producers to exchange data. It's a sink for testing purposes where you can set up expectations on the mock.
If you want to simulate JMS via some other means, then take a look at the stub component: http://camel.apache.org/stub
It's also listed at the bottom of the testing docs at: http://camel.apache.org/testing
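For example, a minimal sketch of the stub idea (the topic name is taken from the route above; the stub: prefix buffers the messages in memory, so no broker is needed in the test):

// In the test configuration, swap the real JMS URI for a stubbed one,
// then consume from it and feed a mock endpoint you can assert on:
from("stub:jms:topic:BOOKINGSERVICE.TICKET.UPDATED")
    .to("mock:ticketUpdated");

Note this only helps for messages sent through Camel endpoints. Messages sent directly through a raw JmsTemplate, as in SolaceMessagePublisher above, bypass Camel entirely, which is why a mock: endpoint never sees them.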
I'm currently working on creating chatbots using IBM Watson Conversation. I have created my chatbot in Bluemix with the necessary intents and dialog. Now I want to integrate it with a Java application. Can anyone help me with this?
import com.ibm.watson.developer_cloud.conversation.v1.Conversation;
import com.ibm.watson.developer_cloud.conversation.v1.model.*;
// Renamed so it does not clash with the imported Conversation service class.
public class ConversationExample {
    public static void main(String[] args) throws Exception {
        String input = "Hello";
        Context context = null;
        // Call the Conversation API with an empty input to print its welcome message.
        MessageResponse response = conversationAPI("", context);
        context = response.getContext();
        response = conversationAPI(input, context);
    }

    public static MessageResponse conversationAPI(String input, Context context) {
        Conversation service = new Conversation(""); // version date
        service.setUsernameAndPassword("", "");
        MessageOptions newMessage = new MessageOptions.Builder().workspaceId("").input(new InputData.Builder(input).build()).context(context).build();
        MessageResponse response = service.message(newMessage).execute();
        int textSize = response.getOutput().getText().size();
        for (int x = 0; x < textSize; x++) {
            System.out.println(response.getOutput().getText().get(x));
        }
        return response;
    }
}
I'm using CXF 3.1.5. How can I apply proxy settings and trust or ignore SSL certificates when sending out requests?
I use CXF in the following two ways.
1. Using org.apache.cxf.bus to get the WSDL definition from the IdP or SP: bus.getExtension(WSDLManager.class).getDefinition().
2. Using org.apache.cxf.ws.security.trust.STSClient to request a security token: stsClient.requestSecurityToken().
I think I need to configure this in code instead of via a configuration file, as my callers send me the proxy and SSL certificate information at runtime.
thanks a lot!
After further research, I found something.
To resolve the first problem, add the following code:
ResourceManager extension = bus.getExtension(ResourceManager.class);
extension.addResourceResolver(new ResourceResolver() {
@Override
public <T> T resolve(String resourceName, Class<T> resourceType) {
return null;
}
@Override
public InputStream getAsStream(String name) {
if (!name.startsWith("https")) {
return null;
}
org.apache.http.client.HttpClient httpClient = HttpUtils.createHttpClient(setting);
HttpGet httpGet = new HttpGet(name);
try {
HttpResponse httpResponse = httpClient.execute(httpGet);
return httpResponse.getEntity().getContent();
} catch (IOException e) {
e.printStackTrace();
return null;
}
}
});
Then I can get the WSDL definition, but I still don't know how to fix the second problem. I'm trying to use HTTPConduit ((HTTPConduit) stsClient.getClient().getConduit()), but when stsClient.getClient() is called, CXF tries to load the XML schemas, which leads to the following exception:
org.apache.cxf.service.factory.ServiceConstructionException: Failed to create service.
at org.apache.cxf.wsdl11.WSDLServiceFactory.create(WSDLServiceFactory.java:170)
at org.apache.cxf.ws.security.trust.AbstractSTSClient.createClient(AbstractSTSClient.java:657)
at org.apache.cxf.ws.security.trust.AbstractSTSClient.getClient(AbstractSTSClient.java:480)
...
Caused by: org.apache.ws.commons.schema.XmlSchemaException: Unable to locate imported document at 'https://...&xsd=ws-trust-1.3.xsd', relative to 'https://...#types1'.
at org.apache.cxf.catalog.CatalogXmlSchemaURIResolver.resolveEntity(CatalogXmlSchemaURIResolver.java:76)
at org.apache.ws.commons.schema.SchemaBuilder.resolveXmlSchema(SchemaBuilder.java:684)
at org.apache.ws.commons.schema.SchemaBuilder.handleImport(SchemaBuilder.java:538)
at org.apache.ws.commons.schema.SchemaBuilder.handleSchemaElementChild(SchemaBuilder.java:1516)
at org.apache.ws.commons.schema.SchemaBuilder.handleXmlSchemaElement(SchemaBuilder.java:659)
at org.apache.ws.commons.schema.XmlSchemaCollection.read(XmlSchemaCollection.java:551)
at org.apache.cxf.common.xmlschema.SchemaCollection.read(SchemaCollection.java:129)
at org.apache.cxf.wsdl11.SchemaUtil.extractSchema(SchemaUtil.java:140)
at org.apache.cxf.wsdl11.SchemaUtil.getSchemas(SchemaUtil.java:73)
at org.apache.cxf.wsdl11.SchemaUtil.getSchemas(SchemaUtil.java:65)
at org.apache.cxf.wsdl11.SchemaUtil.getSchemas(SchemaUtil.java:60)
at org.apache.cxf.wsdl11.WSDLServiceBuilder.getSchemas(WSDLServiceBuilder.java:378)
at org.apache.cxf.wsdl11.WSDLServiceBuilder.buildServices(WSDLServiceBuilder.java:345)
at org.apache.cxf.wsdl11.WSDLServiceBuilder.buildServices(WSDLServiceBuilder.java:209)
at org.apache.cxf.wsdl11.WSDLServiceFactory.create(WSDLServiceFactory.java:162)
... 32 more
Found a solution: implement HTTPConduitFactory and register it with the bus.
bus.setExtension(new MyHTTPConduitFactory(setting), HTTPConduitFactory.class);
In the Factory class:
@Override
public HTTPConduit createConduit(HTTPTransportFactory f, Bus b, EndpointInfo localInfo,
EndpointReferenceType target) throws IOException {
return new MyHTTPConduit(settings, f, b, localInfo, target);
}
MyHTTPConduit extends URLConnectionHTTPConduit.
To handle SSL certificates:
TLSClientParameters parameters = new TLSClientParameters();
parameters.setDisableCNCheck(settings.isTurnOffHostVerifier());
if (settings.isIgnoreServerCertificate()) {
parameters.setTrustManagers(new TrustManager[] { new TrustAllCertsTrustManager() });
} else {
TrustManagerFactory factory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
factory.init(settings.getTrustStore());
parameters.setTrustManagers(factory.getTrustManagers());
}
this.setTlsClientParameters(parameters);
The TrustAllCertsTrustManager class:
private class TrustAllCertsTrustManager implements X509TrustManager {
@Override
public void checkClientTrusted(X509Certificate[] arg0, String arg1) throws CertificateException {
}
@Override
public void checkServerTrusted(X509Certificate[] arg0, String arg1) throws CertificateException {
}
@Override
public X509Certificate[] getAcceptedIssuers() {
return null;
}
}
To handle the proxy:
HTTPClientPolicy httpClientPolicy = new HTTPClientPolicy();
httpClientPolicy.setProxyServer(proxy.getHostName());
httpClientPolicy.setProxyServerPort(proxy.getPort());
this.setClient(httpClientPolicy);
There are some examples here: http://cxf.apache.org/docs/client-http-transport-including-ssl-support.html
I am looking to implement a route where a Restlet/CXFRS endpoint will accept a file as a multipart request and process it. (The request may have some JSON data as well.)
Thanks in advance.
Regards.
[EDIT]
I have tried the following code, and also tried sending a file using curl. I can see file-related info in the headers and debug output, but I'm not able to retrieve the attachment.
from("servlet:///hello").process(new Processor() {
#Override
public void process(Exchange exchange) throws Exception {
Message in = exchange.getIn();
StringBuffer v = new StringBuffer();
HttpServletRequest request = (HttpServletRequest) in
.getHeaders().get(Exchange.HTTP_SERVLET_REQUEST);
DiskFileItemFactory diskFile = new DiskFileItemFactory();
FileItemFactory factory = diskFile;
ServletFileUpload upload = new ServletFileUpload(factory);
List items = upload.parseRequest(request);
.....
curl :
curl -vvv -i -X POST -H "Content-Type: multipart/form-data" -F "image=@/Users/navaltiger/1.jpg; type=image/jpg" http://:8080/JettySample/camel/hello
The following code works (but I can't use it, as it embeds Jetty and we would like to deploy on Tomcat/WebLogic):
public void configure() throws Exception {
// getContext().getProperties().put("CamelJettyTempDir", "target");
getContext().setStreamCaching(true);
getContext().setTracing(true);
from("jetty:///test").process(new Processor() {
// from("servlet:///hello").process(new Processor() {
public void process(Exchange exchange) throws Exception {
String body = exchange.getIn().getBody(String.class);
HttpServletRequest request = exchange.getIn().getBody(
HttpServletRequest.class);
StringBuffer v = new StringBuffer();
// byte[] picture = (request.getParameter("image")).getBytes();
v.append("\n Printing All Request Parameters From HttpSerlvetRequest: \n+"+body +" \n\n");
Enumeration<String> requestParameters = request
.getParameterNames();
while (requestParameters.hasMoreElements()) {
String paramName = (String) requestParameters.nextElement();
v.append("\n Request Paramter Name: " + paramName
+ ", Value - " + request.getParameter(paramName));
}
}
});
}
I had a similar problem and managed to resolve it, inspired by the answer from brentos. The REST endpoint in my case is defined via XML:
<restContext id="UploaderServices" xmlns="http://camel.apache.org/schema/spring">
<rest path="/uploader">
<post bindingMode="off" uri="/upload" produces="application/json">
<to uri="bean:UploaderService?method=uploadData"/>
</post>
</rest>
</restContext>
I had to use "bindingMode=off" to disable xml/json unmarshalling because the HttpRequest body contains multipart data (json/text+file) and obviously the standard unmarshaling process was unable to process the request because it's expecting a string in the body and not a multipart payload.
The file and other parameters are sent from a front end that uses the file upload angular module: https://github.com/danialfarid/ng-file-upload
To solve CORS problems I had to add a CORSFilter filter in the web.xml like the one here:
public class CORSFilter implements Filter {
@Override
public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain) throws IOException,
ServletException {
HttpServletResponse httpResp = (HttpServletResponse) resp;
HttpServletRequest httpReq = (HttpServletRequest) req;
httpResp.setHeader("Access-Control-Allow-Methods", "GET, HEAD, POST, PUT, DELETE, TRACE, OPTIONS, CONNECT, PATCH");
httpResp.setHeader("Access-Control-Allow-Origin", "*");
if (httpReq.getMethod().equalsIgnoreCase("OPTIONS")) {
httpResp.setHeader("Access-Control-Allow-Headers",
httpReq.getHeader("Access-Control-Request-Headers"));
}
chain.doFilter(req, resp);
}
@Override
public void init(FilterConfig arg0) throws ServletException {
}
@Override
public void destroy() {
}
}
Also, I had to modify the unmarshalling part a little:
public String uploadData(Exchange exchange) {
String contentType = (String) exchange.getIn().getHeader(Exchange.CONTENT_TYPE);
MediaType mediaType = MediaType.valueOf(contentType); // otherwise the boundary parameter is lost
InputRepresentation representation = new InputRepresentation(
exchange.getIn().getBody(InputStream.class), mediaType);
try {
List<FileItem> items = new RestletFileUpload(
new DiskFileItemFactory())
.parseRepresentation(representation);
for (FileItem item : items) {
if (!item.isFormField()) {
InputStream inputStream = item.getInputStream();
// Path destination = Paths.get("MyFile.jpg");
// Files.copy(inputStream, destination,
// StandardCopyOption.REPLACE_EXISTING);
System.out.println("found file in request:" + item);
}else{
System.out.println("found string in request:" + new String(item.get(), "UTF-8"));
}
}
} catch (Exception e) {
e.printStackTrace();
}
return "200";
}
I'm using the Camel REST DSL with Restlet and was able to get file uploads working with the following code.
rest("/images").description("Image Upload Service")
.consumes("multipart/form-data").produces("application/json")
.post().description("Uploads image")
.to("direct:uploadImage");
from("direct:uploadImage")
.process(new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
MediaType mediaType =
exchange.getIn().getHeader(Exchange.CONTENT_TYPE, MediaType.class);
InputRepresentation representation =
new InputRepresentation(
exchange.getIn().getBody(InputStream.class), mediaType);
try {
List<FileItem> items =
new RestletFileUpload(
new DiskFileItemFactory()).parseRepresentation(representation);
for (FileItem item : items) {
if (!item.isFormField()) {
InputStream inputStream = item.getInputStream();
Path destination = Paths.get("MyFile.jpg");
Files.copy(inputStream, destination,
StandardCopyOption.REPLACE_EXISTING);
}
}
} catch (FileUploadException | IOException e) {
e.printStackTrace();
}
}
});
You can do this with the REST DSL even if you are not using Restlet (for example, Jetty) as your REST DSL component.
You need to turn the REST binding off first for that route, and create two classes to handle the multipart data that is in your body; a sketch of the route declaration follows the processor code below.
You need two classes:
DWRequestContext
DWFileUpload
and then you use them in your custom processor.
Here is the code:
DWRequestContext.java
import org.apache.camel.Exchange;
import org.apache.commons.fileupload.RequestContext;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
public class DWRequestContext implements RequestContext {
private Exchange exchange;
public DWRequestContext(Exchange exchange) {
this.exchange = exchange;
}
public String getCharacterEncoding() {
return StandardCharsets.UTF_8.toString();
}
//could compute here (we have stream cache enabled)
public int getContentLength() {
return -1;
}
public String getContentType() {
return exchange.getIn().getHeader("Content-Type").toString();
}
public InputStream getInputStream() throws IOException {
return this.exchange.getIn().getBody(InputStream.class);
}
}
DWFileUpload.java
import org.apache.camel.Exchange;
import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.FileItemFactory;
import org.apache.commons.fileupload.FileUpload;
import org.apache.commons.fileupload.FileUploadException;
import java.util.List;
public class DWFileUpload extends FileUpload {
public DWFileUpload() {
super();
}
public DWFileUpload(FileItemFactory fileItemFactory) {
super(fileItemFactory);
}
public List<FileItem> parseInputStream(Exchange exchange)
throws FileUploadException {
return parseRequest(new DWRequestContext(exchange));
}
}
You can define your processor like this:
routeDefinition.process(new Processor() {
@Override
public void process(Exchange exchange) throws Exception {
// Create a factory for disk-based file items
DiskFileItemFactory factory = new DiskFileItemFactory();
factory.setRepository(new File(System.getProperty("java.io.tmpdir")));
DWFileUpload upload = new DWFileUpload(factory);
java.util.List<FileItem> items = upload.parseInputStream(exchange);
//here I assume I have only one, but I could split it here somehow and link them to camel properties...
//with this, the first file sent with your multipart replaces the body
// of the exchange for the next processor to handle it
exchange.getIn().setBody(items.get(0).getInputStream());
}
});
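For completeness, here is a rough sketch of what the route declaration with the REST binding turned off might look like (the paths, endpoint name, and processor class name are illustrative, not from the original code):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.rest.RestBindingMode;

public class UploadRoute extends RouteBuilder {
    @Override
    public void configure() {
        rest("/uploader")
            .post("/upload")
            .bindingMode(RestBindingMode.off) // body is multipart, so skip JSON/XML binding
            .to("direct:uploadFile");

        from("direct:uploadFile")
            .process(new MultipartProcessor()); // the processor shown above, extracted into a class
    }
}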
I stumbled into the same requirement of having to consume a multipart request (containing file data, including binary) through the Apache Camel Restlet component.
Even though 2.17.x is out, my project was part of a wider framework / application, so I had to use version 2.12.4.
Initially, my solution drew a lot from the restlet-jdbc example. It yielded data in the exchange and successfully retrieved text files, but I was unable to retrieve correct binary content.
I attempted to dump the data directly into a file to inspect the content, using the following code (abridged).
from("restlet:/upload?restletMethod=POST")
.to("direct:save-files");
from("direct:save-files")
.process(new org.apache.camel.Processor(){
public void process(org.apache.camel.Exchange exchange){
/*
* Code to sniff exchange content
*/
}
})
.to("file:///C:/<path to a folder>");
;
I used org.apache.commons.fileupload.MultipartStream from the Apache Commons FileUpload library to write the following utility class to parse a multipart request from a file. It worked successfully when the output of a multipart request from Postman was fed to it. However, it failed to parse the content of the file created by Camel (even though, to the eye, the content of both files looked similar).
public class MultipartParserFileCreator{
public static final String DELIMITER = "\\r?\\n";
public static void main(String[] args) throws Exception {
// taking it from the content-type in exchange
byte[] boundary = "------5lXVNrZvONBWFXxd".getBytes();
FileInputStream fis = new FileInputStream(new File("<path-to-file>"));
extractFile(fis, boundary);
}
public static void extractFile(InputStream is, byte[] boundary) throws Exception {
MultipartStream multipartStream = new MultipartStream(is, boundary, 1024*4, null);
boolean nextPart = multipartStream.skipPreamble();
while (nextPart) {
String headers = multipartStream.readHeaders();
if(isFileContent(headers)) {
String filename = getFileName(headers);
File file = new File("<dir-where-file-created>"+filename);
if(!file.exists()) {
file.createNewFile();
}
FileOutputStream fos = new FileOutputStream(file);
multipartStream.readBodyData(fos);
fos.flush();
fos.close();
}else {
multipartStream.readBodyData(System.out);
}
nextPart = multipartStream.readBoundary();
}
}
public static String[] getContentDispositionTokens(String headersJoined) {
String[] headers = headersJoined.split(DELIMITER, -1);
for(String header: headers) {
System.out.println("Processing header: "+header);
if(header != null && header.startsWith("Content-Disposition:")) {
return header.split(";");
}
}
throw new RuntimeException(
String.format("[%s] header not found in supplied headers [%s]", "Content-Disposition:", headersJoined));
}
public static boolean isFileContent(String header) {
String[] tokens = getContentDispositionTokens(header);
for (String token : tokens) {
if (token.trim().startsWith("filename")) {
return true;
}
}
return false;
}
public static String getFileName(String header) {
String[] tokens = getContentDispositionTokens(header);
for (String token : tokens) {
if (token.trim().startsWith("filename")) {
String filename = token.substring(token.indexOf("=") + 2, token.length()-1);
System.out.println("fileName is " + filename);
return filename;
}
}
return null;
}
}
On debugging through the Camel code, I noticed that at one stage Camel converts the entire content into a String. After a point I had to stop pursuing this approach, as there was very little on the net applicable to version 2.12.4 and my work was not going anywhere.
Finally, I resorted to the following solution:
1. Write an implementation of HttpServletRequestWrapper to allow the input stream to be read multiple times (see "How to read request.getInputStream() multiple times").
2. Create a filter that uses the wrapper to wrap the HttpServletRequest object, reads and extracts the file to a directory (see "Convenient way to parse incoming multipart/form-data parameters in a Servlet"), and attaches the path to the request using the request.setAttribute() method. In web.xml, configure this filter on the Restlet servlet.
3. In the process method of the Camel route, cast exchange.getIn().getBody() to an HttpServletRequest object, extract the attribute (the path), and use it to read the file as a byte array for further processing.
Not the cleanest, but I could achieve the objective.
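For illustration, a rough sketch of the filter from step 2 (the class name, attribute name, and temp-file handling are assumptions, not from the original solution; the request must already be wrapped as per step 1 so the Restlet servlet can still read the body afterwards):

import java.io.File;
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;

public class MultipartExtractingFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpReq = (HttpServletRequest) req;
        if (ServletFileUpload.isMultipartContent(httpReq)) {
            ServletFileUpload upload = new ServletFileUpload(new DiskFileItemFactory());
            try {
                for (FileItem item : upload.parseRequest(httpReq)) {
                    if (!item.isFormField()) {
                        // write the upload to a temp file and pass its path to the route
                        File target = File.createTempFile("upload-", ".bin");
                        item.write(target);
                        httpReq.setAttribute("uploadedFilePath", target.getAbsolutePath());
                    }
                }
            } catch (Exception e) {
                throw new ServletException(e);
            }
        }
        chain.doFilter(req, resp);
    }

    @Override
    public void init(FilterConfig config) throws ServletException {
    }

    @Override
    public void destroy() {
    }
}

The Camel processor can then read the "uploadedFilePath" attribute from the HttpServletRequest body and load the file as a byte array.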