Camel Processor not setting headers - apache-camel

I am not able to inject/modify headers in a processor using the Spring DSL config below. Could you please help me figure out what I am doing wrong?
<routeContext xmlns="http://camel.apache.org/schema/spring"
id="routes1">
<route id="sdPoll" de:name="Polling"
de:systemName="Polling" streamCache="true">
<from uri="timer://sdPoll?fixedRate=true&period=60000" />
<process ref="refProcessor" />
<to uri="http://dummyhost" />
<to uri="log:DEBUG?showBody=true&showHeaders=true" />
</route>
</routeContext>
<bean id="refProcessor"
class="com.abc.de.RefProcessor" />
Processor class
public class RefProcessor implements Processor {
private final Logger log = Logger.getLogger(RefProcessor.class);
@SuppressWarnings("unchecked")
@Override
public void process(Exchange exchange) throws Exception {
exchange.getIn().setHeader("Authorization", "TODO");
exchange.getIn().setHeader("CamelHttpMethod", "POST");
exchange.getIn().setHeader("CamelHttpUri", "http://localhost:8280/api/check");
exchange.getIn().setHeader("Content-Type", "application/json");
exchange.getIn().setHeader("Accept", "application/json");
exchange.getIn().setBody("TODO");
//exchange.getOut().setHeaders(exchange.getIn().getHeaders());
//exchange.getOut().setHeader("Authorization", "TODO");
//exchange.getOut().setBody("TODO");
}
}
Logs:
Message History
RouteId ProcessorId Processor Elapsed (ms)
[sdPoll] [sdPoll] [timer://sdPoll?fixedRate=true&period=60000 ] [ 21176]
[null] [onCompletion1 ] [onCompletion ] [ 106]
[sdPoll] [process7 ] [ref:refProcessor ] [ 21067]
[null] [process3 ] [ref:GenericErrorHandle ] [ 21016]
Exchange[
Id ID-ABC-63143-1516034486954-0-2
ExchangePattern InOnly
Headers {breadcrumbId=ID-ABC-63143-1516034486954-0-1, CamelRedelivered=false, CamelRedeliveryCounter=0, firedTime=Mon Jan 15 11:41:31 EST 2018}
BodyType null
Body [Body is null]
]
The Java DSL seems to work, though! So what is wrong with my Spring DSL config?
static RouteBuilder createRouteBuilder3() {
return new RouteBuilder() {
public void configure() {
from("timer://timer1?period=60000").process(new Processor() {
public void process(Exchange exchange) throws UnsupportedEncodingException {
exchange.getIn().setHeader("CamelHttpMethod", "POST");
exchange.getIn().setHeader("Content-Type", "application/json");
exchange.getIn().setHeader("Accept", "application/json");
exchange.getIn().setHeader("CamelHttpUri",
"http://localhost:8280/api/check");
exchange.getIn().setHeader("Authorization", "TODO");
exchange.getIn().setBody("TODO");
}
}).to("http://dummyhost").to("log:DEBUG?showBody=true&showHeaders=true");
}
};
}
Message History
RouteId ProcessorId Processor Elapsed (ms)
[route1 ] [route1 ] [timer://timer1?period=60000 ] [ 86]
[route1 ] [process1 ] [RefProcessorCamel$3$1#258e2e41 ] [ 6]
[route1 ] [to1 ] [http://dummyhost ] [ 76]
Exchange[
Id ID-ABC-63940-1516036107063-0-2
ExchangePattern InOnly
Headers {Accept=application/json, Authorization=TODO, breadcrumbId=ID-ABC-63994-1516036220042-0-1, CamelHttpMethod=POST, CamelHttpUri=http://localhost:8280/api/check, CamelRedelivered=false, CamelRedeliveryCounter=0, Content-Type=application/json, firedTime=Mon Jan 15 12:10:21 EST 2018}
BodyType String
Body TODO
]

Your Processor looks ok.
Just a wild guess, but do you use routeContext in your XML configuration intentionally? If not, can you try to switch to camelContext?
See http://people.apache.org/~dkulp/camel/configuring-camel.html for the difference between routeContext and camelContext
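For reference, a minimal sketch of that switch, assuming the rest of your XML stays as posted: a routeContext only defines routes, so either rename it to camelContext, or keep it and pull it into a camelContext with routeContextRef (the context id below is illustrative).
<camelContext xmlns="http://camel.apache.org/schema/spring" id="camel">
    <!-- pulls the routes defined in routeContext id="routes1" into this running context -->
    <routeContextRef ref="routes1"/>
</camelContext>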


Can the Camel splitter skip message rows of some value, like empty or null?

I have a Camel route on ingress of files received. Sometimes these files contain many, possibly thousands of, empty rows or records; these occur at the end of the files.
Any help or advice on how to handle this situation would be appreciated.
2/3/20 0:25,12.0837099,22.07255971,51.15338002,52.76662495,52.34712651,51.12155216,45.7655507,49.96555147,54.47205637,50.66135512,54.43864717,54.31627797,112.11765,1305.89126,1318.734411,52.31780487,44.27374363,48.72548294,43.01383257,23.85434055,41.98898447,47.50916052,31.13055873,112.2747269,0.773642045,1.081464888,2.740194779,1.938788885,1.421660186,0.617588546,21.28219363,25.03362771,26.76627344,40.21132809,29.72854555,33.45911109
,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
The route goes to a splitter.
<route autoStartup="true" id="core.predix.accept.file.type.route">
<from id="_from3" uri="{{fileEntranceEndpoint}}"/>
<convertBodyTo id="_convertBodyTo1" type="java.lang.String"/>
<split id="_split1" strategyRef="csvAggregationStrategy" streaming="true" stopOnException="true">
<tokenize token="\n"/>
<process id="_process3" ref="toCsvFormat"/>
<!-- passthru only we do not allow embedded commas in numeric data -->
</split>
<log id="_log1" loggingLevel="INFO" message="CSV body: ${body}"/>
<choice id="_choice1">
<when id="_when1">
<simple>${header.CamelFileName} regex '^.*\.(csv|CSV|txt|gpg)$'</simple>
<log id="_log2" message="${file:name} accepted for processing..."/>
<choice id="_choice2">
<when id="_when2">
<simple>${header.CamelFileName} regex '^.*\.(CSV|txt|gpg)$'</simple>
<setHeader headerName="CamelFileName" id="_setHeader1">
<simple>${file:name.noext.single}.csv</simple>
</setHeader>
<log id="_log3" message="${file:name} changed file name."/>
</when>
</choice>
<split id="_split2" streaming="true">
<tokenize prop:group="noOfLines" token="\n"/>
<log id="_log4" message="Split Group Body: ${body}"/>
<to id="_to1" uri="bean:extractHeader"/>
<to id="acceptedFileType" ref="predixConsumer"/>
</split>
<to id="_to2" uri="bean:extractHeader?method=cleanHeader"/>
</when>
<otherwise id="_otherwise1">
<log id="_log5" loggingLevel="INFO" message="${file:name} is an unknown file type, sending to unhandled repo."/>
<to id="_to3" uri="{{unhandledArchive}}"/>
</otherwise>
</choice>
</route>
The simple aggregator
public class CsvAggregationStrategy implements AggregationStrategy {
private Logger log = LoggerFactory.getLogger(CsvAggregationStrategy.class.getName());
@Override
public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
//Theory
//-------------------------------------------------------------------------------------
// Arrived | oldExchange | newExchange | Description
//-------------------------------------------------------------------------------------
// A | NULL | A | first message arrives for the first group
// B | A | B | second message arrives for the first group
// F | NULL | F | first message arrives for the second group
// C | AB | C | third message arrives for the first group
//---------------------------------------------------------------------------------------
log.debug("Aggregation Strategy :: start");
if ( newExchange.getException() != null ) {
if ( oldExchange == null ) {
return newExchange;
} else {
oldExchange.setException(newExchange.getException());
return oldExchange;
}
}
if ( oldExchange == null ) { //This will set the 1st record with the Header
return newExchange;
}
String newBody = newExchange.getIn().getBody(String.class);
String oldBody = oldExchange.getIn().getBody(String.class);
String body = oldBody + newBody;
oldExchange.getIn().setBody( body );
log.debug("Aggregation Strategy :: finish");
return oldExchange;
} //Exchange process
} //class AggregationStrategy
I thought I would handle the empty rows in the class ToCsvFormat.
The class ToCsvFormat simply changes the inbound CSV delimiter to a comma.
public class ToCsvFormat implements Processor {
private static final Logger LOG = LoggerFactory.getLogger(ToCsvFormat.class);
@Override
public void process(Exchange exchange) throws Exception {
String body = exchange.getIn().getBody(String.class);
body = body.replaceAll("\\t|;",",");
String bodyCheck = body.replaceAll(",","").trim();
LOG.info("BODY CHECK: " + bodyCheck);
if ( bodyCheck == null || bodyCheck.isEmpty() ) { // null check must run before isEmpty()
throw new IllegalArgumentException("Data record is Empty or NULL. Invalid Data!");
} else {
StringBuilder sb = new StringBuilder(body.trim());
LOG.debug("CSV Format Body In: " + sb.toString());
LOG.debug("sb length: " + sb.length());
if ( sb.toString().endsWith(",") ) {
sb.deleteCharAt(sb.lastIndexOf(",", sb.length()));
}
LOG.info("CSV Format Body Out: " + sb.toString());
sb.append(System.lineSeparator());
exchange.getIn().setBody(sb.toString());
}
}
}
*** The problem I'm having: I need the splitter to keep processing until it reaches the empty rows, then skip over the empty records or stop, while keeping everything that was already split and processed. Throwing and catching an exception stops the splitter and I get nothing. I'm using the splitter's stopOnException, but, like it says, it stops on the exception.
Thank you
So you set up stopOnException=true and are asking why your route stopped when an exception wasn't caught =) ? As a workaround, forget about throwing an exception: validate your body, and if it has inappropriate data just set an empty body, then concatenate the parts in your AggregationStrategy like in the pseudo-route below. I haven't used the XML description for a very long time, so I hope you will understand this example with the Java DSL.
public class ExampleRoute extends RouteBuilder {
AggregationStrategy aggregationStrategy = new AggregationStrategy() {
@Override
public Exchange aggregate(final Exchange oldExchange, final Exchange newExchange) {
log.debug("Aggregation Strategy :: start");
if (oldExchange != null) {
newExchange.getIn().setBody(newExchange.getIn().getBody(String.class) + oldExchange.getIn().getBody(String.class));
}
log.debug("Aggregation Strategy :: finish");
return newExchange;
}
};
@Override
public void configure() throws Exception {
from("{{fileEntranceEndpoint}}")
.convertBodyTo(String.class)
.split(tokenize("\n"), aggregationStrategy).streaming().stopOnException()
.choice()
.when(body().regex("^,+$"))
.setBody(constant(""))
.otherwise()
.process("toCsvFormat")
;
}
}
I recommend you use the Java DSL. As you can see, many things are easier with it.
Thank you, C0ld. I appreciate you going easy. Yeah, I get it; sometimes we do silly things, which is why another pair of eyes is a wonderful thing. I took your suggestion and it works like a charm. Thank you very much for responding.
<split id="_split1"
strategyRef="emptyRecordAggregationStrategy" streaming="true">
<tokenize token="\n"/>
<choice id="_choice5">
<when id="_when5">
<simple>${body} regex '^,+$'</simple>
<setBody id="_setBody1">
<constant/>
</setBody>
</when>
<otherwise>
<process id="_processCSV" ref="toCsvFormat"/>
</otherwise>
</choice>
</split>
public class EmptyRecordAggregationStrategy implements AggregationStrategy {
private Logger log = LoggerFactory.getLogger(EmptyRecordAggregationStrategy.class.getName());
@Override
public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
if ( newExchange.getException() != null ) {
if ( oldExchange == null ) {
newExchange.getIn().setBody(newExchange.getIn().getBody(String.class) + System.lineSeparator());
return newExchange;
} else {
oldExchange.getIn().setBody(oldExchange.getIn().getBody(String.class) + System.lineSeparator());
return oldExchange;
}
}
if ( oldExchange == null ) {
newExchange.getIn().setBody(newExchange.getIn().getBody(String.class) + System.lineSeparator());
return newExchange;
}
if ( !newExchange.getIn().getBody(String.class).isEmpty() ) {
oldExchange.getIn().setBody(oldExchange.getIn().getBody(String.class) + newExchange.getIn().getBody(String.class) + System.lineSeparator());
}
return oldExchange;
}
}

Apache Camel AMQ - cannot write file to queue - connection reset by peer/client

Thanks in advance!
I am trying to write a file from FTP to an AMQ queue.
Reason: I'm trying to add failover to my routes using Camel JMS and AMQ.
I'm new to Apache ActiveMQ JMS. I have 2 AMQ brokers on 2 separate nodes. On 2 other nodes/servers I have my JBoss Fuse Karaf container client applications. I connect to the brokers from the clients and can see the AMQ console, logs, etc.; however, I cannot write a file to the queue from my FTP or email routes. I'm guessing I am doing something wrong and hope you can help with this problem. I'm also wondering whether this can be done at all the way I'm attempting it.
AMQ broker node config snippets (same on both):
I've tried removing any related timeouts; no difference.
activemq.xml
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" producerFlowControl="true">
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
<policyEntry queue=">" producerFlowControl="true" memoryLimit="1gb">
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage percentOfJvmHeap="90"/>
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<transportConnectors>
<transportConnector name="openwire" uri="tcp://10.141.145.173:61617?connectionTimeout=0&keepAlive=true&useInactivityMonitor=false&wireFormat.maxInactivityDuration=0&enableStatusMonitor=true"/>
</transportConnectors>
node 2
<transportConnectors>
<transportConnector name="openwire" uri="tcp://10.141.128.182:61617?connectionTimeout=0&keepAlive=true&useInactivityMonitor=false&wireFormat.maxInactivityDuration=0&enableStatusMonitor=true"/>
</transportConnectors>
SYSTEM.cfg
# node 1
activemq.port = 61617
#activemq.host = localhost
activemq.host = 10.141.145.173
activemq.url = tcp://${activemq.host}:${activemq.port}
#
# Activemq configuration node 2
#
activemq.port = 61617
#activemq.host = localhost
activemq.host = 10.141.128.182
activemq.url = tcp://${activemq.host}:${activemq.port}
CLIENT using Failover Transport
jmsSourceDestination=Fleet.InboundFile.Queue
clusteredJmsMaximumRedeliveries=5
amq.url=failover://(tcp://10.141.145.173:61617,tcp://10.141.128.182:61617)?initialReconnectDelay=1000&randomize=false&timeout=5000
amq.username=admin
amq.password=admin
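For context, a minimal sketch (class name and wiring are illustrative, not taken from the actual project) of how a failover URL like the one above is typically handed to the Camel ActiveMQ component in Java:
import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class AmqClientWiring {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        // Failover transport: the client reconnects to whichever broker is available.
        ActiveMQComponent amq = ActiveMQComponent.activeMQComponent(
                "failover://(tcp://10.141.145.173:61617,tcp://10.141.128.182:61617)"
                + "?initialReconnectDelay=1000&randomize=false&timeout=5000");
        amq.setUserName("admin");  // amq.username from the config above
        amq.setPassword("admin");  // amq.password from the config above
        context.addComponent("activemq", amq);
        // routes that produce to activemq:queue:Fleet.InboundFile.Queue go here
        context.start();
    }
}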
Route formation: I have a dynamic route builder where I drop in a config to create routes.
@Override
public void process(Exchange exchange) throws Exception {
final String endpointConfigurationStr = exchange.getIn().getBody(String.class);
LOG.info(endpointConfigurationStr);
final String fileName = (String) exchange.getIn().getHeader("CamelFileName");
if ((null != endpointConfigurationStr) && (null != fileName)) {
Properties props = new Properties();
props.load(new StringReader(endpointConfigurationStr));
if (validateProperties(props)) {
final String decoderName = props.getProperty("decoderName");
LOG.info("DECODER NAME: " + decoderName);
final String fileNameNoExtension = fileName.substring(0, fileName.lastIndexOf('.'));
final String routeIdStr = String.format("fleet.inboundFile.%s.%s.Route", fileNameNoExtension, props.getProperty("transport"));
if (props.getProperty("action").equalsIgnoreCase("activate")) {
ServiceStatus routeStatus = exchange.getContext().getRouteStatus(routeIdStr);
if ((null == routeStatus) || (routeStatus.isStopped())) {
exchange.getContext().addRoutes(new EndpointFileRouteBuilder(routeIdStr,
EndpointDescriptorFactory(props),
props.getProperty("errorArchive"),
props.getProperty("unhandledArchive"),
destinationEndpoint,
decoderName) );
} else {
LOG.info("Route " + routeIdStr + " already started");
}
} else if (props.getProperty("action").equalsIgnoreCase("deactivate")) {
ServiceStatus routeStatus = exchange.getContext().getRouteStatus(routeIdStr);
if (routeStatus.isStarted()) {
exchange.getContext().stopRoute(routeIdStr);
} else {
LOG.debug("Route " + routeIdStr + " already stopped");
}
} else {
LOG.error("Invalid Action in File Properties");
}
} else {
LOG.error("Invalid Properties File ");
}
} else {
LOG.error("File Configuration File or File Name is null");
}
}
Routes FROM STR AND TO STR
if (validateConfiguration()) {
switch (transport.toLowerCase()) {
case "ftp":
fromStr = String.format("%s://%s#%s:%s/%s?password=RAW(%s)&recursive=%s&stepwise=%s&useList=%s&passiveMode=%s&disconnect=%s"
+ "&move=.processed"
+ "&maxMessagesPerPoll=1"
+ "&eagerMaxMessagesPerPoll=false"
+ "&sortBy=file:modified"
+ "&sendEmptyMessageWhenIdle=false"
+ "&delay=60000"
+ "&initialDelay=60000"
+ "&connectTimeout=15000"
+ "&localWorkDirectory=/tmp"
+ "&readLockMinLength=0"
, transport, username, host, port, path, password, recursive, stepwise, useList, passiveMode, disconnect);
break;
case "sftp":
fromStr = String.format("%s://%s#%s:%s/%s?password=RAW(%s)&recursive=%s&stepwise=%s&useList=%s&passiveMode=%s&disconnect=%s"
+ "&move=.processed"
+ "&maxMessagesPerPoll=1"
+ "&eagerMaxMessagesPerPoll=false"
+ "&sortBy=file:modified"
+ "&sendEmptyMessageWhenIdle=false"
+ "&delay=60000"
+ "&initialDelay=60000"
+ "&connectTimeout=15000"
+ "&localWorkDirectory=/tmp"
+ "&readLockMinLength=0"
, transport, username, host, port, path, password, recursive, stepwise, useList, passiveMode, disconnect);
break;
case "file":
fromStr = String.format("%s://%s/?recursive=%s"
+ "&move=.processed"
+ "&readLock=changed"
+ "&maxMessagesPerPoll=1"
+ "&sortBy=file:modified"
+ "&delay=60000"
+ "&initialDelay=60000"
+ "&renameUsingCopy=true"
,transport, path, recursive);
break;
default:
LOG.info("Unsupported transport, cannot establish from or source endpoint!");
throw new UnsupportedTransportException("Unsupported transport, cannot establish from or source endpoint!");
}
// Format the To Endpoint from Parameter(s).
final String toStr = String.format("%s", toEndpoint);
LOG.info("*** toStr - toEndpoint: " + toStr);
ROUTE CREATION
if (Boolean.parseBoolean(isEncryptedWithCompression)) {
//Compression and Encryption
PGPDataFormat pgpVerifyAndDecrypt = new PGPDataFormat();
pgpVerifyAndDecrypt.setKeyFileName("keys/secring.gpg");
pgpVerifyAndDecrypt.setKeyUserid(pgpKeyUserId);
pgpVerifyAndDecrypt.setPassword(pgpPassword);
pgpVerifyAndDecrypt.setArmored(Boolean.parseBoolean(pgpArmored));
pgpVerifyAndDecrypt.setSignatureKeyFileName("keys/pubring.gpg");
pgpVerifyAndDecrypt.setSignatureKeyUserid(pgpKeyUserId);
pgpVerifyAndDecrypt.setSignatureVerificationOption(PGPKeyAccessDataFormat.SIGNATURE_VERIFICATION_OPTION_IGNORE);
from(fromStr)
// .routeId("Compression.with.Encryption")
.routeId(routeId)
.log("Message received ${file:name} for Compression and Encryption " + " from host " + host)
.unmarshal(pgpVerifyAndDecrypt).split(new ZipSplitter())
.streaming().convertBodyTo(String.class)
// .wireTap("file:" + fileArchive)
.split(body()).streaming()
.process(new EndpointParametersProcessor(decoderName))
.to(toStr);
} else if (Boolean.parseBoolean(isEncryptedOnly)) {
//Encryption Only
PGPDataFormat pgpVerifyAndDecrypt = new PGPDataFormat();
pgpVerifyAndDecrypt.setKeyFileName("keys/secring.gpg");
pgpVerifyAndDecrypt.setKeyUserid(pgpKeyUserId);
pgpVerifyAndDecrypt.setPassword(pgpPassword);
pgpVerifyAndDecrypt.setArmored(Boolean.parseBoolean(pgpArmored));
pgpVerifyAndDecrypt.setSignatureKeyFileName("keys/pubring.gpg");
pgpVerifyAndDecrypt.setSignatureKeyUserid(pgpKeyUserId);
pgpVerifyAndDecrypt.setSignatureVerificationOption(PGPKeyAccessDataFormat.SIGNATURE_VERIFICATION_OPTION_IGNORE);
from(fromStr)
// .routeId("Encryption.Only")
.routeId(routeId)
.log("Message received ${file:name} for Encryption Only " + " from host " + host)
.unmarshal(pgpVerifyAndDecrypt)
.convertBodyTo(String.class)
.choice()
.when(simple("${header.CamelFileName} ends with 'gpg'"))
.setHeader("CamelFileName", simple("${file:name.noext.single}"))
// .wireTap("file:" + fileArchive)
.split(body()).streaming()
.process(new EndpointParametersProcessor(decoderName))
.to(toStr);
} else if (Boolean.parseBoolean(isCompressedOnly)) { //Only Zipped or Compressed
ZipFileDataFormat zipFile = new ZipFileDataFormat();
zipFile.setUsingIterator(true);
from(fromStr)
.routeId(routeId)
// .routeId("Zipped.Only")
.log(LoggingLevel.INFO, "Message received ${file:name} for Only Zipped or Compressed files from host " + host)
.unmarshal(zipFile)
.split(body(Iterator.class))
.streaming()
.convertBodyTo(String.class)
// .wireTap("file:" + fileArchive)
.split(body().tokenize("\n"), new FleetAggregationStrategy()).streaming()
.process(new EndpointParametersProcessor(decoderName))
.end()
.choice()
.when(simple("${body.length} > '0'" ))
.to(toStr)
.end();
} else {
//No Compression No Encryption Basic plain data file
from(fromStr)
.routeId(routeId)
.log(LoggingLevel.INFO, "Message Received for No Compression No Encryption Basic plain data file " + " from " + host)
// .wireTap("file:" + fileArchive)
.split(body()).streaming()
.process(new EndpointParametersProcessor(decoderName))
.to(toStr);
}
toStr dynamic property
destinationEndpoint=activemq:queue:Fleet.InboundFile.Queue
EFFECTIVE ROUTE
<route xmlns="http://camel.apache.org/schema/spring" customId="true" id="fleet.inboundFile.gcms-new-activate.sftp.Route">
<from uri="sftp://500100471#gemft.corporate.ge.com:10022/fromvan/gary?password=RAW(+W93j2Wa)&recursive=false&stepwise=false&useList=true&passiveMode=true&disconnect=false&move=.processed&maxMessagesPerPoll=1&eagerMaxMessagesPerPoll=false&sortBy=file:modified&sendEmptyMessageWhenIdle=false&delay=60000&initialDelay=60000&connectTimeout=15000&localWorkDirectory=/tmp&readLockMinLength=0"/>
<onException id="onException11">
<to id="to14" uri="file:/GlobalScapeSftpRepo/Fleet/ge-digital/fleet/core/error"/>
</onException>
<onException id="onException13">
<to id="to16" uri="file:///GlobalScapeSftpRepo/Fleet/ge-digital/fleet/core/unhandled"/>
</onException>
<log id="log15" loggingLevel="INFO" message="Message received ${file:name} for Only Zipped or Compressed files from host gemft.corporate.ge.com"/>
<unmarshal id="unmarshal1">
<gzip xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="dataFormat"/>
</unmarshal>
<split id="split1" streaming="true">
<simple>${bodyAs(java.util.Iterator)}</simple>
<convertBodyTo id="convertBodyTo1" type="java.lang.String"/>
<split id="split2" streaming="true">
<expressionDefinition>tokenize(simple{${body}}, \n)</expressionDefinition>
<process id="process1"/>
</split>
<choice id="choice1">
<when id="when1">
<simple>${body.length} > '0'</simple>
<to id="to18" uri="activemq:queue:Fleet.InboundFile.Queue"/>
</when>
</choice>
</split>
</route>
WHAT I GET on Broker node
11:13:15,119 | WARN | d]-nio2-thread-2 | ServerSession | 156 - org.apache.sshd.core - 0.14.0.redhat-001 | Exception caught
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)[:1.7.0_241]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)[:1.7.0_241]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)[:1.7.0_241]
at sun.nio.ch.IOUtil.read(IOUtil.java:197)[:1.7.0_241]
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.finishRead(UnixAsynchronousSocketChannelImpl.java:387)[:1.7.0_241]
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.finish(UnixAsynchronousSocketChannelImpl.java:191)[:1.7.0_241]
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.onEvent(UnixAsynchronousSocketChannelImpl.java:213)[:1.7.0_241]
at sun.nio.ch.EPollPort$EventHandlerTask.run(EPollPort.java:293)[:1.7.0_241]
at java.lang.Thread.run(Thread.java:748)[:1.7.0_241]
It appears the client is closing the connection without transferring the file? Can this be done this way, or at all? Files will range from 500 KB to 1 GB.
The system hangs after the transfer attempt and must be recycled.

Self-hosted Nancy instance returning 404 errors

I'm trying to get a self-hosted Nancy app running, but I'm having trouble getting it to return valid responses. I'm new to Nancy; I expect my problem is something fairly simple.
Here's some code:
class Program
{
static void Main(string[] args)
{
const String PORT_SETTING = "webServicePortNumber";
const String URI = "http://localhost:{0}/download/";
var portNum = ConfigurationManager.AppSettings[PORT_SETTING];
var uri = new Uri(String.Format(URI, portNum));
var config = new HostConfiguration {
UrlReservations = new UrlReservations { CreateAutomatically = true }
};
using (var nancyHost = new NancyHost(new Bootstrapper(), config, uri)) {
nancyHost.Start();
Console.WriteLine(String.Format("Listening on {0}. Press any key to stop.", uri.AbsoluteUri));
Console.ReadKey();
}
Console.WriteLine("Stopped. Press any key to exit.");
Console.ReadKey();
}
}
internal class Bootstrapper : DefaultNancyBootstrapper
{
protected override Nancy.Diagnostics.DiagnosticsConfiguration DiagnosticsConfiguration
{
get {
return new DiagnosticsConfiguration {
Password = @"[password]"
};
}
}
}
My NancyModule looks like this:
public class DownloadsModule : NancyModule
{
public DownloadsModule() : base("/download")
{
RegisterRoutes();
}
private void RegisterRoutes()
{
Put["/"] = parms => InitiateDownload(parms);
Get["/"] = parms => Summary(parms);
Get["/{id}"] = parms => GetStatus(parms.requestId);
}
private Response GetStatus(Guid requestId)
{
return Response.AsText("TEST: GetStatus requestId " + requestId);
}
private Response Summary(dynamic parms)
{
return Response.AsText("Summary: You loved me before, do you love me now?");
}
private Response InitiateDownload(dynamic parms)
{
return Response.AsText("InitiateDownload.");
}
}
Nancy is running; I can access the diagnostics at http://127.0.0.1:8880/download/_Nancy/. Looking at them, the routes appear ready. Interactive Diagnostics/GetAllRoutes shows:
PUT
name: [nothing] path: /download
GET
name: [nothing] path: /download
name: [nothing] path: /download/{id}
And yet, I'm getting 404s back when I try http://localhost:8880/download/.
The request trace on the diagnostics page shows:
Method: GET
Request Url:
Scheme: http
Host Name: localhost
Port: 8880
Base Path: /download
Path: /
Query:
Site Base: http://localhost:8880
Is Secure: false
Request Content Type:
Response Content Type: text/html
Request Headers:
<snip>
Accept: text/html;q=1
application/xhtml+xml;q=1
image/webp;q=1
application/xml;q=0.9
*/*;q=0.8
<snip>
Response Headers:
Status Code: 404
Log: New Request Started
[DefaultResponseNegotiator] Processing as real response
So why isn't Nancy routing this request to the proper route?
The problem was pointed out to me by jchannon in the Nancy JabbR room:
The URI specifies http://localhost:{0}/download/, while the module also specifies a base path of /download, so currently it's looking for a URL of http://localhost:{0}/download/download/.
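A minimal sketch of the fix, assuming the rest of the code above stays unchanged (either option works; don't apply both):
// Option 1: host at the root and let the module's base path supply /download.
const String URI = "http://localhost:{0}/";

// Option 2: keep the host URI as-is and drop the module's base path.
public DownloadsModule() // instead of : base("/download")
{
    RegisterRoutes();
}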

Is there any REST service available in Salesforce to Convert Leads into Accounts?

We have to convert Leads to Accounts via REST (OAuth) calls. We are able to create, update (edit), and read Lead fields, but not able to convert them.
We found that the same is possible via the SOAP API, but we are using REST with OAuth only.
Yes, and we resolved this by creating an Apex class for the REST call. Sample code below:
@RestResource(urlMapping='/Lead/*')
global with sharing class RestLeadConvert {
@HttpGet
global static String doGet() {
String ret = 'fail';
RestRequest req = RestContext.request;
RestResponse res = RestContext.response;
String leadId = req.requestURI.substring(req.requestURI.lastIndexOf('/')+1);
Database.LeadConvert lc = new Database.LeadConvert();
lc.setLeadId(leadId);
LeadStatus convertStatus = [SELECT Id, MasterLabel FROM LeadStatus WHERE IsConverted=true LIMIT 1];
lc.setConvertedStatus(convertStatus.MasterLabel);
Database.LeadConvertResult lcr ;
try{
lcr = Database.convertLead(lc);
system.debug('*****lcr.isSuccess()'+lcr.isSuccess());
ret = 'ok';
}
catch(exception ex){
system.debug('***NOT CONVERTED**');
}
return ret;
}
}
And you can invoke this call via:
<Your Instance URL>/services/apexrest/Lead/<LeadId>
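For example, with curl (the instance URL and LeadId are placeholders; the token is the same OAuth access token you use for your other REST calls):
curl -H 'Authorization: Bearer YOUR_OAUTH_TOKEN' https://your-instance-url/services/apexrest/Lead/<LeadId>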
This test will give you around 93% coverage.
@isTest
public class RestLeadConvertTest{
static testMethod void testHttpGet() {
Lead l = new Lead();
l.FirstName = 'First';
l.LastName = 'Last';
l.Company = 'Unit Test';
insert l;
Test.startTest();
RestRequest req = new RestRequest();
RestResponse res = new RestResponse();
req.requestURI = '/Lead/' + l.Id;
req.httpMethod = 'GET';
RestContext.request = req;
RestContext.response= res;
RestLeadConvert.doGet();
Test.stopTest();
}
}
You can construct a one-off SOAP request to convert a lead and use the same OAuth token that you already have for the REST API.
The request body should look like:
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ens="urn:sobject.partner.soap.sforce.com" xmlns:fns="urn:fault.partner.soap.sforce.com" xmlns:tns="urn:partner.soap.sforce.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<soap:Header>
<tns:SessionHeader>
<sessionId>YOUR_OAUTH_TOKEN</sessionId>
</tns:SessionHeader>
</soap:Header>
<soap:Body>
<tns:convertLead>
<tns:leadConverts>
<tns:leadId>YOUR_LEAD_ID</tns:leadId>
<tns:convertedStatus>Closed - Converted</tns:convertedStatus>
</tns:leadConverts>
</tns:convertLead>
</soap:Body>
</soap:Envelope>
curl -H 'SOAPAction: null' -H 'Content-Type: text/xml' --data BODY_FROM_ABOVE https://your-instance-url/services/Soap/u/52.0
Note that the SOAPAction header is required, even though Salesforce does not use it.
The result will be returned as XML similar to:
<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<soapenv:Header>
<LimitInfoHeader>
<limitInfo>
<current>91</current>
<limit>15000</limit>
<type>API REQUESTS</type>
</limitInfo>
</LimitInfoHeader>
</soapenv:Header>
<soapenv:Body>
<convertLeadResponse>
<result>
<accountId>0015x00002C95kMAAR</accountId>
<contactId>0035x00003NjdeyAAB</contactId>
<leadId>00Q5x00001tHg1tEAC</leadId>
<opportunityId>0065x000025fDsWAAU</opportunityId>
<success>true</success>
</result>
</convertLeadResponse>
</soapenv:Body>
</soapenv:Envelope>
If you are more comfortable with JSON than XML, OneGraph provides a GraphQL API that wraps the convertLead functionality.
It's best to create your own OneGraph app to get a custom app_id, but one is provided here for demonstration purposes.
The GraphQL query will be:
mutation ConvertLead {
salesforce(
auths: {
salesforceOAuth: {
instanceUrl: "YOUR_INSTANCE_URL"
token: "YOUR_OAUTH_TOKEN"
}
}
) {
convertLead(
input: { leadConverts: [{ leadId: "YOUR_LEAD_ID" }] }
) {
leadConverts {
lead {
id
name
}
account {
name
id
}
contact {
name
id
}
opportunity {
name
id
}
success
errors {
message
statusCode
}
}
}
}
}
Then the request will look like:
curl -H 'Content-Type: application/json' 'https://serve.onegraph.com/graphql?app_id=4687c59d-8f9c-494a-ab67-896fd706cee9' --data '{"query": "QUERY_FROM_ABOVE"}'
The result will be returned as JSON that looks like:
{
"data": {
"salesforce": {
"convertLead": {
"leadConverts": [
{
"lead": {
"id": "00Q5x00001tHg1tEAC",
"name": "Daniel Boone"
},
"account": {
"name": "Company",
"id": "0015x00002C95kMAAR"
},
"contact": {
"name": "Daniel Boone",
"id": "0035x00003NjdeyAAB"
},
"opportunity": {
"name": "New Opportunity",
"id": "0065x000025fDsWAAU"
},
"relatedPersonAccountId": null,
"success": true,
"errors": []
}
]
}
}
}
}

Solr / Lucene: test query against doc without indexing

I need to test whether certain documents match a query before actually indexing them. How would you do this? One of the possibilities I'm thinking of is running a plain Lucene index in memory (ramdisk?) and following an index -> test query -> delete loop for every new document I have before sending it to the actual Solr server.
Can anyone think of a better solution for this problem?
Thanks a lot.
Update:
Looks like this could be a good starting point: http://www.lucenetutorial.com/lucene-in-5-minutes.html
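If you go the pure-Lucene route, there is a purpose-built class for this check: MemoryIndex holds a single document in memory and scores a query against it, so no index -> test query -> delete loop is needed. A minimal sketch (recent Lucene versions; the field name and analyzer choice are illustrative):
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;

public class MatchBeforeIndexing {
    public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer();

        // One candidate document, held entirely in memory.
        MemoryIndex index = new MemoryIndex();
        index.addField("text", "the quick brown fox", analyzer);

        // The query the document must match before it is sent to Solr.
        Query query = new QueryParser("text", analyzer).parse("quick AND fox");

        // search() returns a relevance score; anything > 0 means a match.
        float score = index.search(query);
        System.out.println(score > 0.0f ? "matches" : "no match");
    }
}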
Since Solr supports transactions/commits, you can actually index the documents and, before you commit, issue a delete query that removes all non-matching documents.
/**
* @author Omnaest
*/
public class SolrSimpleIndexingTest
{
protected SolrServer solrServer = newSolrServerInstance();
@Test
public void testSolr() throws IOException,
SolrServerException
{
{
SolrInputDocument solrInputDocument = new SolrInputDocument();
{
solrInputDocument.addField( "id", "0" );
solrInputDocument.addField( "text", "test1" );
}
this.solrServer.add( solrInputDocument );
}
{
SolrInputDocument solrInputDocument = new SolrInputDocument();
{
solrInputDocument.addField( "id", "1" );
solrInputDocument.addField( "text", "test2" );
}
this.solrServer.add( solrInputDocument );
}
this.solrServer.deleteByQuery( "text:([* TO *] -test2)" );
this.solrServer.commit();
/*
* Now your index does only contain the document with id=1 !!
*/
QueryResponse queryResponse = this.solrServer.query( new SolrQuery().setQuery( "*:*" ) );
SolrDocumentList solrDocumentList = queryResponse.getResults();
assertEquals( 1, solrDocumentList.size() );
assertEquals( "1", solrDocumentList.get( 0 ).getFieldValue( "id" ) );
}
/**
* @return
*/
private static CommonsHttpSolrServer newSolrServerInstance()
{
try
{
return new CommonsHttpSolrServer( "http://localhost:8983/solr" );
}
catch ( MalformedURLException e )
{
e.printStackTrace();
fail();
}
return null;
}
}
