Problem Background
I am currently working on a Camel-based ETL application that processes groups of files as they appear in a dated directory. The files need to be processed together as a group, determined by the beginning of the file name. The files can only be processed once the done file (".flag") has been written to the directory. I know the Camel file component has a done-file option, but that only allows you to retrieve files with the same name as the done file. The application needs to run continuously and start polling the next day's directory when the date rolls.
Example Directory Structure:
/process-directory
    /03-09-2011
    /03-10-2011
        /GROUPNAME_ID1_staticfilename.xml
        /GROUPNAME_staticfilename2.xml
        /GROUPNAME.flag
        /GROUPNAME2_ID1_staticfilename.xml
        /GROUPNAME2_staticfilename2.xml
        /GROUPNAME2_staticfilename3.xml
        /GROUPNAME2.flag
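For illustration, the group key in this layout could be derived purely from the file name: the prefix before the first underscore for data files, and the base name for the flag file. This is only a sketch based on the example names above, assuming the group prefix itself never contains an underscore:

```java
// Sketch: derive the group key from a file name in the dated directory.
// Assumes the group prefix never contains an underscore, per the example names.
class GroupKey {
    static String groupOf(String fileName) {
        if (fileName.endsWith(".flag")) {
            return fileName.substring(0, fileName.length() - ".flag".length());
        }
        int underscore = fileName.indexOf('_');
        return underscore >= 0 ? fileName.substring(0, underscore) : fileName;
    }
}
```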
Attempts Thus Far
I have the following route (names obfuscated) that kicks off the processing:
@Override
public void configure() throws Exception
{
    getContext().addEndpoint("processShare", createProcessShareEndpoint());

    from("processShare")
        .process(new InputFileRouter())
        .choice()
            .when().simple("${header.processorName} == '" + InputFileType.TYPE1 + "'")
                .to("seda://type1?size=1")
            .when().simple("${header.processorName} == '" + InputFileType.TYPE2 + "'")
                .to("seda://type2?size=1")
            .when().simple("${header.processorName} == '" + InputFileType.TYPE3 + "'")
                .to("seda://type3?size=1")
            .when().simple("${header.processorName} == '" + InputFileType.TYPE4 + "'")
                .to("seda://type4?size=1")
            .when().simple("${header.processorName} == '" + InputFileType.TYPE5 + "'")
                .to("seda://type5?size=1")
            .when().simple("${header.processorName} == '" + InputFileType.TYPE6 + "'")
                .to("seda://type6?size=1")
            .when().simple("${header.processorName} == '" + InputFileType.TYPE7 + "'")
                .to("seda://type7?size=1")
            .otherwise()
                .log(LoggingLevel.FATAL, "Unknown file type encountered during processing! --> ${body}");
}
My problems are around how to configure the file endpoint. I'm currently trying to programmatically configure the endpoint without a lot of luck. My experience in Camel thus far has been predominantly with the Spring DSL rather than the Java DSL.
I went down the route of trying to instantiate a FileEndpoint object, but whenever the route builds I get an error saying that the file property is null. I believe this is because I should be creating a FileComponent and not an endpoint. I'm not creating the endpoint from a URI because I am not able to specify the dynamic date in the directory name using the URI.
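For reference, the dated directory name itself is straightforward to compute. A minimal sketch (the root directory and date pattern here are illustrative, matching the defaults in the snippet below):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

class DatedDir {
    // Builds the poll directory for a given date, e.g. /process-directory/03-10-2011.
    static String datedDir(String root, String pattern, Date date) {
        return root + "/" + new SimpleDateFormat(pattern).format(date);
    }
}
```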
private FileEndpoint createProcessShareEndpoint() throws ConfigurationException
{
    FileEndpoint endpoint = new FileEndpoint();
    // Custom directory "ready to process" implementation.
    endpoint.setProcessStrategy(getContext().getRegistry().lookup(
            "inputFileProcessStrategy", MyFileInputProcessStrategy.class));
    try
    {
        // Controls the number of files returned per directory poll.
        endpoint.setMaxMessagesPerPoll(Integer.parseInt(
                PropertiesUtil.getProperty(
                        AdapterConstants.OUTDIR_MAXFILES, "1")));
    }
    catch (NumberFormatException e)
    {
        throw new ConfigurationException(String.format(
                "Property %s is required to be an integer.",
                AdapterConstants.OUTDIR_MAXFILES), e);
    }

    Map<String, Object> consumerPropertiesMap = new HashMap<String, Object>();
    // Controls the delay between directory polls.
    consumerPropertiesMap.put("delay", PropertiesUtil.getProperty(
            AdapterConstants.OUTDIR_POLLING_MILLIS));
    // Controls which files are included in directory polls.
    // Regex that matches the flag-file extension (e.g. {SOME_FILE}.flag);
    // note the dot must be escaped or it matches any character.
    consumerPropertiesMap.put("include", "^.*(\\." + PropertiesUtil.getProperty(
            AdapterConstants.OUTDIR_FLAGFILE_EXTENSION, "flag") + ")");
    endpoint.setConsumerProperties(consumerPropertiesMap);

    GenericFileConfiguration configuration = new GenericFileConfiguration();
    // Controls the directory to be polled by the endpoint.
    if (CommandLineOptions.getInstance().getInputDirectory() != null)
    {
        configuration.setDirectory(CommandLineOptions.getInstance().getInputDirectory());
    }
    else
    {
        SimpleDateFormat dateFormat = new SimpleDateFormat(
                PropertiesUtil.getProperty(AdapterConstants.OUTDIR_DATE_FORMAT, "MM-dd-yyyy"));
        configuration.setDirectory(
                PropertiesUtil.getProperty(AdapterConstants.OUTDIR_ROOT) + "\\" +
                dateFormat.format(new Date()));
    }
    endpoint.setConfiguration(configuration);
    return endpoint;
}
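The include option above is matched as a regular expression against the file name. With the dot escaped, the intended flag-file pattern behaves like this (a small standalone sketch of the same regex):

```java
import java.util.regex.Pattern;

class FlagFilter {
    // Mirrors the endpoint's include option: match files ending in ".flag".
    static final Pattern FLAG = Pattern.compile("^.*(\\.flag)$");

    static boolean isFlagFile(String name) {
        return FLAG.matcher(name).matches();
    }
}
```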
Questions
1. Is implementing a GenericFileProcessStrategy the correct thing to do in this situation? If so, is there an example of this somewhere? I have looked through the Camel file unit tests and didn't see anything that jumped out at me.
2. What am I doing wrong with configuring the endpoint? I feel like the answer to cleaning up this mess is tied in with question 3.
3. Can you configure the file endpoint to roll over to the new dated folder when polling and the date changes?
As always thanks for the help.
You can refer to a custom ProcessStrategy from the endpoint URI using the processStrategy option, e.g. file:xxxx?processStrategy=#myProcess. Notice how we prefix the value with # to indicate it should be looked up from the registry. So in Spring XML you just add a
<bean id="myProcess" ...> tag.
In Java it's probably easier to grab the endpoint from the CamelContext API:
FileEndpoint file = context.getEndpoint("file:xxx?aaa=123&bbb=456", FileEndpoint.class);
This allows you to preconfigure the endpoint. And of course afterwards you can use the API on FileEndpoint to set other configurations.
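Putting those two ideas together, a sketch (untested; `rootDir` is a placeholder, and `#inputFileProcessStrategy` is assumed to be the registry id of the custom strategy) inside configure() might look like:

```java
// Sketch inside RouteBuilder.configure(): build the dated directory, resolve the
// endpoint from a URI carrying the registry-looked-up process strategy, then
// tweak the rest through the FileEndpoint API.
String dir = rootDir + "/" + new SimpleDateFormat("MM-dd-yyyy").format(new Date());
FileEndpoint endpoint = getContext().getEndpoint(
        "file://" + dir + "?processStrategy=#inputFileProcessStrategy",
        FileEndpoint.class);
endpoint.setMaxMessagesPerPoll(1); // further options via the FileEndpoint API
from(endpoint).process(new InputFileRouter());
```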
In Java, this is how to use a GenericFileProcessStrategy:
@Component
public class CustomGenericFileProcessingStrategy<T> extends GenericFileProcessStrategySupport<T> {

    public CustomGenericFileProcessingStrategy() {
    }

    @Override
    public boolean begin(GenericFileOperations<T> operations, GenericFileEndpoint<T> endpoint, Exchange exchange, GenericFile<T> file) throws Exception {
        super.begin(operations, endpoint, exchange, file);
        ...
    }

    @Override
    public void commit(GenericFileOperations<T> operations, GenericFileEndpoint<T> endpoint, Exchange exchange, GenericFile<T> file) throws Exception {
        super.commit(operations, endpoint, exchange, file);
        ...
    }

    @Override
    public void rollback(GenericFileOperations<T> operations, GenericFileEndpoint<T> endpoint, Exchange exchange, GenericFile<T> file) throws Exception {
        super.rollback(operations, endpoint, exchange, file);
        ...
    }
}
And then create your RouteBuilder class:
public class MyRoutes extends RouteBuilder {

    private final CustomGenericFileProcessingStrategy customGenericFileProcessingStrategy;

    public MyRoutes(CustomGenericFileProcessingStrategy customGenericFileProcessingStrategy) {
        this.customGenericFileProcessingStrategy = customGenericFileProcessingStrategy;
    }

    @Override
    public void configure() throws Exception {
        FileEndpoint fileEndpoint = getContext().getEndpoint("file://mySourceDirectory", FileEndpoint.class);
        fileEndpoint.setProcessStrategy(customGenericFileProcessingStrategy);

        from(fileEndpoint).setBody(...).process(...).toD(...);
        ...
    }
}
Related
I have a scenario where I have to read a file from a location at a certain interval, extract the file name and file path, call two REST services (a GET and a POST) using those inputs, and place the file in the appropriate location. I have managed a pseudo code as follows.
Wanted to know if there's a better way of achieving this using Camel. Appreciate your help!
The flow is:
1. Extract the fileName.
2. Hit a GET endpoint ('getAPIDetails') using that fileName as an input to check if that fileName exists in the registry.
3. If the response is successful (status code 200):
   - Call a POST endpoint ('registerFile') with fileName & filePath as the request body.
   - Move the file to the C:/output folder (moving the file is still TODO in the code below).
4. If the file is not found (status code 404):
   - Move the file to the C:/error folder.
'FileDetails' below is a POJO consisting of fileName & filePath, which will be used as the request body for the POST service call.
@Override
public void configure() throws Exception {
    restConfiguration().component("servlet").port(8080).host("localhost")
            .bindingMode(RestBindingMode.json);

    from("file:C://input?noop=true&scheduler=quartz2&scheduler.cron=0 0/1 * 1/1 * ? *")
        .process(new Processor() {
            public void process(Exchange exchange) {
                String fileName = exchange.getIn().getHeader("CamelFileName").toString();
                System.out.println("CamelFileName: " + fileName);
                FileDetails fileDetails = FileDetails.builder().build();
                fileDetails.setFileName(fileName);
                fileDetails.setFilePath(exchange.getIn().getBody());
            }
        })
        // Check if this file exists in the registry.
        // Question: Will the 'fileName' in the URL below be picked up from the process() method?
        .to("rest:get:getAPIDetails/fileName")
        .choice()
            // If the API returns true, call the POST endpoint with fileName & filePath as input params
            .when(header(Exchange.HTTP_RESPONSE_CODE).isEqualTo(constant(200)))
                // Question: Will 'fileDetails' in the URL below be passed as a request body with the desired values set in process()?
                // TODO: Move the file to the C:/output location after the POST call
                .to("rest:post:registerFile?type=fileDetails")
            .otherwise()
                .to("file:C://error");
}
Managed to resolve this use case with the approach below. Closing the loop. Thank you!
P.S.: There's more to this implementation; I just wanted to put across the approach.
@Override
public void configure() throws Exception {
    // Actively listen to the inbound folder for an incoming file
    from("file:C://input?noop=true&scheduler=quartz2&scheduler.cron=0 0/1 * 1/1 * ? *")
        .doTry()
            .process(new Processor() {
                public void process(Exchange exchange) throws Exception {
                    exchange.getIn().setHeader("fileName",
                            exchange.getIn().getHeader("CamelFileName").toString());
                }
            })
            // Call the GET endpoint with fileName as an input parameter
            .setHeader(Exchange.HTTP_METHOD, simple("GET"))
            .log("Consuming the GET service")
            .toD("http://localhost:8090/getAPIDetails?fileName=${header.fileName}")
            .choice()
                // If the API returns true, move the file to the processing folder
                .when(header(Exchange.HTTP_RESPONSE_CODE).isEqualTo(constant(200)))
                    .to("file:C:/output")
                .endChoice()
                // If the API's response code is other than 200, move the file to the error folder
                .otherwise()
                    .log("Moving the file to error folder")
                    .to("file:C:/error")
        .endDoTry()
        .doCatch(IOException.class)
            .log("Exception handled")
        .end();

    // Listen to the processing folder for file arrival after it gets moved in the above step
    from("file:C:/output")
        .doTry()
            .process(new FileDetailsProcessor())
            .marshal(jsonDataFormat)
            .setHeader(Exchange.HTTP_METHOD, simple("POST"))
            .setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
            .log("Consuming the POST service")
            // Call the REST endpoint with fileName & filePath as the request body, set in the FileDetailsProcessor class
            .to("http://localhost:8090/registerFile")
            .process(new MyProcessor())
        .endDoTry()
        .doCatch(Exception.class)
            .log("Exception handled")
        .end();
}
I am trying to set up a simple Camel route which reads from a SQLite table and prints the record (later it would be written to a file).
The flow I have setup is below
bindToRegistry("sqlConsumer", new SqliteConsumer());
bindToRegistry("sqliteDatasource", dataSource());
from("sql:select * from recordsheet_record_1 where col_1 = 'A5'?dataSource=#sqliteDatasource")
.to("bean:sqlConsumer?method=consume")
.end();
And the SqliteConsumer is as below:
public class SqliteConsumer {

    public void consume(Map<String, Object> data, Exchange exchange) {
        System.out.println("Map: '" + data + "'");
        // TODO: append to file
    }
}
When I execute the route, I expect it to run only once (print once), but it keeps printing. Am I doing anything wrong here?
I am new to the Camel framework, so any help or guidance would be much appreciated.
Thanks.
It is a polling consumer, so it polls the source according to its configuration. You can find more info here: https://camel.apache.org/components/latest/eips/polling-consumer.html
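If the goal is a single execution, one common pattern is to drive the SQL endpoint as a producer from a one-shot timer instead of letting the SQL consumer poll. A sketch, assuming the same registry names as in the question:

```java
// Sketch: the timer fires exactly once (repeatCount=1); the sql endpoint is then
// used as a producer, so there is no repeated polling.
from("timer://runOnce?repeatCount=1&delay=0")
    .to("sql:select * from recordsheet_record_1 where col_1 = 'A5'?dataSource=#sqliteDatasource")
    .to("bean:sqlConsumer?method=consume");
```

Note that when the sql endpoint is used as a producer, the whole result set arrives in one exchange as a List of Maps, so the consume method signature may need to accept a List rather than a single Map.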
I want to test the Camel route below. All the examples I find online have routes starting with a file endpoint, whereas in my case a Spring bean method is called every few minutes, and finally the message is transformed and moved to JMS as well as an audit directory.
I am clueless about how to write a test for this route.
All i have currently in my test case is
Mockito.when(tradeService.searchTransaction()).thenReturn(dataWithSingleTransaction);
from("quartz2://tsTimer?cron=0/20+*+8-18+?+*+MON,TUE,WED,THU,FRI+*")
    .bean(TradeService.class)
    .marshal()
    .jacksonxml(true)
    .to("jms:queue:out-test")
    .to("file:data/test/audit")
    .end();
Testing with Apache Camel and Spring Boot is really easy.
Just do the following (the example below is abstract, just to give you a hint of how you can do it):
Write a test class.
Use the Spring Boot annotations to configure the test class.
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE)
@RunWith(SpringRunner.class)
public class MyRouteTest {

    @EndpointInject(uri = "{{sourceEndpoint}}")
    private ProducerTemplate sourceEndpoint;

    ....

    @Test
    public void test() {
        // Send your body to the endpoint. See the other provided methods too.
        sourceEndpoint.sendBody([your input]);
    }
}
In src/test/resources/application.properties, configure your Camel endpoints, such as the source and the target:
sourceEndpoint=direct:myTestSource
Hints:
It's good not to hardwire your start endpoint directly in the route when using Spring Boot; read it from application.properties instead. That way it is easier to mock your endpoints for unit tests, because you can switch to the direct component without changing your source code.
This means instead of:
from("quartz2://tsTimer?cron=0/20+*+8-18+?+*+MON,TUE,WED,THU,FRI+*")
you should write:
from("{{sourceEndpoint}}")
and configure the sourceEndpoint in your application.properties:
sourceEndpoint=quartz2://tsTimer?cron=0/20+*+8-18+?+*+MON,TUE,WED,THU,FRI+*
That way you are also able to use your route for different situations.
Documentation
A good documentation about how to test with spring-boot can be found here: https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-testing.html
For Apache Camel: http://camel.apache.org/testing.html
@the hand of NOD Thanks for your hints, I was going in a completely wrong direction. After reading your answer I was able to write the basic test, and from this I think I can take it forward.
Appreciate your time. However, I see that based on my route it should drop an XML file in the audit directory, which is not happening.
It looks like intermediate steps are also getting mocked, without my specifying anything.
InterceptSendToMockEndpointStrategy - Adviced endpoint [xslt://trans.xslt] with mock endpoint [mock:xslt:trans.xslt]
INFO o.a.c.i.InterceptSendToMockEndpointStrategy - Adviced endpoint [file://test/data/audit/?fileName=%24%7Bheader.outFileName%7D] with mock endpoint [mock:file:test/data/audit/]
INFO o.a.camel.spring.SpringCamelContext - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
TradePublisherRoute.java
@Override
public void configure() throws Exception {
    logger.info("TradePublisherRoute.configure() : trade-publisher started configuring camel route.");

    from("{{trade-publisher.sourceEndpoint}}")
        .doTry()
            .bean(tradeService)
            .process(new Processor() {
                @Override
                public void process(Exchange exchange) throws Exception {
                    String dateStr = Constant.dateFormatForFileName.format(new Date());
                    logger.info("this is getting executed : " + dateStr);
                    exchange.setProperty(Constant.KEY_INCOMING_XML_FILE_NAME, "REQ-" + dateStr + Constant.AUDIT_FILE_EXTENSION);
                    exchange.setProperty(Constant.KEY_OUTGOING_XML_FILE_NAME, "RESP-" + dateStr + Constant.AUDIT_FILE_EXTENSION);
                }
            })
            .marshal()
            .jacksonxml(true)
            .wireTap("{{trade-publisher.requestAuditDir}}" + "${header.inFileName}")
            .to("{{trade-publisher.xsltFile}}")
            .to("{{trade-publisher.outboundQueue}}")
            .to("{{trade-publisher.responseAuditDir}}" + "${header.outFileName}")
            .bean(txnService, "markSuccess")
        .endDoTry()
        .doCatch(Exception.class)
            .bean(txnService, "markFailure")
            .log(LoggingLevel.ERROR, "EXCEPTION: ${exception.stacktrace}")
        .end();
}
TradePublisherRouteTest.java
@ActiveProfiles("test")
@RunWith(CamelSpringBootRunner.class)
@SpringBootTest(classes = TradePublisherApplication.class)
@MockEndpoints
public class TradePublisherRouteTest {

    @EndpointInject(uri = "{{trade-publisher.outboundQueue}}")
    private MockEndpoint mockQueue;

    @EndpointInject(uri = "{{trade-publisher.sourceEndpoint}}")
    private ProducerTemplate producerTemplate;

    @MockBean
    TradeService tradeService;

    private List<Transaction> transactions = new ArrayList<>();

    @BeforeClass
    public static void beforeClass() {
    }

    @Before
    public void before() throws Exception {
        Transaction txn = new Transaction("TEST001", "C001", "100", "JPM", new BigDecimal(100.50), new Date(), new Date(), 1000, "P");
        transactions.add(txn);
    }

    @Test
    public void testRouteConfiguration() throws Exception {
        Mockito.when(tradeService.searchTransaction()).thenReturn(new Data(transactions));
        // Set expectations before sending the message.
        mockQueue.expectedMessageCount(1);
        producerTemplate.sendBody(transactions);
        mockQueue.assertIsSatisfied(2000);
    }
}
Please correct me if I am doing something wrong!
We are having problems with our solrServer client's connection pool running out of connections in no time, even when using a pool of several hundred (we've tried 1024, just for good measure).
From what I've read, the following exception can be caused by not using a singleton HttpSolrServer object. However, see our XML config below, as well:
Caused by: org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
at org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(PoolingClientConnectionManager.java:232)
at org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(PoolingClientConnectionManager.java:199)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:455)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:448)
XML Config:
<solr:solr-server id="solrServer" url="http://solr.url.domain/"/>
<solr:repositories base-package="de.ourpackage.data.solr" multicore-support="true"/>
At this point, we are at a loss. We are running a web application on Tomcat 7. Whenever a user requests a new page, we send one or more requests to the Solr server, requesting whatever we need, usually single entries or a page of 20 (using Spring Data).
As for the rest of our implementation, we are using an abstract SolrOperationsRepository class, which is extended by each of our repositories (one repository per core).
The following is how we set our solrServer. I suspect we are doing something fundamentally wrong here, which is why our connections are overflowing. According to the logs, they are always being returned to the pool, btw.
private SolrOperations solrOperations;

@SuppressWarnings("unchecked")
public final Class<T> getEntityClass() {
    return (Class<T>) ((ParameterizedType) getClass().getGenericSuperclass()).getActualTypeArguments()[0];
}

public final SolrOperations getSolrOperations() {
    /*HttpSolrServer solrServer = (HttpSolrServer) solrOperations.getSolrServer();
    solrServer.getHttpClient().getConnectionManager().closeIdleConnections(500, TimeUnit.MILLISECONDS);*/
    logger.info("solrOperations: " + solrOperations);
    return solrOperations;
}

@Autowired
public final void setSolrServer(SolrServer solrServer) {
    try {
        String core = SolrServerUtils.resolveSolrCoreName(getEntityClass());
        SolrTemplate template = templateHolder.get(core);
        /*solrServer.setConnectionTimeout(500);
        solrServer.setMaxTotalConnections(2048);
        solrServer.setDefaultMaxConnectionsPerHost(2048);
        solrServer.getHttpClient().getConnectionManager().closeIdleConnections(500, TimeUnit.MILLISECONDS);*/
        if (template == null) {
            template = new SolrTemplate(new MulticoreSolrServerFactory(solrServer));
            template.setSolrCore(core);
            template.afterPropertiesSet();
            logger.debug("Creating new SolrTemplate for core '" + core + "'");
            templateHolder.put(core, template);
        }
        logger.debug("setting SolrServer " + template);
        this.solrOperations = template;
    } catch (Exception e) {
        logger.error("cannot set solrServer...", e);
    }
}
The code that is commented out has mostly been used for testing purposes. I also read somewhere else that you cannot manipulate the solrServer object on the fly. Which begs the question: how do I set a timeout/pool size in the XML config?
The implementation of a repository looks like this:
@Repository(value = "stellenanzeigenSolrRepository")
public class StellenanzeigenSolrRepositoryImpl extends SolrOperationsRepository<Stellenanzeige> implements StellenanzeigenSolrRepositoryCustom {
    ...
    public Query createQuery(Criteria criteria, Sort sort, Pageable pageable) {
        Query resultQuery = new SimpleQuery(criteria);
        if (pageable != null) resultQuery.setPageRequest(pageable);
        if (sort != null) resultQuery.addSort(sort);
        return resultQuery;
    }

    public Page<Stellenanzeige> findBySearchtext(String searchtext, Pageable pageable) {
        Criteria searchtextCriteria = createSearchtextCriteria(searchtext);
        Query query = createQuery(searchtextCriteria, null, pageable);
        return getSolrOperations().queryForPage(query, getEntityClass());
    }
    ...
}
Can any of you point to mistakes that we've made, that could possibly lead to this issue? Like I said, we are at a loss. Thanks in advance, and I will, of course update the question as we make progress or you request more information.
The MulticoreSolrServerFactory always returns an HttpClient object that only ever allows 2 concurrent connections to the same host, thus causing the above problem.
This seems to be a bug in spring-data-solr that can be worked around by creating a custom factory and overriding a few methods.
Edit: The clone method in MulticoreSolrServerFactory is broken. This hasn't been corrected yet. As some of my colleagues have run into this issue recently, I will post a workaround here: create your own class and override one method.
public class CustomMulticoreSolrServerFactory extends MulticoreSolrServerFactory {

    public CustomMulticoreSolrServerFactory(final SolrServer solrServer) {
        super(solrServer);
    }

    @Override
    protected SolrServer createServerForCore(final SolrServer reference, final String core) {
        // There is a bug in the original SolrServerUtils.cloneHttpSolrServer() method:
        // it doesn't clone the ConnectionManager and always returns the default
        // PoolingClientConnectionManager with a maximum of 2 connections per host.
        if (StringUtils.hasText(core) && reference instanceof HttpSolrServer) {
            HttpClient client = ((HttpSolrServer) reference).getHttpClient();
            String baseURL = ((HttpSolrServer) reference).getBaseURL();
            baseURL = SolrServerUtils.appendCoreToBaseUrl(baseURL, core);
            return new HttpSolrServer(baseURL, client);
        }
        return reference;
    }
}
I am using Apache HttpComponents in a bean inside of Camel to write a job that downloads Apple's metadata database files. This is a list of every song in iTunes, so obviously it is big: 3.5+ GB. I am trying to use Apache HttpComponents to make an asynchronous GET request. However, it seems that the size of the file being returned is too large.
try {
    httpclient.start();
    FileOutputStream fileOutputStream = new FileOutputStream(download);

    // Grab the archive.
    URIBuilder uriBuilder = new URIBuilder();
    uriBuilder.setScheme("https");
    uriBuilder.setHost("feeds.itunes.apple.com");
    uriBuilder.setPath("/feeds/epf-flat/v1/full/usa/" + iTunesDate + "/song-usa-" + iTunesDate + ".tbz");
    String endpoint = uriBuilder.build().toURL().toString();

    HttpGet getCall = new HttpGet(endpoint);
    String creds64 = new String(Base64.encodeBase64((user + ":" + password).getBytes()));
    log.debug("Auth: " + "Basic " + creds64);
    getCall.setHeader("Authorization", "Basic " + creds64);
    log.debug("About to download file from Apple: " + endpoint);

    Future<HttpResponse> future = httpclient.execute(getCall, null);
    HttpResponse response = future.get();
    fileOutputStream.write(EntityUtils.toByteArray(response.getEntity()));
    fileOutputStream.close();
Every time, it returns this:
java.util.concurrent.ExecutionException: org.apache.http.ContentTooLongException: Entity content is too long: 3776283429
at org.apache.http.concurrent.BasicFuture.getResult(BasicFuture.java:68)
at org.apache.http.concurrent.BasicFuture.get(BasicFuture.java:77)
at com.decibly.hive.songs.iTunesWrapper.getSongData(iTunesWrapper.java:89)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.camel.component.bean.MethodInfo.invoke(MethodInfo.java:407)
So, the size of the file in bytes is too big for a Java integer, which HttpComponents uses to track the response size. I get that; I'm wondering if there are any workarounds aside from dropping back a layer and calling the java.net libraries directly.
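The exception message makes the cause visible: the reported entity length does not fit in a signed 32-bit int, so any consumer that buffers the whole entity into a byte array must fail:

```java
class ContentLength {
    public static void main(String[] args) {
        long entityLength = 3776283429L; // taken from the exception message
        // A Java byte[] is indexed by int, so anything above Integer.MAX_VALUE
        // (2147483647 bytes, ~2 GB) cannot be held in a single array.
        System.out.println(entityLength > Integer.MAX_VALUE); // prints "true"
    }
}
```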
Use HttpAsyncClient, which is built on top of HttpComponents and supports zero-copy transfer.
See an example here: https://hc.apache.org/httpcomponents-asyncclient-4.1.x/httpasyncclient/examples/org/apache/http/examples/nio/client/ZeroCopyHttpExchange.java
Or simply, in your case
CloseableHttpAsyncClient httpclient = HttpAsyncClientBuilder.create()....

ZeroCopyConsumer<File> consumer = new ZeroCopyConsumer<File>(new File(download)) {

    @Override
    protected File process(
            final HttpResponse response,
            final File file,
            final ContentType contentType) throws Exception {
        if (response.getStatusLine().getStatusCode() != HttpStatus.SC_OK) {
            throw new ClientProtocolException("Connection to host failed: " + response.getStatusLine());
        }
        return file;
    }
};

httpclient.execute(HttpAsyncMethods.createGet(endpoint), consumer, null, null).get();
The body of the response is saved directly to a file. The only limitation is imposed by the file system.