Flink 1.6 Async IO - How to increase throughput when enriching a stream, using a REST service call? - apache-flink

I am currently on Flink version 1.6 and am facing an issue with AsyncIO wherein the performance is not up to my expectation.
I am sure I am doing something wrong in my implementation, so any advice/suggestions would be appreciated.
Issue Synopsis -
I am consuming a stream of ids.
For each id, I need to call a REST service.
I've implemented a RichAsyncFunction, which performs the async REST call.
Here's the relevant setup code and the asyncInvoke method:
// these are initialized in the open method
ExecutorService executorService = Executors.newFixedThreadPool(n);
CloseableHttpAsyncClient client = ...
Gson gson = ...
public void asyncInvoke(String key, final ResultFuture<Item> resultFuture) throws Exception {
    executorService.submit(new Runnable() {
        @Override
        public void run() {
            client.execute(new HttpGet("http://myservice/" + key), new FutureCallback<HttpResponse>() {
                @Override
                public void completed(final HttpResponse response) {
                    System.out.println("completed successfully");
                    try {
                        Item item = gson.fromJson(EntityUtils.toString(response.getEntity()), Item.class);
                        resultFuture.complete(Collections.singleton(item));
                    } catch (IOException e) {
                        resultFuture.completeExceptionally(e);
                    }
                }
                // failed(...) and cancelled() callbacks omitted here for brevity
            });
        }
    });
}
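For reference, this is roughly how such a function would be attached to the stream (a sketch; the stream variable, function name, timeout, and capacity values are assumptions, since the wiring is not shown above). The last argument of AsyncDataStream.unorderedWait caps how many requests can be in flight at once.
DataStream<Item> enriched = AsyncDataStream.unorderedWait(
        ids,                          // the DataStream<String> of keys being enriched
        new AsyncRestEnricher(),      // the RichAsyncFunction shown above (name assumed)
        30, TimeUnit.SECONDS,         // timeout per async request
        1000);                        // capacity: maximum number of in-flight requests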
With the above implementation, I've tried :-
Increasing the parallelism of the enrichment operation
Increasing the number of threads in the executor service
Using the Apache HTTP async client, I've tried tweaking the connection manager settings - setDefaultMaxPerRoute and setMaxTotal (roughly as sketched below).
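A sketch of that kind of connection-manager tuning (the pool sizes below are placeholders, not the values actually used):
PoolingNHttpClientConnectionManager connManager =
        new PoolingNHttpClientConnectionManager(
                new DefaultConnectingIOReactor(IOReactorConfig.DEFAULT));
connManager.setMaxTotal(500);             // total connections across all routes
connManager.setDefaultMaxPerRoute(500);   // connections per target host
CloseableHttpAsyncClient client = HttpAsyncClients.custom()
        .setConnectionManager(connManager)
        .build();
client.start();  // the async client must be started before execute() is called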
I am consistently getting a throughput of about 100 requests/sec. The service is able to handle more than 5k per sec.
What am I doing wrong, and how can I improve this ?

Related

Quarkus server-side http-cache

I am trying to find out how to configure a server-side REST client (i.e. microservice A calls another microservice B using REST) to use an HTTP cache.
The background is that the binary entities transferred over the wire can be quite large. Overall performance can benefit from a cache on microservice A's side which honours the HTTP caching headers and ETags provided by microservice B.
I found a solution that seems to work, but I'm not sure whether it is a proper solution that also works with the concurrent requests that can occur on microservice A at any time.
@Inject
/* package private */ ManagedExecutor executor;
//
// Instead of using a declarative rest client we create it ourselves, because we can then supply a server-side cache: See ctor()
//
private ServiceBApi serviceClientB;
@ConfigProperty(name="serviceB.url")
/* package private */ String serviceBUrl;
@ConfigProperty(name="cache-entries")
/* package private */ int cacheEntries;
@ConfigProperty(name="cache-entrysize")
/* package private */ int cacheEntrySize;
@PostConstruct
public void ctor()
{
// Create proxy ourselves, because we can then supply a server-side cache
final CacheConfig cc = CacheConfig.custom()
.setMaxCacheEntries(cacheEntries)
.setMaxObjectSize(cacheEntrySize)
.build();
final CloseableHttpClient httpClient = CachingHttpClientBuilder.create()
.setCacheConfig(cc)
.build();
final ResteasyClient client = new ResteasyClientBuilderImpl()
.httpEngine(new ApacheHttpClient43Engine(httpClient))
.executorService(executor)
.build();
final ResteasyWebTarget target = (ResteasyWebTarget) client.target(serviceBUrl);
this.serviceClientB = target.proxy(ServiceBApi.class);
}
@Override
public byte[] getDoc(final String id)
{
try (final Response response = serviceClientB.getDoc(id)) {
[...]
// Use normally; no need to handle conditional GETs, caching headers, or other HTTP protocol details here, because the underlying implementation does that.
[...]
}
}
My questions are:
Is my solution ok as server-side solution, i.e. can it handle concurrent requests?
Is there a declarative (Quarkus) way (@RegisterRestClient etc.) to achieve the same?
--
Edit
To make things clear: I want service B to be able to control the caching based on the HTTP GET request and the specific resource. Additionally, I want to avoid the unnecessary transmission of the large documents service B provides.
--
Mik
Assuming that you have worked with the declarative way of using Quarkus' REST Client before, you would just inject the client into your serviceB-consuming class. The method that will invoke service B should be annotated with @CacheResult. This will cache results depending on the incoming id. See also the Quarkus Cache Guide.
Please note: As Quarkus and Vert.x are all about non-blocking operations, you should use the async support of the REST Client.
@Inject
@RestClient
ServiceBApi serviceB;
...
@Override
@CacheResult(cacheName = "service-b-cache")
public Uni<byte[]> getDoc(final String id) {
return serviceB.getDoc(id).map(...);
}
...
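If the cached documents are large, the default Caffeine cache behind @CacheResult can also be bounded via configuration (a sketch; the values are placeholders, and the cache name must match the one used in the annotation):
quarkus.cache.caffeine."service-b-cache".maximum-size=100
quarkus.cache.caffeine."service-b-cache".expire-after-write=PT10M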

Side input of size around 50Mb causing long GC pause

We are running a Beam application on a Flink cluster with side inputs of about 50 MB in size.
The side input is refreshed (pulled from an external data source) based on notifications sent to a notification topic in Kafka.
As the application progresses, full GCs happen frequently because of the side input, and each GC takes ~30 seconds, which prevents the task manager from sending heartbeats to the master.
After consecutive heartbeat misses, the master assumes the worker is dead and starts reassigning the jobs, which results in the application restarting.
When we tried removing the side input, the application worked fine.
Questions:
Is there any limitation on the size of a side input in Apache Beam?
I have created the side input map using asSingleton(). Is it going to create a separate copy for each task? I have set the parallelism to 15; is it going to create 15 copies in one JVM (assuming all tasks are assigned to the same worker)?
What is the alternative to side inputs?
This is a sample pipeline:
public class BeamApplication {
public static final CloseableHttpClient httpClient = HttpClients.createDefault();
public static void main(String[] args) {
PipelineOptions options = PipelineOptionsFactory.create();
options.as(FlinkPipelineOptions.class).setRunner(FlinkRunner.class);
Pipeline pipeline = Pipeline.create(options);
PCollection<Map<String, Double>> sideInput = pipeline
.apply(KafkaIO.<String, String>read().withBootstrapServers("localhost:9092")
.withKeyDeserializer(StringDeserializer.class).withValueDeserializer(StringDeserializer.class)
.withTopic("testing"))
.apply(ParDo.of(new DoFn<KafkaRecord<String, String>, Map<String, Double>>() {
@ProcessElement
public void processElement(ProcessContext processContext) {
KafkaRecord<String, String> record = processContext.element();
String message = record.getKV().getValue().split("##")[0];
String change = record.getKV().getValue().split("##")[1];
if (message.equals("START_REST")) {
Map<String, Double> map = new HashMap<>();
Map<String,Double> changeMap = new HashMap<>();
HttpGet request = new HttpGet("http://localhost:8080/config-service/currency");
try (CloseableHttpResponse response = httpClient.execute(request)) {
HttpEntity entity = response.getEntity();
String responseString = EntityUtils.toString(entity, "UTF-8");
ObjectMapper objectMapper = new ObjectMapper();
CurrencyDTO jsonObject = objectMapper.readValue(responseString, CurrencyDTO.class);
map.putAll(jsonObject.getQuotes());
System.out.println(change);
Random rand = new Random();
Double db = rand.nextDouble();
System.out.println(db);
changeMap.put(change,db);
entity.getContent();
} catch (Exception e) {
e.printStackTrace();
}
processContext.output(changeMap);
}
}
}));
PCollection<Map<String, Double>> currency = sideInput
.apply(Window.<Map<String, Double>>into(new GlobalWindows())
.triggering(Repeatedly.forever(AfterPane.elementCountAtLeast(1)))
.withAllowedLateness(Duration.ZERO).discardingFiredPanes());
PCollectionView<Map<String, Double>> sideInputView = currency.apply(View.asSingleton());
PCollection<KafkaRecord<Long, String>> kafkaEvents = pipeline
.apply(KafkaIO.<Long, String>read().withBootstrapServers("localhost:9092")
.withKeyDeserializer(LongDeserializer.class).withValueDeserializer(StringDeserializer.class)
.withTopic("event_testing"));
PCollection<String> output = kafkaEvents
.apply("Extract lines", ParDo.of(new DoFn<KafkaRecord<Long, String>, String>() {
@ProcessElement
public void processElement(ProcessContext processContext) {
String element = processContext.element().getKV().getValue();
Map<String, Double> map = processContext.sideInput(sideInputView);
System.out.println("This is it : " + map.entrySet());
}
}).withSideInputs(sideInputView));
pipeline.run().waitUntilFinish();
}
}
What state-backend are you using?
If I'm not mistaken, side inputs are implemented as state in Flink. If you're using the MemoryStateBackend as your state backend, you might indeed put pressure on your memory consumption.
Also, the processing of events will block until that side input is ready, buffering events. If preparing the side input takes a long time or the rate of incoming events is high, you might reach memory pressure.
Can you try an alternative state backend? Preferably the RocksDBStateBackend; it holds in-flight data in a RocksDB database instead of in memory.
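For example, on a plain Flink job the backend can be switched programmatically (a sketch; the checkpoint URI is a placeholder and the flink-statebackend-rocksdb dependency is assumed). When running Beam on the Flink runner, the usual equivalent is setting state.backend: rocksdb in flink-conf.yaml.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// keep working state in RocksDB on local disk instead of on the JVM heap;
// the second argument enables incremental checkpoints
env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints", true));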
It's difficult to guess what the issue is. I would recommend monitoring memory-related metrics - see a good post on that here.
You could also run profiling on the Task Managers and analyse the dumps - see here.
Does the memory also increase if you only publish the first message to the "testing" topic?
Maybe to isolate the problem, use a simpler side input. Remove the HTTP call and make the data static. Maybe use a periodically triggered source instead of Kafka:
GenerateSequence.from(0).withRate(1, Duration.standardSeconds(5L))
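A sketch of such a simplified side input, reusing the windowing and trigger setup from the pipeline above (the static map content is a placeholder for the HTTP response):
PCollectionView<Map<String, Double>> testSideInputView = pipeline
    .apply(GenerateSequence.from(0).withRate(1, Duration.standardSeconds(5L)))
    .apply(MapElements.via(new SimpleFunction<Long, Map<String, Double>>() {
        @Override
        public Map<String, Double> apply(Long tick) {
            return Collections.singletonMap("USDEUR", 0.9); // static test data instead of the HTTP call
        }
    }))
    .apply(Window.<Map<String, Double>>into(new GlobalWindows())
        .triggering(Repeatedly.forever(AfterPane.elementCountAtLeast(1)))
        .withAllowedLateness(Duration.ZERO).discardingFiredPanes())
    .apply(View.asSingleton());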

Pass byte array from WPF to WebApi

tl;dr What is the best way to pass binary data (up to 1MBish) from a WPF application to a WebAPI service method?
I'm currently trying to pass binary data from a WPF application to a WebAPI web service, with variable results. Small files (< 100k) generally work fine, but anything larger and the odds of success drop.
A standard OpenFileDialog and then File.ReadAllBytes pass the byte[] parameter into the client method in WPF. This always succeeds, and I then post the data to WebAPI via a PostAsync call with a ByteArrayContent parameter.
Is this the correct way to do this? I started off with a PostJSONAsync call and passed the byte[] into that, but thought ByteArrayContent seemed more appropriate; however, neither works reliably.
Client Method in WPF
public static async Task<bool> UploadFirmwareMCU(int productTestId, byte[] mcuFirmware)
{
string url = string.Format("productTest/{0}/mcuFirmware", productTestId);
ByteArrayContent bytesContent = new ByteArrayContent(mcuFirmware);
HttpResponseMessage response = await GetClient().PostAsync(url, bytesContent);
....
}
WebAPI Method
[HttpPost]
[Route("api/productTest/{productTestId}/mcuFirmware")]
public async Task<bool> UploadMcuFirmware(int productTestId)
{
bool result = false;
try
{
Byte[] mcuFirmwareBytes = await Request.Content.ReadAsByteArrayAsync();
....
}
Web Config Settings
AFAIK these limits in web.config should be sufficient to allow 1MB files through to the service?
<security>
<requestFiltering>
<requestLimits maxAllowedContentLength="1073741824" />
</requestFiltering>
</security>
<httpRuntime targetFramework="4.5" maxRequestLength="2097152"/>
I receive errors in WebAPI when calling ReadAsByteArrayAsync(). These vary, possibly due to the app pool in IIS Express having crashed or got into a bad state, but they include the following (none of which has led to any promising leads via Google):
Specified argument was out of the range of valid values. Parameter name: offset
at System.Web.HttpInputStream.Seek(Int64 offset, SeekOrigin origin)\r\n
at System.Web.HttpInputStream.set_Position(Int64 value)\r\n at System.Web.Http.WebHost.SeekableBufferedRequestStream.SwapToSeekableStream()\r\n at System.Web.Http.WebHost.Seek
OR
Message = "An error occurred while communicating with the remote host. The error code is 0x800703E5."
InnerException = {"Overlapped I/O operation is in progress. (Exception from HRESULT: 0x800703E5)"}
at System.Web.Hosting.IIS7WorkerRequest.RaiseCommunicationError(Int32 result, Boolean throwOnDisconnect)\r\n
at System.Web.Hosting.IIS7WorkerRequest.ReadEntityCoreSync(Byte[] buffer, Int32 offset, Int32 size)\r\n
at System.Web.Hosting.IIS7WorkerRequ...
Initially I thought this was most likely down to IIS Express limitations (running on Windows 7 on my dev pc) but we've had the same issues on a staging server running Server 2012.
Any advice on how I might get this working would be great, or even just a basic example of uploading files to WebAPI from WPF would be great, as most of the code I've found out there relates to uploading files from multipart forms web pages.
Many thanks in advance for any help.
tl;dr It was a separate part of our code in the WebApi service that was causing it to go wrong, duh!
Ah, well, this is embarrassing.
It turns out our problem was down to a Request Logger class we'd registered in WebApiConfig.Register(HttpConfiguration config), and that I'd forgotten about.
It was reading the request content asynchronously as StringContent and then attempting to log it to the database in an nvarchar(max) field. This by itself is probably OK, but I'm guessing all the weird problems started occurring when the LoggingHandler, as well as the main WebApi controller, were both trying to read the request content asynchronously.
Removing the LoggingHandler fixed the problem immediately, and we're now able to upload files of up to 100MB without any problems. To fix it more permanently, I guess a rewrite of the LoggingHandler is required to set a limit on the maximum content size it tries to log, or to ignore certain content types.
It's doubtful, but I hope this may be of use for someone one day!
public class LoggingHandler : DelegatingHandler
{
protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
{
LogRequest(request);
return base.SendAsync(request, cancellationToken).ContinueWith(task =>
{
var response = task.Result;
// ToDo: Decide if/when we need to log responses
// LogResponse(response);
return response;
}, cancellationToken);
}
private void LogRequest(HttpRequestMessage request)
{
(request.Content ?? new StringContent("")).ReadAsStringAsync().ContinueWith(x =>
{
try
{
var callerId = CallerId(request);
var callerName = CallerName(request);
// Log request
LogEntry logEntry = new LogEntry
{
TimeStamp = DateTime.Now,
HttpVerb = request.Method.ToString(),
Uri = request.RequestUri.ToString(),
CorrelationId = request.GetCorrelationId(),
CallerId = callerId,
CallerName = callerName,
Controller = ControllerName(request),
Header = request.Headers.ToString(),
Body = x.Result
};
...........

Timeout waiting for connection from pool - despite single SolrServer

We are having problems with our solrServer client's connection pool running out of connections in no time, even when using a pool of several hundred (we've tried 1024, just for good measure).
From what I've read, the following exception can be caused by not using a singleton HttpSolrServer object. However, see our XML config below, as well:
Caused by: org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
at org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(PoolingClientConnectionManager.java:232)
at org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(PoolingClientConnectionManager.java:199)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:455)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:448)
XML Config:
<solr:solr-server id="solrServer" url="http://solr.url.domain/"/>
<solr:repositories base-package="de.ourpackage.data.solr" multicore-support="true"/>
At this point, we are at a loss. We are running a web application on Tomcat 7. Whenever a user requests a new page, we send one or more requests to the Solr server, requesting whatever we need, which is usually single entries or a page of 20 (using Spring Data).
As for the rest of our implementation, we are using an abstract SolrOperationsRepository class, which is extended by each of our repositories (one repository for each core).
The following is how we set our solrServer. I suspect we are doing something fundamentally wrong here, which is why our connections are overflowing. According to the logs, they are always being returned to the pool, by the way.
private SolrOperations solrOperations;
@SuppressWarnings("unchecked")
public final Class<T> getEntityClass() {
return (Class<T>)((ParameterizedType)getClass().getGenericSuperclass()).getActualTypeArguments()[0];
}
public final SolrOperations getSolrOperations() {
/*HttpSolrServer solrServer = (HttpSolrServer)solrOperations.getSolrServer();
solrServer.getHttpClient().getConnectionManager().closeIdleConnections(500, TimeUnit.MILLISECONDS);*/
logger.info("solrOperations: " + solrOperations);
return solrOperations;
}
@Autowired
public final void setSolrServer(SolrServer solrServer) {
try {
String core = SolrServerUtils.resolveSolrCoreName(getEntityClass());
SolrTemplate template = templateHolder.get(core);
/*solrServer.setConnectionTimeout(500);
solrServer.setMaxTotalConnections(2048);
solrServer.setDefaultMaxConnectionsPerHost(2048);
solrServer.getHttpClient().getConnectionManager().closeIdleConnections(500, TimeUnit.MILLISECONDS);*/
if ( template == null ) {
template = new SolrTemplate(new MulticoreSolrServerFactory(solrServer));
template.setSolrCore(core);
template.afterPropertiesSet();
logger.debug("Creating new SolrTemplate for core '" + core + "'");
templateHolder.put(core, template);
}
logger.debug("setting SolrServer " + template);
this.solrOperations = template;
} catch (Exception e) {
logger.error("cannot set solrServer...", e);
}
}
The code that is commented out has mostly been used for testing purposes. I also read somewhere else that you cannot manipulate the solrServer object on the fly. Which raises the question: how do I set a timeout/pool size in the XML config?
The implementation of a repository looks like this:
@Repository(value="stellenanzeigenSolrRepository")
public class StellenanzeigenSolrRepositoryImpl extends SolrOperationsRepository<Stellenanzeige> implements StellenanzeigenSolrRepositoryCustom {
...
public Query createQuery(Criteria criteria, Sort sort, Pageable pageable) {
Query resultQuery = new SimpleQuery(criteria);
if ( pageable != null ) resultQuery.setPageRequest(pageable);
if ( sort != null ) resultQuery.addSort(sort);
return resultQuery;
}
public Page<Stellenanzeige> findBySearchtext(String searchtext, Pageable pageable) {
Criteria searchtextCriteria = createSearchtextCriteria(searchtext);
Query query = createQuery(searchtextCriteria, null, pageable);
return getSolrOperations().queryForPage(query, getEntityClass());
}
...
}
Can any of you point to mistakes that we've made, that could possibly lead to this issue? Like I said, we are at a loss. Thanks in advance, and I will, of course update the question as we make progress or you request more information.
The MulticoreSolrServerFactory always returns an HttpClient object that only ever allows 2 concurrent connections to the same host, thus causing the above problem.
This seems to be a bug with spring-data-solr that can be worked around by creating a custom factory and overriding a few methods.
Edit: The clone method in MultiCoreSolrServerFactory is broken. This hasn't been corrected yet. As some of my colleagues have run into this issue recently, I will post a workaround here - create your own class and override one method.
public class CustomMulticoreSolrServerFactory extends MulticoreSolrServerFactory {
public CustomMulticoreSolrServerFactory(final SolrServer solrServer) {
super(solrServer);
}
@Override
protected SolrServer createServerForCore(final SolrServer reference, final String core) {
// There is a bug in the original SolrServerUtils.cloneHttpSolrServer() method:
// it doesn't clone the ConnectionManager and always returns the default
// PoolingClientConnectionManager with a maximum of 2 connections per host
if (StringUtils.hasText(core) && reference instanceof HttpSolrServer) {
HttpClient client = ((HttpSolrServer) reference).getHttpClient();
String baseURL = ((HttpSolrServer) reference).getBaseURL();
baseURL = SolrServerUtils.appendCoreToBaseUrl(baseURL, core);
return new HttpSolrServer(baseURL, client);
}
return reference;
}
}
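The custom factory is then used in place of the original one when the template is built, e.g. in the setSolrServer(...) method from the question (a sketch, reusing the variables from that method):
SolrTemplate template = new SolrTemplate(new CustomMulticoreSolrServerFactory(solrServer));
template.setSolrCore(core);
template.afterPropertiesSet();
templateHolder.put(core, template);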

What happens if an application calls more than 10 asynchronous URL Fetch on Google App Engine?

Reading the Google App Engine documentation on asynchronous URL Fetch:
The app can have up to 10 simultaneous asynchronous URL Fetch calls
What happens if an application calls more than 10 async fetch at a time?
Does Google App Engine raise an exception or simply queue the remaining calls, waiting to serve them?
Umm, Swizec is incorrect. Easy enough to test:
rpc = []
for i in range(1, 20):
    rpc.append(urlfetch.create_rpc())
    urlfetch.make_fetch_call(rpc[-1], "http://stackoverflow.com/questions/3639855/what-happens-if-i-call-more-than-10-asynchronous-url-fetch")
for r in rpc:
    response = r.get_result().status_code
This does not return any exceptions. In fact, this works just fine! Note that your results may vary for non-billable applications.
What Swizec is reporting is a different problem, related to maximum simultaneous connections INTO your application. For billable apps there is no practical limit here btw, it just scales out (subject to the 1000ms rule).
GAE has no way of knowing that your request handler will issue a blocking URL fetch, so the connection 500s he is seeing are not related to what his app is actually doing (that's an oversimplification, btw; if your average request response time is > 1000ms, your likelihood of 500s increases).
This is an old question, but I believe the accepted answer is incorrect or outdated and may confuse people. It's been a couple of months since I actually tested this, but in my experience Swizec is quite right that GAE will not queue, but rather fail, most asynchronous URL fetches exceeding the limit of around 10 simultaneous ones per request.
See https://developers.google.com/appengine/docs/python/urlfetch/#Python_Making_requests and https://groups.google.com/forum/#!topic/google-appengine/EoYTmnDvg8U for a description of the limit.
David Underhill has come up with a URL Fetch Manager for Python, which queues asynchronous URL fetches that exceed the limit in application code.
I have implemented something similar for Java, which synchronously blocks (due to the lack of a callback function or ListenableFutures) additional requests:
/**
* A URLFetchService wrapper that ensures that only 10 simultaneous asynchronous fetch requests are scheduled. If the
* limit is reached, the fetchAsync operations will block until another request completes.
*/
public class BlockingURLFetchService implements URLFetchService {
private final static int MAX_SIMULTANEOUS_ASYNC_REQUESTS = 10;
private final URLFetchService urlFetchService = URLFetchServiceFactory.getURLFetchService();
private final Queue<Future<HTTPResponse>> activeFetches = new LinkedList<>();
@Override
public HTTPResponse fetch(URL url) throws IOException {
return urlFetchService.fetch(url);
}
@Override
public HTTPResponse fetch(HTTPRequest request) throws IOException {
return urlFetchService.fetch(request);
}
@Override
public Future<HTTPResponse> fetchAsync(URL url) {
block();
Future<HTTPResponse> future = urlFetchService.fetchAsync(url);
activeFetches.add(future);
return future;
}
@Override
public Future<HTTPResponse> fetchAsync(HTTPRequest request) {
block();
Future<HTTPResponse> future = urlFetchService.fetchAsync(request);
activeFetches.add(future);
return future;
}
private void block() {
while (activeFetches.size() >= MAX_SIMULTANEOUS_ASYNC_REQUESTS) {
// Max. simultaneous async requests reached; wait for one to complete
Iterator<Future<HTTPResponse>> it = activeFetches.iterator();
while (it.hasNext()) {
if (it.next().isDone()) {
it.remove();
break;
}
}
}
}
}
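A usage sketch (the URL collection is hypothetical): the wrapper is simply used wherever the raw URLFetchService would be, so at most 10 fetches are ever scheduled at once.
URLFetchService fetchService = new BlockingURLFetchService();
List<Future<HTTPResponse>> responses = new ArrayList<>();
for (URL url : urls) {
    responses.add(fetchService.fetchAsync(url)); // blocks once 10 fetches are in flight
}
for (Future<HTTPResponse> f : responses) {
    int status = f.get().getResponseCode(); // get() may throw ExecutionException/InterruptedException; handling omitted
}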
500 errors start happening. Silently.
You only find out about these when you look at your log under all requests (they don't get listed as errors). It simply says "The request was aborted because you reached your simultaneous requests limit".
So when you're making lots of asynchronous calls, make sure you can handle some of them spazzing out.
See if this answers your question:
http://groups.google.com/group/google-appengine/browse_thread/thread/1286139a70ef83c5?fwc=1
