Optimizing CXF Webservice

I have a CXF web service which processes requests containing base64 strings. Some of the requests take a long time, exceeding our requirements: I want the processing to complete within 3 seconds, but most requests are taking 12 seconds. When I trace the processing of the interceptors, the DocLiteralInInterceptor is consuming the most time. As per the documentation, this interceptor checks the SOAPAction and binds the message. I am using Aegis binding and tried to disable validation by setting schema-validation-enabled to false in the configuration, but there is no improvement. Is there any way to optimize the binding process?
Thanks in advance.

Managed to get the timing reduced from 12 s to 200 ms by processing in memory instead of on disk. This was done by raising CXF's default threshold for in-memory processing from 64 KB to 1 MB, as below:
<cxf:properties>
    <entry key="bus.io.CachedOutputStream.Threshold" value="1000000"/>
    <entry key="bus.io.CachedOutputStream.MaxSize" value="1000000000"/>
</cxf:properties>
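If you can't (or don't want to) touch the Spring bus configuration, recent CXF versions also read equivalent JVM system properties for the CachedOutputStream settings. This is a sketch; the exact property names (`org.apache.cxf.io.CachedOutputStream.*`) and the jar name are assumptions worth verifying against your CXF version:

```shell
# Assumed alternative: set the CachedOutputStream limits as system
# properties at JVM startup instead of via <cxf:properties>.
java -Dorg.apache.cxf.io.CachedOutputStream.Threshold=1000000 \
     -Dorg.apache.cxf.io.CachedOutputStream.MaxSize=1000000000 \
     -jar my-cxf-app.jar
```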

Related

Basic scaling instances of GAE don't shut down even when idle-timeout is far exceeded

I have configured a version of my default service on Google App Engine Standard (Java, though that shouldn't make any difference) to use basic scaling and run a single B2 instance:
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
    <application>${app.id}</application>
    <version>tasks</version>
    <threadsafe>true</threadsafe>
    <runtime>java8</runtime>
    <module>default</module>
    <instance-class>B2</instance-class>
    <basic-scaling>
        <idle-timeout>60s</idle-timeout>
        <max-instances>1</max-instances>
    </basic-scaling>
    <!-- Other stuff -->
</appengine-web-app>
Despite not getting any request for almost 28 minutes, the instance did not shut down by itself (I manually shut it down with appcfg.cmd stop_module_version...):
There are no background threads.
Why doesn't this instance shutdown? Two days prior the instance ran almost the whole day, idle, with this configuration... so what's the issue?
The definition of idle is that no new requests are received in x amount of time. What if the last request is taking 20 minutes to execute? Shouldn't idle be defined as the time since the last request finished?
I posted this question on ServerFault (since this is not a programming question) but was told that StackOverflow would be the better site...
GCP Support here:
I tried to reproduce the same behaviour, but using either 1m or 60s, instances would shut down after serving their last request.
Nonetheless, when I had any long-lasting requests, threads, and/or task queue tasks running for minutes, the instance wouldn't shut down until the request was finished. You can also find this information here for both manual/basic scaling:
Requests can run for up to 24 hours. A manually-scaled instance can
choose to handle /_ah/start and execute a program or script for many
hours without returning an HTTP response code. Task queue tasks can
run up to 24 hours.
In your case, it seems there was a request lasting for minutes before it finished, and thus the instance was active (rather than idle) throughout, until you manually stopped it. You may find the Instance life cycle documentation useful as well.
If you believe this behaviour isn't what you experienced back then, I'd suggest creating a private issue tracker so we can investigate further. Make sure to provide the project number and all required details in there (fresh samples). Once created, share the issue tracker number so we can look into it.

Timeout Solr search

I'm trying to timeout Solr if it takes too long to run a query.
What I've tried:
According to the Solr documentation there is a timeAllowed parameter which does exactly what I'm looking for. I've tried to:
add <int name="timeAllowed">50</int> to the requestHandler in solrconfig.xml;
add the timeAllowed parameter to the request URL:
localhost:8080/solr4/alfresco/select?q=*&timeAllowed=50
But it doesn't work (the time to load results exceeds the provided value).
Why I want to do this: we implemented access-rights validation on the Solr side, and now for vague queries like * it takes too much time for Solr to respond.
It would probably be possible to close such connections from Alfresco or from the web container, but then such queries would still run internally on Solr, slowing down the server.
timeAllowed is specified in milliseconds, so timeAllowed=50 means the request must complete within 50 ms.
Try timeAllowed=5000 instead.
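To make the solrconfig.xml approach take effect, the parameter normally needs to sit inside the handler's defaults list rather than directly under the requestHandler element. A sketch, assuming the standard /select handler:

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <!-- stop collecting results after 5000 ms; partial results may be returned -->
    <int name="timeAllowed">5000</int>
  </lst>
</requestHandler>
```

Note that timeAllowed only bounds certain phases of query execution (mainly result collection), so some slow stages can still push the total response time past the limit.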

Is there any limit on POST length for Solr?

I'm using Solr, but for some reason some of my docs are missing.
I want to know: is there any limit on HTTP POST size?
Or on the number of Solr docs per commit?
Thank you.
The POST limit actually lies more on the server side than on the Solr side. If you run Solr inside Tomcat, it's Tomcat's POST limit that usually matters; if your Tomcat sits behind Apache or nginx, their maximum POST sizes will matter as well.
As for POST itself, the HTTP spec doesn't impose any limit; most of the time it's the server that limits it.
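For the Tomcat case, the limit is the Connector's maxPostSize attribute (2 MB by default). A sketch of raising it in server.xml; the port and timeout values are illustrative:

```xml
<!-- server.xml: maxPostSize is in bytes; 10485760 = 10 MB, -1 disables the limit -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxPostSize="10485760" />
```

The corresponding knobs in front-end servers are client_max_body_size in nginx and LimitRequestBody in Apache httpd.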

Sencha too slow

I introduced a Sencha grid in one of my JSPs. Locally Sencha is quite fast, but on an external server it is too slow.
I followed deploy instructions here
http://docs.sencha.com/ext-js/4-0/#!/guide/getting_started
using ext-debug.js and my app.js.
Then, in my JSP, I imported app-all.js (670 KB) and ext.js.
Where am I going wrong?
Thanks
app-all.js is 670KB, which is a very big file. You should refactor, optimize and minify the code so it will be smaller. You could even separate into multiple files per class or implementation and do a dynamic js load (but that would take more time). A good target would be as small as ext.js's.
Also, if you have access to your webserver (e.g. Apache/Tomcat), you could turn on gzip compression to compress files before sending them to browsers. Also look out for other webserver optimizations.
(btw, your question sounds more like a webserver issue rather than a sencha related issue).
Another way to improve the load time of your application is making sure ext.js and app-all.js are cached by the browser. This way the first time your application loads it will be slow, but the following loads will be faster.
Look into Cache-Control, Expires, and other HTTP cache-controlling headers. Your server should generate these headers when sending the files you want cached.
The real problem, as it appears from the timeline, is the slow connection to the server (10 seconds to load 206/665 KB is slow for most connections), so you should check whether other server problems are causing the slowness.
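The compression and caching suggestions above can be sketched as an Apache httpd config fragment (assumes mod_deflate and mod_expires are enabled; the one-week lifetime is an illustrative choice):

```apache
# httpd.conf or .htaccess
# Compress JS/CSS/HTML on the fly before sending to the browser
AddOutputFilterByType DEFLATE application/javascript text/css text/html

# Let browsers cache static assets so repeat visits skip the download
ExpiresActive On
ExpiresByType application/javascript "access plus 1 week"
ExpiresByType text/css "access plus 1 week"
```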

What is the best way to Optimise my Apache2/PHP5/MySQL Server for HTTP File Sharing?

I was wondering what optimisations I could make to my server to improve its performance at handling file uploads/downloads.
At the moment I am thinking Apache2 may not be the best HTTP server for this?
Any suggestions or optimisations I could make on my server?
My current setup is an Apache2 HTTP server with PHP handling the file uploads, which are stored in a folder outside the web root and randomly assigned a name that is stored in a MySQL database (along with more file/user information).
When a user wants to download a file, I use the header() function to force the download and readfile() to output the file contents.
You are correct that this is inefficient, but it's not Apache's fault. Serving the files with PHP is going to be your bottleneck. You should look into X-Sendfile, which allows you to tell Apache (via a header inserted by PHP) what file to send (even if it's outside the DocRoot).
The increase in speed will be more pronounced with larger files and heavier loads. Of course an even better way to increase speed is by using a CDN, but that's overkill for most of us.
Using X-Sendfile with Apache/PHP
http://www.jasny.net/articles/how-i-php-x-sendfile/
As for increasing performance with uploads, I have no particular knowledge. In general however, I believe each file upload would "block" one of your Apache workers for a long time, meaning Apache has to spawn more worker processes for other requests. With enough workers spawned, a server can slow noticeably. You may look into Nginx, which is an event-based, rather than process-based, server. This may increase your throughput, but I admit I have never experimented with uploads under Nginx.
Note: Nginx uses the X-Accel-Redirect instead of X-Sendfile.
http://wiki.nginx.org/XSendfile
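A minimal sketch of the X-Sendfile approach described above (assumes mod_xsendfile is installed and XSendFile is enabled for the relevant directory; the path and filename are illustrative):

```php
<?php
// Look up the stored path for the requested file in MySQL first, then:
$path = '/var/uploads/3f9a1c.bin';   // random name, outside the web root

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="report.pdf"');
header('X-Sendfile: ' . $path);      // Apache streams the file itself

// No readfile() call: the PHP script ends immediately and Apache takes
// over the transfer, freeing the PHP process for the next request.
```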
