RESTEasy by default adds GZIP compression and decompression through the GZIPDecodingInterceptor and GZIPEncodingInterceptor interceptors. I am trying to find out if it's possible to disable these interceptors.
For example, if I post a gzipped multipart, on the server side I see that the file is decompressed. I'd like to keep the file compressed and use it in my integration flows.
As per the documentation, scanning for @Provider classes can be enabled or disabled through the config below, but it did not work.
<context-param>
<param-name>resteasy.scan.providers</param-name>
<param-value>false</param-value>
</context-param>
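In case it clarifies what I'm after: the kind of explicit registration I have in mind would look roughly like the sketch below, using the standard javax.ws.rs.core.Application API (MyMultipartResource is just a placeholder for my own resource class). Whether RESTEasy would still auto-register its built-in GZIP interceptors alongside this is exactly the part I'm unsure about.

import java.util.HashSet;
import java.util.Set;
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

// Register only the classes listed here; the GZIP interceptors are
// deliberately left out.
@ApplicationPath("/api")
public class MyRestApplication extends Application {

    @Override
    public Set<Class<?>> getClasses() {
        Set<Class<?>> classes = new HashSet<>();
        classes.add(MyMultipartResource.class); // placeholder resource class
        return classes;
    }
}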
Any answers are appreciated.
I have installed Varnish with Apache2, set it up using the Apache HTTP proxy module, and used headers so that data fetched over HTTP is served over HTTPS through the reverse proxy.
ProxyPreserveHost On
ProxyPass / http://127.0.0.1:80/
ProxyPassReverse / http://127.0.0.1:80/
RequestHeader set X-Forwarded-Port "443"
RequestHeader set X-Forwarded-Proto "https"
But the issue I am facing with this setup is a browser error: content loading from HTTP on an HTTPS page has been blocked.
Mixed Content: The page at '' was loaded over HTTPS, but
requested an insecure stylesheet ''. This request has been
blocked; the content must be served over HTTPS.
Please help me understand where I am going wrong and how I can make this work.
Thank you in advance.
There's not a whole lot of context about the setup and the configuration, but based on the information you provided I'm going to assume you're using Apache to first terminate the TLS connection and then forward that traffic to Varnish.
I'm also assuming Apache is configured as the backend in Varnish, listening on a port like 8080, whereas Varnish is on port 80 and the HTTPS Apache vhost is on 443.
Vary header
The one thing that might be missing in your setup is a cache variation based on the X-Forwarded-Proto header.
I would advise you to set that cache variation using the following configuration:
Header append Vary X-Forwarded-Proto
This uses mod_headers and can either be set in your .htaccess file or your vhost configuration.
It should allow Varnish to be aware of the variations based on the Vary: X-Forwarded-Proto header and store a version for HTTP and one for HTTPS.
This will prevent HTTP content being stored when HTTPS content is requested and vice versa.
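To make the moving parts explicit, here is a sketch of how the two vhosts could fit together; the backend port 8080 is an assumption (as mentioned above) and mod_headers must be enabled:

# Frontend vhost: terminates TLS and forwards to Varnish on port 80
<VirtualHost *:443>
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:80/
    ProxyPassReverse / http://127.0.0.1:80/
    RequestHeader set X-Forwarded-Port "443"
    RequestHeader set X-Forwarded-Proto "https"
</VirtualHost>

# Backend vhost: serves the application to Varnish and emits the Vary header,
# so Varnish keeps separate cache objects for the HTTP and HTTPS variants
<VirtualHost *:8080>
    Header append Vary X-Forwarded-Proto
    # ... DocumentRoot and the rest of the application configuration ...
</VirtualHost>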
A good way to simulate the issue
If you want to make sure the issue behaves as I'm expecting it to, please perform a test using the following steps:
Clear your cache through sudo varnishadm ban obj.status "!=" 0
Run varnishlog -g request -q "ReqURL eq '/'" to filter logs for the homepage
Call the HTTP version of the homepage and ensure it's stored in the cache
Capture the log output for this transaction and store it somewhere
Call that same page over HTTPS and check whether or not the mixed content errors occur
Capture the log output for this transaction and store it somewhere
Then fix the issue through the Vary: X-Forwarded-Proto header and try the test case again.
In case of problems, just add the 2 log transactions to your question (1 for the miss, 1 for the hit) and I'll examine them for you.
I am working on a Maven project in IntelliJ. I have generated Java from a WSDL using cxf-codegen-plugin. I have created a client and a Tester.java to test the client. I have to log the SOAP request and response. I have a cxf.xml, a config.properties and a client.java file. I am not sure where to configure the logging of the SOAP messages. I also don't know much about web services. I have also copied log4j.xml to my META-INF.
I have tried all the possible scenarios from Stack Overflow. Not sure what is going wrong.
Assuming you have the latest version of CXF (or a fairly recent one), the easiest way is to enable the logging feature on the CXF bus in cxf.xml:
...
<cxf:bus>
<cxf:features>
<cxf:logging/>
</cxf:features>
</cxf:bus>
...
or only on your jaxws endpoint:
<jaxws:endpoint...>
<jaxws:features>
<bean class="org.apache.cxf.feature.LoggingFeature"/>
</jaxws:features>
</jaxws:endpoint>
Make sure you have cxf-rt-features-logging-XXX.jar on your classpath (XXX = your version of CXF).
And configure logging as described here:
http://cxf.apache.org/docs/general-cxf-logging.html
You need to be at the INFO level at least.
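If you would rather wire this up in code (for instance directly in your Tester.java) than rely on cxf.xml being picked up, a sketch using the classic logging interceptors looks like this; port stands for the proxy returned by your generated service class:

import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.interceptor.LoggingInInterceptor;
import org.apache.cxf.interceptor.LoggingOutInterceptor;

// Unwrap the CXF client behind the JAX-WS proxy and log both directions
Client client = ClientProxy.getClient(port);
client.getOutInterceptors().add(new LoggingOutInterceptor()); // outgoing SOAP request
client.getInInterceptors().add(new LoggingInInterceptor());   // incoming SOAP response

These interceptors are the older API; on recent CXF versions the logging feature from cxf-rt-features-logging shown above is the preferred approach.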
I am trying to turn off chunking for my Java web application. We use Apache CXF and are hitting a SOAP service.
I changed the conduit setting in the config file:
<http-conf:conduit
name="*.http-conduit">
<http-conf:client
ConnectionTimeout="2000"
ReceiveTimeout="60000"
MaxRetransmits="2"
ChunkingThreshold="700000"
AllowChunking="false"/>
</http-conf:conduit>
I also tried setting it manually:
HTTPConduit http = (HTTPConduit)client.getConduit();
HTTPClientPolicy httpClientPolicy = new HTTPClientPolicy();
httpClientPolicy.setConnectionTimeout(2000);
httpClientPolicy.setReceiveTimeout(60000);
httpClientPolicy.setMaxRetransmits(2);
httpClientPolicy.setAllowChunking(false);
http.setClient(httpClientPolicy);
Neither of them works, and for any request above 4k I see Transfer-Encoding: chunked.
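For reference, this is the manual version in full as I have it (a sketch; port is the JAX-WS proxy generated by CXF, and the policy is applied before the first call):

import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.transport.http.HTTPConduit;
import org.apache.cxf.transports.http.configuration.HTTPClientPolicy;

// Unwrap the CXF client behind the JAX-WS proxy and replace its HTTP policy
Client client = ClientProxy.getClient(port);
HTTPConduit http = (HTTPConduit) client.getConduit();

HTTPClientPolicy httpClientPolicy = new HTTPClientPolicy();
httpClientPolicy.setConnectionTimeout(2000);
httpClientPolicy.setReceiveTimeout(60000);
httpClientPolicy.setMaxRetransmits(2);
httpClientPolicy.setAllowChunking(false); // intended to disable chunked transfer encoding
http.setClient(httpClientPolicy);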
Another side question: Is there a different setting for HTTPS and HTTP in cxf?
Here is the scenario:
I'm serving the index document for an App Engine-backed AngularJS application from a GAE service via <welcome-file-list>.
Requests for https://<project>.appspot.com/ serve static/index.html via
<welcome-file-list>
<welcome-file>static/index.html</welcome-file>
<welcome-file>index.html</welcome-file>
</welcome-file-list>
This index.html file contains a list of minimized/uglified/combined static resources with hashes in the name for cache busting purposes. All of the included assets work great, but the index.html file is cached for 600 seconds per the default caching rules on appengine.
I'd like to set no-cache headers for this file but it doesn't seem to honor expiration values configured in appengine-web.xml via <static-files>.
I've tried this
<static-files>
<!-- also without leading slash, same result -->
<include path="/static/index.html" expiration="30s" />
</static-files>
According to the documentation {1} and a similar SO question {2}, I suggest you write something like this (check the expiration time pattern):
<static-files>
<!-- also without leading slash, same result -->
<include path="/static/index.html" expiration="0d 0h 0m 30s" />
</static-files>
{1}: https://cloud.google.com/appengine/docs/standard/java/config/appref#static_cache_expiration
{2}: unable to set cache expiration on in app.yaml for a python app
I want to know how I can crawl PDF files that are served on the internet using Nutch 1.0 over the http protocol.
I am able to do it on local file systems using the file:// protocol, but not over http.
Add this property to the nutch-site.xml file and then you will be able to crawl the PDF files:
<property>
<name>plugin.includes</name>
<value>protocol-httpclient|urlfilter-regex|parse-(html|text|pdf)|index-(basic|anchor)|query-(basic|site|url)|response-(json|xml)|summary-basic|scoring-opic|urlnormalizer-(pass|regex|basic)</value>
<description>Plugins to include: protocol-httpclient to fetch over HTTP and parse-(html|text|pdf) to parse PDF documents.</description>
</property>
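One extra setting worth checking when PDFs are involved (this is an addition on my part, not part of the property above): Nutch truncates downloads at http.content.limit, which defaults to 64 KB, so larger PDFs can be cut off before parse-pdf ever sees them. A sketch of lifting the limit in nutch-site.xml:

<property>
<name>http.content.limit</name>
<!-- default is 65536 bytes; -1 disables truncation so large PDFs are fetched completely -->
<value>-1</value>
<description>The length limit for downloaded content, in bytes; -1 means no limit.</description>
</property>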