Good day, I'm working on a Servlet that must return a PDF file and the message log for the processing done with that file.
So far I'm passing a boolean which I evaluate and return either the log or the file, depending on the user selection, as follows:
// If the user checked "Download PDF"
if (isDownload) {
    byte[] oContent = pdfContent; // pdfContent: the PDF bytes produced earlier (placeholder name)
    response.setContentType("application/pdf");
    response.addHeader("Content-Disposition", "attachment; filename=test.pdf");
    out = response.getOutputStream();
    out.write(oContent);
    out.flush();
}
// If the user unchecked "Download PDF" and only wants to see the logs
else {
    System.out.println("idCompany: " + company);
    System.out.println("code: " + code);
    System.out.println("date: " + dateValid);
    System.out.println("account: " + acct);
    System.out.println("documentType: " + type);
    String result = readFile("/home/gianksp/Desktop/Documentos/Logs/log.txt");
    // Get the PrintWriter from the response and write the log text to it
    PrintWriter outl = response.getWriter();
    outl.print(result);
    outl.flush();
}
Is there an efficient way to return both items at the same time?
Thank you very much
The HTTP protocol doesn't allow you to send more than one HTTP response per HTTP request. With this restriction in mind, you can think of the following alternatives:
Let the client fire two HTTP requests, for example by specifying an onclick event handler, or, if you returned an HTML page in the first response, by firing another request on window.load or page.ready.
Provide your user with an opportunity to choose what he'd like to download and act in the servlet accordingly: if he chose PDF, return the PDF; if he chose text, return the text; and if he chose both, pack them into an archive and return that (see the sketch below).
Note that the first variant is both clumsy and not user friendly, and as far as I'm concerned it should be avoided at all costs. A page where the user controls what he gets is a much better alternative.
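For the "both" case, here is a minimal sketch of the archive approach (assuming the PDF bytes and the log text are already in memory; pdfBytes and logText are illustrative names):

import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Sketch: stream a ZIP containing both the PDF and the log in one response.
response.setContentType("application/zip");
response.addHeader("Content-Disposition", "attachment; filename=result.zip");
try (ZipOutputStream zip = new ZipOutputStream(response.getOutputStream())) {
    zip.putNextEntry(new ZipEntry("test.pdf"));
    zip.write(pdfBytes); // pdfBytes: the generated PDF content
    zip.closeEntry();
    zip.putNextEntry(new ZipEntry("log.txt"));
    zip.write(logText.getBytes(StandardCharsets.UTF_8)); // logText: the processing log
    zip.closeEntry();
}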
Alternatively, you could wrap both items in a DTO, or place them in the session and reference them from a JSP.
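For example, a minimal DTO along these lines (a sketch; the class and field names are illustrative) could carry both results so a JSP can render the log and offer the PDF:

// Hypothetical holder for both results of the processing step.
public class ProcessingResult implements java.io.Serializable {
    private final byte[] pdfContent;
    private final String log;

    public ProcessingResult(byte[] pdfContent, String log) {
        this.pdfContent = pdfContent;
        this.log = log;
    }

    public byte[] getPdfContent() { return pdfContent; }
    public String getLog() { return log; }
}

// In the servlet, before forwarding to the JSP:
// request.getSession().setAttribute("result", new ProcessingResult(pdfBytes, logText));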
I have an IdP and an SP set up using the ITfoxtec SAML2 libraries, and everything works great when not using artifact binding, or when not validating signatures. When using artifact binding and validating signatures, I'm getting a "Signature is invalid." exception in the ACS when trying to retrieve and bind the actual response/assertion.
It seems to unbind the artifact response fine, then when it goes to retrieve and unbind the artifact from the ArtifactResolutionService it fails, specifically on the last line of this block:
var soapEnvelope = new Saml2SoapEnvelope();
saml2AuthnResponse = new Saml2AuthnResponse(config);
await soapEnvelope.ResolveAsync(httpClient, saml2ArtifactResolve, saml2AuthnResponse);
I've checked that my signature validation certificate is correct and I've dug through the source code but am scratching my head. I've tried to validate the "saml2p:ArtifactResponse" myself but there isn't much out there.
If I put this line before the chunk above everything works as expected as it no longer validates the signature:
config.SignatureValidationCertificates.Clear();
One thing I noticed is that in the 'saml2p:ArtifactResponse' there is a signature inside that node, but not inside the contained 'saml2p:Response' node. Is it possible that the saml2p:Response is being isolated and then a signature check is performed on it alone? I tried to see whether the response/assertion was supposed to be signed in the artifact cache on the IdP side (artifactSaml2AuthnResponseCache), but it doesn't sign the response at all. I'm doing this before putting it in the cache, just like in the example and just like I do when using POST binding:
var token = saml2AuthnResponse.CreateSecurityToken(relyingParty.Issuer, subjectConfirmationLifetime: 5, issuedTokenLifetime: 60);
artifactSaml2AuthnResponseCache[saml2ArtifactResolve.Artifact] = saml2AuthnResponse;
EDIT: I have determined that the ArtifactResponse just isn't signed properly. Another tool claims the digest in the XML doesn't match the computed value. This is after stepping through the source and grabbing the XML that the code is trying to validate directly. I can see that the ArtifactResolve is being signed and validated properly (and I checked with the external tool) but the ArtifactResponse isn't. Even in the code it fails at the final validation of the signature (and not at any checks before it).
EDIT 2: Found the problem in the source. The .ToXmlDocument() extension is breaking the signed XML: re-serializing already-signed XML with formatting changes the whitespace inside the signed element, so the computed digest no longer matches the one in the signature. The final test was done by 'replacing' it in the spot with a new method that just returns the string directly with "envelope.ToString(SaveOptions.DisableFormatting)":
protected virtual XmlDocument ToSoapXml()
{
    var envelope = new XElement(Saml2Constants.SoapEnvironmentNamespaceX + Saml2Constants.Message.Envelope);
    envelope.Add(GetXContent());
    return envelope.ToXmlDocument();
}

protected string ToSoapXmlString()
{
    var envelope = new XElement(Saml2Constants.SoapEnvironmentNamespaceX + Saml2Constants.Message.Envelope);
    envelope.Add(GetXContent());
    return envelope.ToString(SaveOptions.DisableFormatting); // instead of .ToXmlDocument()
}
And directly save that to the SoapResponseXml of the Saml2SoapEnvelope:
protected override Saml2SoapEnvelope BindInternal(Saml2Request saml2Request, string messageName)
{
    if (!(saml2Request is Saml2ArtifactResponse))
        throw new ArgumentException("Only Saml2ArtifactResponse is supported");

    BindInternal(saml2Request);
    SoapResponseXml = ToSoapXmlString(); // was: ToSoapXml().OuterXml
    return this;
}
I would initiate a pull request for this change but honestly I'm not that up to speed with Git. I'm also not sure if this is the best way to fix the issue.
Thank you for your question and for the code to solve the problem. I'll look into it.
EDIT: I'm trying to reproduce the error but no luck. The sample is both an IdP and an RP; what have you changed to get the error?
For web browsers (such as Chrome, IE, or Firefox), when you select a file using a "Choose File" button, where does the file's data get stored?
The file name shows in the browser, but does the data of the file get stored anywhere or is just a link to the file put somewhere, such as in the browser or a temporary file?
To clarify: I want to know where the file's data gets stored before submitting, just after the file is selected on the client's PC and before anything else is done.
After you select a file, I believe the client (browser) just stores a reference to the file's location on the user's computer. It takes a combination of JS and HTML to post the file to the server, via a multipart/form-data POST.
In this case, on the server, you may have to store the file in a temp location of your choosing until you're able to process it (i.e. transform it and/or store it to a datastore).
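For instance, here is a minimal Java servlet sketch of receiving such a multipart POST and parking the file in a temp location (assuming the Servlet 3.0 multipart API; the "file" field name and class names are illustrative):

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;
import javax.servlet.ServletException;
import javax.servlet.annotation.MultipartConfig;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.Part;

// Sketch: accept a multipart/form-data POST and hold the upload in a temp file.
@WebServlet("/upload")
@MultipartConfig
public class UploadServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        Part filePart = request.getPart("file"); // "file" is the form field name
        File temp = File.createTempFile("upload-", ".tmp");
        // Keep the bytes here until they can be processed or moved to a datastore.
        Files.copy(filePart.getInputStream(), temp.toPath(), StandardCopyOption.REPLACE_EXISTING);
    }
}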
In newer browsers you can use the FormData object and XHR to post to the server, which is a lot cleaner.
The FormData object is used to construct the key/value pairs that form the data payload for the XHR request.
// Create a new FormData object and append the selected file to it.
var formData = new FormData();
formData.append('file', fileInput.files[0]); // fileInput: an <input type="file"> element
In this case, once the file bytes are posted to the server, you can do whatever you want with that data. Typically I'll store it as a blob in the DB.
This approach will allow you to keep it all in memory. People make the mistake of trying to store the upload on the server's file system, though with some multipart form posts you might have to do it that way.
Here's some of my Web API upload code for use with XHR.
I've also called this API route using an iframe (ugh!) in order to support IE8 and older. POS browsers!
/// <summary>
/// Upload the facility logo.
/// </summary>
/// <returns></returns>
[HttpPost]
[Route("logo")]
public HttpResponseMessage Logo()
{
    int newImageId = -1;
    var uploadedFiles = HttpContext.Current.Request.Files;
    if (uploadedFiles.Count > 0)
    {
        var file = uploadedFiles[0];
        if (!file.IsImage())
        {
            // "The uploaded file must be a .jpg, .jpeg, or .png"
            return Request.CreateResponse(HttpStatusCode.UnsupportedMediaType, "unsupported");
        }
        var facilityRepository = new FacilityRepository();
        var logoBytes = StreamCopier.StreamToByteArray(file.InputStream, file.ContentLength);
        newImageId = facilityRepository.InsertLogoImage(logoBytes);
    }
    return Request.CreateResponse(HttpStatusCode.OK, newImageId);
}
When uploading to GCS (Google Cloud Storage) using the BlobStore's createUploadURL function, I can provide a callback together with header data that will be POSTed to the callback URL.
There doesn't seem to be a way to do that with GCS's signed URLs.
I know there is Object Change Notification, but that won't allow the user to provide upload-specific information in the header of a POST, the way it is possible with createUploadURL's callback.
My feeling is, if createUploadURL can do it, there must be a way to do it with signed URLs, but I can't find any documentation on it. I was wondering if anyone knows how createUploadURL achieves that callback-calling behavior.
PS: I'm trying to move away from createUploadURL because of the __BlobInfo__ entities it creates, which for my specific use case I do not need, and which somehow seem to be indelible and are wasting storage space.
Update: It worked! Here is how:
Short Answer: It cannot be done with PUT, but can be done with POST
Long Answer:
If you look at the signed-URL page, in front of HTTP_Verb, under Description, there is a subtle note that that page is only relevant to GET, HEAD, PUT, and DELETE; POST is a completely different game. I had missed this, but it turned out to be very important.
There is a whole page of HTTP Headers that does not list an important header that can be used with POST; that header is success_action_redirect, as voscausa correctly answered.
In the POST page, Google "strongly recommends" using PUT unless dealing with form data. However, POST has a few nice features that PUT does not. Maybe they worry that POST gives us too many strings to hang ourselves with.
But I'd say it is totally worth dropping createUploadURL and writing your own code to redirect to a callback. Here is how:
Code:
If you are working in Python, voscausa's answer is very helpful.
I'm using apejs to write JavaScript in a Java app, so my code looks like this:
var exp = new Date();
exp.setTime(exp.getTime() + 1000 * 60 * 100); // 100 minutes

json['GoogleAccessId'] = String(appIdentity.getServiceAccountName());
json['key'] = keyGenerator();
json['bucket'] = bucket;
json['Expires'] = exp.toISOString();
json['success_action_redirect'] = "https://" + request.getServerName() + "/test2/";
json['uri'] = 'https://' + bucket + '.storage.googleapis.com/';

var policy = {
    'expiration': json.Expires,
    'conditions': [
        ["starts-with", "$key", json.key],
        {'Expires': json.Expires},
        {'bucket': json.bucket},
        {"success_action_redirect": json.success_action_redirect}
    ]
};

var plain = StringToBytes(JSON.stringify(policy));
json['policy'] = String(Base64.encodeBase64String(plain));

var result = appIdentity.signForApp(Base64.encodeBase64(plain, false));
json['signature'] = String(Base64.encodeBase64String(result.getSignature()));
The code above first fills in the relevant fields.
Then it creates a policy object, stringifies it, and converts it into a byte array (you can use .getBytes in Java; I had to write a function for JavaScript, included at the end).
A base64-encoded version of this array populates the policy field.
Then the policy is signed using the appIdentity package. Finally the signature is base64 encoded, and we are done.
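For reference, here is a rough Java equivalent of the same signing steps (a sketch under the same assumptions: App Engine's AppIdentityService and commons-codec's Base64; expires, key, bucket, and redirectUrl are placeholders for the values built above):

import java.nio.charset.StandardCharsets;
import com.google.appengine.api.appidentity.AppIdentityService;
import com.google.appengine.api.appidentity.AppIdentityServiceFactory;
import org.apache.commons.codec.binary.Base64;

// Sketch: build, base64-encode, and sign a POST policy document in Java.
AppIdentityService appIdentity = AppIdentityServiceFactory.getAppIdentityService();

String policyJson = "{\"expiration\": \"" + expires + "\", \"conditions\": ["
        + "[\"starts-with\", \"$key\", \"" + key + "\"],"
        + "{\"bucket\": \"" + bucket + "\"},"
        + "{\"success_action_redirect\": \"" + redirectUrl + "\"}]}";
// (a ["content-length-range", 0, 10485760] condition could also be added to cap the upload size)

byte[] plain = policyJson.getBytes(StandardCharsets.UTF_8);
String policy = Base64.encodeBase64String(plain); // goes into the "policy" form field

AppIdentityService.SigningResult result = appIdentity.signForApp(Base64.encodeBase64(plain, false));
String signature = Base64.encodeBase64String(result.getSignature()); // the "signature" form field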
On the client side, all members of the json object will be added to the form, except uri, which is the form's POST target.
var formData = new FormData(document.forms.namedItem('upload'));
var blob = new Blob([thedata], {type: 'application/json'});

var keys = ['GoogleAccessId', 'key', 'bucket', 'Expires', 'success_action_redirect', 'policy', 'signature'];
for (var field in keys)
    formData.append(keys[field], url[keys[field]]);
formData.append('file', blob);

var rest = new XMLHttpRequest();
rest.open('POST', url.uri);
rest.onload = callback_function;
rest.send(formData);
If you do not provide a redirect, the response status will be 204 on success; if you do redirect, the status will be 200. If you get a 403 or 400, something about the signature or policy may be wrong. Look at the responseText; it is often helpful.
A few things to note:
Both POST and PUT have a signature field, but they mean slightly different things. In the case of POST, it is a signature of the policy.
PUT has a base URL which contains the key (object name), but the URL used for POST may only include the bucket name.
PUT requires the expiration as seconds from the UNIX epoch, but POST wants it as an ISO string.
A PUT signature should be URL-encoded (in Java: by wrapping it with a URLEncoder.encode call). But for POST, Base64 encoding suffices.
By extension, for POST use Base64.encodeBase64String(result.getSignature()), and do not use the Base64.encodeBase64URLSafeString function.
You cannot pass extra headers with the POST; only those listed in the POST page are allowed.
If you provide a URL for success_action_redirect, it will receive a GET with the key, bucket, and eTag.
The other benefit of using POST is that you can provide size limits. With PUT, however, if a file breaches your size restriction, you can only delete it after it has been fully uploaded, even if it is multiple terabytes.
What is wrong with createUploadURL?
The method above is a manual createUploadURL.
But:
You don't get those __BlobInfo__ objects, which create many indexes and are indelible. This irritates me, as it wastes a lot of space (which reminds me of a separate issue: issue 4231; please go give it a star).
You can provide your own object name, which helps create folders in your bucket.
You can provide different expiration dates for each link.
For the very, very few JavaScript App Engine developers out there:

function StringToBytes(sz) {
    // Convert a string to an array of character codes.
    return sz.split('').map(function(x) { return x.charCodeAt(0); });
}
You can include success_action_redirect in a policy document when you use GCS POST object.
Docs here: https://cloud.google.com/storage/docs/xml-api/post-object
Python example here: https://github.com/voscausa/appengine-gcs-upload
Example callback result:
def ok(self):
    """ GCS upload success callback """
    logging.debug('GCS upload result : %s' % self.request.query_string)
    bucket = self.request.get('bucket', default_value='')
    key = self.request.get('key', default_value='')
    key_parts = key.rsplit('/', 1)
    folder = key_parts[0] if len(key_parts) > 1 else None
A solution I am using is to turn on Object Change Notifications. Any time an object is added, a POST is sent to a URL, in my case a servlet in my project.
In the doPost() I get all the info about the object added to GCS, and from there I can do whatever I need.
This worked great in my App Engine project.
I recently started using bottle and the GAE blobstore, and while I can upload files to the blobstore, I cannot seem to find a way to download them from the store.
I followed the examples from the documentation but was only successful on the uploading part. I cannot integrate the example into my app since I'm using a different framework from webapp/webapp2.
How would I go about creating an upload handler and a download handler, so that I can get the key of the uploaded blob, store it in my data model, and use it later in the download handler?
I tried using BlobInfo.all() to create a query on the blobstore, but I'm not able to get the key name field value of the entity.
This is my first interaction with the blobstore so I wouldn't mind advice on a better approach to the problem.
For serving a blob I would recommend you look at the source code of the BlobstoreDownloadHandler. It should be easy to port to bottle, since there's nothing very framework-specific about it.
Here is an example of how to use BlobInfo.all():
for info in blobstore.BlobInfo.all():
    self.response.out.write('Name:%s Key: %s Size:%s Creation:%s ContentType:%s<br>' %
                            (info.filename, info.key(), info.size, info.creation, info.content_type))
For downloads, you only really need to generate a response that includes the header "X-AppEngine-BlobKey: [your blob_key]" along with everything else you need, like a Content-Disposition header if desired. Or, if it's an image, you should probably just use the high-performance image serving API: generate a URL and redirect to it, done.
For uploads, besides writing a handler for App Engine to call once the upload is safely in the blobstore (that's in the docs), you need a way to find the blob info in the incoming request.
I have no idea what the request looks like in bottle. The BlobstoreUploadHandler has a get_uploads method, and there's really no reason it needs to be an instance method as far as I can tell. So here's an example generic implementation of it that expects a WebOb request. For bottle you would need to write something similar that is compatible with bottle's request object.
def get_uploads(request, field_name=None):
    """Get uploads for this request.

    Args:
        field_name: Only select uploads that were sent as a specific field.

    Returns:
        A list of BlobInfo records corresponding to each upload, or an
        empty list if there are no blob-info records for field_name.

    Stolen from the SDK since they only provide a way to get to this
    crap through their crappy webapp framework.
    """
    if not getattr(request, "__uploads", None):
        request.__uploads = {}
        for key, value in request.params.items():
            if isinstance(value, cgi.FieldStorage):
                if 'blob-key' in value.type_options:
                    request.__uploads.setdefault(key, []).append(
                        blobstore.parse_blob_info(value))

    if field_name:
        try:
            return list(request.__uploads[field_name])
        except KeyError:
            return []
    else:
        results = []
        for uploads in request.__uploads.itervalues():
            results += uploads
        return results
For anyone looking for this answer in the future: to do this you need bottle (d'oh!) and defnull's multipart module.
Since creating upload URLs is generally simple enough, as per the GAE docs, I'll just cover the upload handler.
from bottle import request
from multipart import parse_options_header
from google.appengine.ext.blobstore import BlobInfo

def get_blob_info(field_name):
    try:
        field = request.files[field_name]
    except KeyError:
        # Maybe the form isn't multipart or the file wasn't uploaded, or some such error
        return None
    blob_data = parse_options_header(field.content_type)[1]
    try:
        return BlobInfo.get(blob_data['blob-key'])
    except KeyError:
        # Malformed request? Wrong field name?
        return None
Sorry if there are any errors in the code, it's off the top of my head.
I am a newbie with GWT. I wrote an application on abc.com, and I have another application at xyz.com; xyz.com?id=1 provides data in JSON format. I am trying to find a way to get that JSON into abc.com via an RPC call, because I have seen tutorials in which RPC calls are used to get data from the app's own server. Any help will be appreciated.
EDIT
I am trying to implement this in the StockWatcher tutorial.
I changed my code slightly to this:
private static final String JSON_URL = "http://localhost/stockPrices.php?q=";
AND
private void refreshWatchList() {
    if (stocks.size() == 0) {
        return;
    }

    String url = JSON_URL;

    // Append watch list stock symbols to query URL.
    Iterator iter = stocks.iterator();
    while (iter.hasNext()) {
        url += iter.next();
        if (iter.hasNext()) {
            url += "+";
        }
    }

    url = URL.encode(url);

    MyJSONUtility.makeJSONRequest(url, new JSONHandler() {
        @Override
        public void handleJSON(JavaScriptObject obj) {
            if (obj == null) {
                displayError("Couldn't retrieve JSON");
                return;
            }
            updateTable(asArrayOfStockData(obj));
        }
    });
}
Before, when I was requesting my URL via RequestBuilder, it was giving me the exception "Couldn't retrieve JSON", but now the JSON is fetched and the status code is 200 (as I saw in Firebug), yet the table is not updating. Kindly help me with this.
First, you need to understand the Same Origin Policy, which explains how browsers implement a security model in which JavaScript code running on a web page may not interact with any resource not originating from the same web site.
While GWT's HTTP client and RPC calls can only fetch data from the same site your application was loaded from, you can get data from another server if it returns JSON in the right format: you must be interacting with a JSON service that can invoke user-defined callback functions with the JSON data as the argument (JSONP).
Second, see How to Fetch JSON DATA.
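GWT ships a JsonpRequestBuilder that wraps exactly that callback mechanism. Here is a minimal sketch (assuming the xyz.com service supports a JSONP callback parameter, and reusing the question's updateTable/asArrayOfStockData/displayError helpers):

import com.google.gwt.core.client.JavaScriptObject;
import com.google.gwt.jsonp.client.JsonpRequestBuilder;
import com.google.gwt.user.client.rpc.AsyncCallback;

// Sketch: fetch JSON from another origin via JSONP instead of RequestBuilder.
JsonpRequestBuilder builder = new JsonpRequestBuilder();
builder.requestObject("http://xyz.com/?id=1", new AsyncCallback<JavaScriptObject>() {
    @Override
    public void onFailure(Throwable caught) {
        displayError("Couldn't retrieve JSON");
    }

    @Override
    public void onSuccess(JavaScriptObject result) {
        // result is the parsed JSON payload; cast it to an overlay type and update the UI.
        updateTable(asArrayOfStockData(result));
    }
});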