I obtained an upload URL programmatically from my deployed Google App Engine app, but I keep getting the response below. What parameter am I missing?
Bad content type. Please use multipart.
2016-08-16 09:25:34 ERROR PayitWebClientApp:207 - 400 Bad Request
Bad content type. Please use multipart.
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1054)
at com.vanitysoft.payit.PayitInvokeAuthApp.generateUploadUr(PayitInvokeAuthApp.java:198)
at com.vanitysoft.payit.PayitInvokeAuthApp.main(PayitInvokeAuthApp.java:217)
This is the code:
MultipartContent content = new MultipartContent().setMediaType(
        new HttpMediaType("multipart/form-data")
                .setParameter("boundary", "__END_OF_PART__"));
FileContent fileContent = new FileContent("multipart/related", file);
MultipartContent.Part part = new MultipartContent.Part(fileContent);
part.setHeaders(new HttpHeaders().set(
        "Content-Disposition",
        String.format("form-data; name=\"content\"; filename=\"%s\"", file.getName())));
content.addPart(part);
httpRequest = httpRequestFactory.buildPostRequest(new GenericUrl(uploadItem.getUploadUrl()), fileContent);
LOGGER.info("upload...[" + uploadItem.getUploadUrl() +"]");
httpResponse = httpRequest.execute();
LOGGER.info("Upload complete");
Assert.isTrue(httpResponse.getStatusCode() == 200, IOUtils.toString(httpResponse.getContent()));
LOGGER.info( "Success[" + httpResponse.getContent() +"]" );
You are missing enctype="multipart/form-data" on the <form> tag.
Check this answer: Why is form enctype=multipart/form-data required when uploading a file?
Oops, typo: I needed to use content (the MultipartContent), not fileContent:
httpRequest = httpRequestFactory.buildPostRequest(new GenericUrl(uploadItem.getUploadUrl()) ,content);
Related
I've built a React frontend along with a Rails API only backend. I want to allow the user to create a task and enter a title, description and upload an image.
So I've attempted to use DropZone to get access to the image and then send the image info along with the title and description to my Rails API via a post request using Axios.
I set up Carrierwave on my Rails API in hopes of uploading to an AWS S3 bucket once my Task has been added to the database per the post request.
None of this is working, so my question is: should I handle the image upload to AWS on the React side? If so, how do I associate that image with the additional information (title and description) I'm saving to my Rails database?
Thanks!
First, on the React side, there should be no problem with the title and description, but for the image you need to encode it to a Base64 string. It is something like this:
getBase64 = (fileToLoad, callback) => {
  const fileReader = new FileReader();
  // Pass the Base64 data URL to the callback once the file has been read
  fileReader.onload = () => {
    callback(fileReader.result);
  };
  fileReader.onerror = (error) => {
    console.log('Error :', error);
  };
  fileReader.readAsDataURL(fileToLoad);
}
Then, with Axios, send those three parameters together in one POST request.
For Rails, you need to set up code that can read the Base64 string. Usually, you can use the Paperclip or CarrierWave gem to add the image attachment. It will look something like this:
property_image = listing.property_images.new(param_image)
if param_image[:file_data]
  image_file = Paperclip.io_adapters.for(param_image[:file_data])
  image_file.original_filename = param_image[:image_file_name]
  image_file.content_type = "image/png"
  property_image.image = image_file
end

private

def param_image
  params.permit(:image, :image_file_name, :file_data)
end
I have a Web API from which I want to return a file along with the file details.
HttpResponseMessage result = Request.CreateResponse(HttpStatusCode.OK);
var file = new FileStream(filePath, FileMode.Open, FileAccess.Read);
result.Content = new StreamContent(file);
result.Headers.Add("filename", "MyFile");
return result;
Within Angular I do the following:
$http.get(url).then(function(response) {
    console.log(response.headers());
});
The response.headers() does not contain my header record.
What am I missing?
I've looked at other examples online and they do it the same way.
I needed to add the following line and change the header name to x-filename:
result.Headers.Add("Access-Control-Expose-Headers", "x-filename");
I am trying to create a signed URL and use it to upload files from my PC to Google Cloud Storage.
I am using Advanced REST Client (ARC) as the client-side application. On the server side, I have a Jersey-based server running on App Engine.
I first send a GET request from ARC, on receiving which App Engine generates a signed URL and returns it in the response.
After that I send a PUT request with the file I want to upload in the body, and the request URL set to the one received in the response to the GET.
The code snippet to create signed URL:
String encodedUrl = null;
String contentMD5 = "";
String contentType = "";
String httpVerb = "PUT";

Calendar calendar = Calendar.getInstance();
calendar.add(Calendar.MINUTE, 10);
long expiration = calendar.getTimeInMillis() / 1000L;

String canonicalizedResource = "/" + bucketName + "/" + objectName;
String baseURL = "https://storage.googleapis.com" + canonicalizedResource;
String stringToSign =
        httpVerb + "\n" + contentMD5 + "\n" + contentType + "\n" + expiration + "\n"
                + canonicalizedResource;

AppIdentityService service = AppIdentityServiceFactory.getAppIdentityService();
String googleAccessId = service.getServiceAccountName();
SigningResult signingResult = service.signForApp(stringToSign.getBytes());

String encodedSignature = null;
try {
    encodedSignature =
            new String(Base64.encodeBase64(signingResult.getSignature(), false), "UTF-8");
} catch (UnsupportedEncodingException e) {
    throw new InternalServerErrorException();
}

String signature = null;
try {
    signature = URLEncoder.encode(encodedSignature, "UTF-8").toString();
} catch (UnsupportedEncodingException e) {
    throw new InternalServerErrorException();
}

encodedUrl =
        baseURL + "?GoogleAccessId=" + googleAccessId + "&Expires=" + expiration
                + "&Signature=" + signature;
System.out.println("Signed URL is: " + encodedUrl);
However I observe the following issue:
Whenever I send the PUT request with any file type, I get the following error:
Error - 403
Code - SignatureDoesNotMatch
Message - The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method
Please note that in my code, I am setting the content type to "" while creating the string to sign. Also, while creating the PUT request I don't include any Content-Type header.
As far as I understand, if I don't include the contentType in the stringToSign while creating the signed URL and also not add it as a header while sending PUT request it should be fine. So what could be the reason for the error?
After that I changed my code to add the contentType while creating the stringToSign, and also set the corresponding Content-Type header while sending the PUT request.
In this case I am able to upload the file, however the uploaded file is modified/corrupted. I tried with text/plain and image/jpeg.
The problem is that the following text is added at the beginning of the file:
------WebKitFormBoundaryZX8rPPhnm1WXPrUf
Content-Disposition: form-data; name="fileUpload5"; filename="blob"
Content-Type: text/plain
I can see this in the text file, and in the .jpg file when I open it in a hex editor. The .jpg does not open in a standard image application since the file has been corrupted by the text at the beginning.
Am I missing something here? Is this any issue in the Advanced REST Client?
Actually, whenever I send a PUT request with some file in the body, I get a message in ARC saying that:
The content-type header will be finally changed to multipart/form-data while sending the request
However, I exported all the messages from ARC to a file and I didn't find any message with the Content-Type header set to multipart/form-data.
So why does this message come and is it actually an issue?
URL-signing code is tricky and notoriously hard to debug. Fortunately, Google's google-cloud library has a signUrl function that takes care of this for you. I highly encourage you to use it instead of rewriting it yourself. Here's the documentation.
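For illustration, a minimal sketch of what that can look like with the google-cloud-java client; the bucket and object names are placeholders, and the exact option names may vary between library versions:
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.HttpMethod;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.Storage.SignUrlOption;
import com.google.cloud.storage.StorageOptions;
import java.net.URL;
import java.util.concurrent.TimeUnit;

public class SignedUrlSketch {
    public static void main(String[] args) {
        // Uses the environment's default credentials (e.g. the App Engine service account)
        Storage storage = StorageOptions.getDefaultInstance().getService();

        // "my-bucket" and "my-object" are placeholder names
        BlobInfo blobInfo = BlobInfo.newBuilder("my-bucket", "my-object").build();

        // Sign a URL that allows a PUT upload for the next 10 minutes
        URL signedUrl = storage.signUrl(blobInfo, 10, TimeUnit.MINUTES,
                SignUrlOption.httpMethod(HttpMethod.PUT));

        System.out.println("Signed URL is: " + signedUrl);
    }
}
Depending on the environment you may have to supply signing credentials explicitly, but the point is that the library builds and signs the string-to-sign for you.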
Now, if you want to debug it yourself, checking the error message is super useful. It will include a complete copy of the string the server checked the signature of. Print out your stringToSign variable and see how it's different. That'll tell you what's wrong.
Now, on to your specific problem: it sounds like you are generating an acceptable signed URL, but then your client is attempting to upload to GCS as if it were doing a multipart form upload. The text you're looking at is part of an HTTP multipart request, and the "multipart/form-data" warning also points in that direction. See if the app you're using has some sort of "Form" mode/option that you are perhaps accidentally using?
On the server side, I am generating a pre-signed URL request with the following parameters for GeneratePresignedUrlRequest: bucket, key, expiration = in 1 hour, and method = PUT.
In my Angular app, I am uploading the file using ng-file-upload
Upload.http({
    url: $scope.signedUrl,
    method: "PUT",
    headers: {
        'Content-Type': $scope.file.type
    },
    data: $scope.file
});
The problem is that I always have a 403 response unless I set the type of the file in GeneratePresignedUrlRequest.contentType.
The problem is that I can't predict in advance what type of file the user will choose (image/png, image/jpeg, text/plain...).
How can I generate a pre-signed URL that accepts any content type? I tried setting it to null, but it keeps returning 403 errors.
Thanks.
I just ran into this problem, and just got it working. Replace your Upload.http code with the following:
var reader = new FileReader();
var xhr = new XMLHttpRequest();
xhr.open("PUT", $scope.signedUrl);
reader.onload = function(evt) {
    xhr.send(evt.target.result);
};
reader.readAsArrayBuffer($scope.file);
The problem ends up being that S3 is looking for a specific Content-Type (binary/octet-stream), which it infers when you omit the Content-Type header.
The value from the Content-Type header is a mandatory component of the signature. It isn't possible to pre-sign a PUT URL without knowing the value that will be sent.
A POST upload is more flexible, since you can allow any Content-Type in the signed policy.
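If you do stick with a pre-signed PUT, a minimal server-side sketch with the AWS SDK for Java (v1) could look like this; it assumes the client first tells the server which MIME type it will send (the fileType parameter below is that hypothetical value), and the bucket and key are placeholders:
import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import java.net.URL;
import java.util.Date;

public class PresignedPutSketch {
    public static URL presignPut(String bucket, String key, String fileType) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Expire the URL in one hour
        Date expiration = new Date(System.currentTimeMillis() + 60 * 60 * 1000L);

        GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest(bucket, key)
                .withMethod(HttpMethod.PUT)
                .withExpiration(expiration);
        // The Content-Type the client will send has to be part of the signature
        request.setContentType(fileType);

        return s3.generatePresignedUrl(request);
    }
}
The client then has to send exactly that Content-Type value in its PUT request, which is what the Upload.http call in the question already does.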
One possible solution might be to keep track of the file extension.
E.g. a name ending with ".jpg" -> content type = "image/jpeg", ending with ".zip" -> content type = "application/octet-stream".
Ref: get the filename of a fileupload in a document through javascript
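If you go the extension route, a small hypothetical helper like the following could pick the content type before the pre-signed request is built (the mappings shown are just examples):
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public class ContentTypeGuesser {
    private static final Map<String, String> TYPES = new HashMap<>();
    static {
        TYPES.put("jpg", "image/jpeg");
        TYPES.put("jpeg", "image/jpeg");
        TYPES.put("png", "image/png");
        TYPES.put("txt", "text/plain");
        TYPES.put("zip", "application/octet-stream");
    }

    // Falls back to a generic type when the extension is unknown
    public static String guess(String fileName) {
        int dot = fileName.lastIndexOf('.');
        String ext = dot >= 0 ? fileName.substring(dot + 1).toLowerCase(Locale.ROOT) : "";
        return TYPES.getOrDefault(ext, "application/octet-stream");
    }
}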
I was wondering if anything has changed recently in relation to uploading blobs to App Engine from external applications. What used to work perfectly only 3 months ago now hangs when doing an HTTP POST to upload the blob.
The code (see below), which was working fine previously, consists of fetching a task from an App Engine pull queue (using the REST API), doing some work with the task received, and then uploading the result back to App Engine as a blob. The URL to upload the blob to is created by App Engine using blobstoreService.createUploadUrl("/upload");
and is of the form:
http://myapp.appspot.com/_ah/upload/AMmfu6aAHnkuS4ngyRJDn7urFFZeBxb_-3P-r7RY9udMvRjLWkEZNJMgUX1DFczNVi-NhIxcFat2AEPXs2IRJ0AOmznSMgcrCKmL7mGAmS7nqtr-UyYFkglD88BwCfzIui9M2yez7DSQ/ALBNUaYAAAAAUGRlEwpeGEc5ozp8Z8sDO33qgCi2AiIE/
I had a look at the logs on AppEngine and it seems like the servlet in charge of /upload isn't being triggered.
I'm honestly out of ideas at this stage; any help would be greatly appreciated! :-)
Cheers,
Killian
public boolean uploadAsBlob(String dataToWrite, String uploadURL) {
    try {
        // Write the data to a temporary file first
        BufferedWriter bufferedWriter = new BufferedWriter(new FileWriter(tempFileLocation));
        bufferedWriter.write(dataToWrite);
        bufferedWriter.newLine();
        bufferedWriter.close();

        // Build a multipart request containing the file and POST it to the upload URL
        MultipartEntity entity = new MultipartEntity();
        entity.addPart(blobFileName, new FileBody(new File(tempFileLocation)));
        HttpPost method = new HttpPost(uploadURL);
        method.setEntity(entity);

        final HttpParams httpParams = new BasicHttpParams();
        HttpConnectionParams.setConnectionTimeout(httpParams, 10000);
        DefaultHttpClient httpclient = new DefaultHttpClient(httpParams);

        // It hangs at the following line!
        HttpResponse response = httpclient.execute(method);
        if (response.getStatusLine().getStatusCode() == 200) {
            logger.info("Uploaded blob to url: " + uploadURL);
            return true;
        } else {
            logger.warning("Couldn't upload blob to url: " + uploadURL);
        }
    } catch (Exception e) {
        logger.warning("Exception " + e.getMessage() + " occurred while uploading blob to url:" + uploadURL);
        logger.warning("Couldn't upload blob to url: " + uploadURL);
    }
    return false;
}
I have found that GAE has recently started to keep any GET parameters of the incoming request when invoking blobstoreService.createUploadUrl(). In my case, the request URL was:
http://www.myapp.com/BG?_=1354631578951
With this (unexpected) parameter, the created URL was:
http://www.myapp.com/_ah/upload/?_=1354631578951/AMmfu6YgVPoJzWXdbf70k6J0zdjEeRnnRJ2PYCb3Jgdwk3SqmKEnFyKgy_17CKwiqbC2HyO-FlPVX-C53W0LjHSywaq7YmLegD97uU-GrpWRdBdWbfKf0Dk/ALBNUaYAAAAAUL4L8iDS5E99f3Wky2p59wWpCD84AqoP/
Notice that the '_' parameter is still there. Removing the parameter (or maybe moving from GET to POST) fixed the problem.